arxiv/05070982_6195_461c_a159_ac118928bee6.md
Spatiotemporal Weather Data Predictions with Shortcut Recurrent-Convolutional Networks: A Solution for the Weather4cast challenge

Jussi Leinonen, Federal Office of Meteorology and Climatology MeteoSwiss, Locarno-Monti, Switzerland

###### Keywords: weather, satellite data, neural networks, gated recurrent units

Training and validation data were provided for regions R1-R3, constituting the "Core" competition. Meanwhile, R4-R6 only had test data available, meaning that they had to be evaluated using models trained on R1-R3; this was called the "Transfer Learning" competition. Furthermore, all regions had a set of "held-out" data which were made available only during the final week of the competition; the final results were based on the performance with these data.

The performance of the models was evaluated using the mean-square error (MSE) for each variable. However, adjustments were made to the MSE to account for the particular needs of each variable, except for _crr_intensity_. First, the loss for _temperature_ was modified to account for varying amounts of missing data in each region. Second, _asii_turb_trop_prob_ is a probabilistic variable, and the output of the model was passed through a truncated and normalized logit transform before the evaluation of the MSE. Third, although _cma_ is technically evaluated using the MSE, the variable in the output data file is required to be quantized such that the value is either \(0\) or \(1\); therefore, model output values \(<0.5\) are rounded to \(0\) and outputs \(\geq 0.5\) are rounded to \(1\) before evaluation. The details of the metrics can be found in [6].

## 3 Solution

### Models

The model presented here is a neural network combining recurrent-convolutional layers and shortcut connections in an encoder-forecaster architecture. The architecture is presented in Fig. 1. It is based on that developed in [7] for precipitation nowcasting and adopted by [8], and is also similar to that of [9], with some differences that are described below.

The encoder section consists of four recurrent downsampling stages. Each stage first passes the sequence through a residual block [10], with each frame processed using the same convolutional filters; a strided convolution in the residual block downsamples the input by a factor of \(2\). Then, the sequence is processed by a gated recurrent unit (GRU) layer [11]; a tensor of zeros is passed as the initial state of the GRU. The number of channels in the convolutions is increased with increasing depth in the encoder.

The forecaster section is approximately a mirror image of the encoder section. Each stage consists of a GRU layer which is followed by bilinear upsampling and a residual block. A shortcut similar to U-Net [12] is utilized: the final state of each GRU in the encoder is passed through a convolution and then used as the initial state of the GRU of corresponding depth in the forecaster. This allows the high-resolution features of the recent frames to be passed through, preventing the first predictions from being blurry. A final projection and a sigmoid activation produce the output as a single variable constrained between \(0\) and \(1\).

The main difference between the architecture presented here and that of [7] is that the use of Trajectory GRU (TrajGRU) is rejected, as TrajGRU was found to cause training instability. Two variants are considered instead. The first utilizes the Convolutional GRU (ConvGRU) layer adopted by e.g. [9, 13, 14].
In the second variant, the convolution in the ConvGRU is replaced by a residual block modified for this purpose. The use of the residual block increases the depth of the operations in the GRU and is thus expected to allow it to better process nonlinear transformations, and also to increase the distance at which pixels can influence each other at each step of the ConvGRU. The latter effect may recover some of the advantages of TrajGRU over ConvGRU that [7] found. The author is unaware of previously published instances of a residual layer being used in place of the convolution in a GRU. In this paper, this variant is called "ResGRU", although the same abbreviation was used for a different combination of GRUs and residual connections in [15].

Figure 1: Illustration of the network architecture.

The models were implemented using TensorFlow/Keras [16] version 2.4. The source code and the pre-trained models can be found through the links in Appendix A.

### Training

Since the scores for the target variables were evaluated independently from each other, a separate instance of the model was trained for each target variable, but using all variables as inputs for each model. The models were trained on the training dataset of R1-R3 such that every available gapless sequence of \(36\) frames was used for training, resulting in \(72192\) different sequences (albeit with considerable overlap). The training was performed with combined data from all regions R1-R3 in order to increase the training dataset size and improve the ability of the model to generalize; specializing the model to single regions was not attempted. The static data (latitude, longitude and elevation) were also used for training. Data augmentation by random rotation in 90° increments, as well as random top-down and left-right mirroring, was used to further increase the effective number of training samples.

The model for _asii_turb_trop_prob_ was trained using a custom logit loss corresponding to the metric specified in [6], while the other variables were trained using the standard MSE loss. The Adam optimizer [17] was used to train the models with a batch size of \(32\). The progress of the training was evaluated using the provided validation dataset for R1-R3. After each training epoch, the evaluation metric was computed on the validation set and then:

1. If the metric improved upon the best evaluation result, the model weights were saved.
2. If the metric had not improved in \(3\) epochs, the learning rate was reduced by a factor of \(5\).
3. If the metric had not improved in \(10\) epochs, the training was stopped early.

In practice, condition 3 was never activated, as the model continued to achieve marginal gains on the validation data at least every few epochs until the maximum training time of \(12\) h or \(24\) h (depending on the training run) was reached. This suggests that the model did not suffer significantly from overfitting, which typically causes the validation loss to start increasing even as the training loss keeps decreasing. This is perhaps due to the relatively modest number of weights in the models by the standards of modern ConvNets: approximately \(12.1\) million weights in the ConvGRU variant and \(18.6\) million in the ResGRU variant. The loss over the validation set was used as the metric for each variable except _cma_, for which a rounded MSE that takes the \(\{0,1\}\) quantization into account was used.
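To make the ResGRU idea concrete, below is a minimal TensorFlow/Keras sketch of such a cell. It follows the standard ConvGRU gate equations, with the gate convolutions swapped for small residual blocks; the class names, kernel sizes and block structure are illustrative assumptions rather than the exact implementation (the actual source code is linked in Appendix A).

```python
import tensorflow as tf
from tensorflow.keras import layers


class ResBlock(layers.Layer):
    """Small residual block: two 3x3 convolutions plus a 1x1 projection shortcut."""

    def __init__(self, channels):
        super().__init__()
        self.conv1 = layers.Conv2D(channels, 3, padding="same", activation="relu")
        self.conv2 = layers.Conv2D(channels, 3, padding="same")
        self.shortcut = layers.Conv2D(channels, 1, padding="same")

    def call(self, x):
        return self.shortcut(x) + self.conv2(self.conv1(x))


class ResGRUCell(layers.Layer):
    """GRU cell whose gate transforms are residual blocks instead of single convolutions."""

    def __init__(self, channels):
        super().__init__()
        self.gate_block = ResBlock(2 * channels)  # computes update gate z and reset gate r
        self.cand_block = ResBlock(channels)      # computes the candidate state

    def call(self, x, h_prev):
        zr = tf.sigmoid(self.gate_block(tf.concat([x, h_prev], axis=-1)))
        z, r = tf.split(zr, 2, axis=-1)
        h_cand = tf.tanh(self.cand_block(tf.concat([x, r * h_prev], axis=-1)))
        return (1.0 - z) * h_prev + z * h_cand    # new hidden state
```

Replacing `ResBlock` with a single convolution recovers the ConvGRU variant. The epoch-level training schedule described above also maps directly onto standard Keras callbacks; a sketch, with the checkpoint file name and monitored quantity as assumptions:

```python
from tensorflow.keras import callbacks

cbs = [
    # 1. Save weights whenever the validation metric improves
    callbacks.ModelCheckpoint("best_weights.h5", monitor="val_loss",
                              save_best_only=True, save_weights_only=True),
    # 2. Reduce the learning rate by a factor of 5 after 3 epochs without improvement
    callbacks.ReduceLROnPlateau(monitor="val_loss", factor=0.2, patience=3),
    # 3. Stop early after 10 epochs without improvement
    callbacks.EarlyStopping(monitor="val_loss", patience=10),
]
# model.fit(train_seq, validation_data=val_seq, callbacks=cbs, ...)
```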
A parallel setup of eight Nvidia Tesla V100 GPUs was used to train the models. Training for one epoch took approximately 20 minutes with this hardware. The eight parallel GPUs only provided a speedup factor of approximately \(3\) compared to training on a single GPU, suggesting that single-GPU training of the models should be feasible, although the batch size would likely have to be reduced, as the models require rather large amounts of GPU memory.

## 4 Results

Both the ConvGRU and ResGRU variants of the model were trained for each target variable. The evaluation results for the validation dataset are shown in Table 1. Comparisons to TrajGRU were found impractical, as the models using TrajGRU would not converge properly due to the training instability mentioned in Sect. 3.1.

Table 1: Evaluation metrics for the validation dataset (best result for each variable in bold).

| | _temperature_ | _crr_intensity_ | _asii_turb_trop_prob_ | _cma_ |
|---|---|---|---|---|
| ConvGRU | 0.004564 | **0.0001259** | 0.002250 | 0.1393 |
| ResGRU | **0.004356** | 0.0001278 | **0.002161** | **0.1376** |

Based on the evaluation results, three submissions were made to the final leaderboards of Weather4cast Stage 1: one using the ConvGRU variant for all variables (codenamed V4c), another using ResGRU (V4rc), and a third using the best model for each variable based on the validation metrics (V4pc). It was indeed this last combination that produced the best results on the leaderboards for both the Core and Transfer Learning competitions, as shown in Table 2.

Figures 2-5 show examples of the predictions using the validation dataset. These are all shown for the same scene except for Fig. 3, where a different scene was chosen because the one used for the others did not contain precipitation. It is clear that the predictions start relatively sharp and get blurrier over time, reflecting the increasing uncertainty. The blurriness is likely exacerbated by the use of the MSE metric, specified in the data challenge, which is prone to regression to the mean. Especially in Fig. 4, one can also see that the model can predict the motion of features in the images.

Figure 2: An example of predictions for the _temperature_ variable. The frames on the left correspond to past temperature, while the frames on the right show the real future temperature (top row) and the predicted temperature (bottom row). The \(T\) coordinate refers to the index of the frame in the sequence, with \(T=0\) representing the last input data point and \(T=1\) the first prediction. The model output normalized to the range \((0,1)\) is shown.

Figure 3: As Fig. 2, but for _crr_intensity_. A different case is shown, as the case of Fig. 2 does not contain precipitation.

Figure 4: As Fig. 2, but for _asii_turb_trop_prob_.

## 5 Conclusions

The model presented here reached the top of the final leaderboards in both the Core and the Transfer Learning categories of the Weather4cast 2021 Challenge Stage 1. It is a versatile solution to the problem of predicting the evolution of atmospheric fields, producing sharp predictions for the near term and increasing the uncertainty for longer lead times. The architecture can be easily adapted to other tasks such as probabilistic predictions or outputs that are different from the inputs.
Further research is needed to handle, for instance, different spatial and temporal resolutions of inputs and data available for future time steps.

## Acknowledgments

This project benefited from parallel development in the fellowship "Seamless Artificially Intelligent Thunderstorm Nowcasts" from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT). The hosting institution of this fellowship is MeteoSwiss in Switzerland. The author thanks U. Hamann and A. Rigazzi for discussions regarding the model and training.

Table 2: Evaluation metrics for the held-out test dataset, as computed by the Weather4cast website (https://www.iarai.ac.at/weather4cast/).

| | Core | Transfer learning |
|---|---|---|
| ConvGRU | 0.5051 | 0.4658 |
| ResGRU | 0.5014 | 0.4626 |
| Best combination | **0.4987** | **0.4607** |

Figure 5: As Fig. 2, but for _cma_. The white contours in the predictions indicate \(0.5\), the threshold of the cloud mask in the output.

## References

* [1] P. Bauer, A. Thorpe, G. Brunet, The quiet revolution of numerical weather prediction, Nature 525 (2015) 47-55. doi:10.1038/nature14956.
* [2] A. McGovern, K. L. Elmore, D. J. Gagne II, S. E. Haupt, C. D. Karstens, R. Lagerquist, T. Smith, J. K. Williams, Using artificial intelligence to improve real-time decision-making for high-impact weather, Bull. Amer. Meteor. Soc. 98 (2017) 2073-2090. doi:10.1175/BAMS-D-16-0123.1.
* [3] M. Reichstein, G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, Prabhat, Deep learning and process understanding for data-driven Earth system science, Nature 566 (2019) 195-204. doi:10.1038/s41586-019-0912-1.
* [4] C. Huntingford, E. S. Jeffers, M. B. Bonsall, H. M. Christensen, T. Lees, H. Yang, Machine learning and artificial intelligence to aid climate change research and preparedness, Environmental Research Letters 14 (2019) 124007. doi:10.1088/1748-9326/ab4e55.
* [5] S. E. Haupt, W. Chapman, S. V. Adams, C. Kirkwood, J. S. Hosking, N. H. Robinson, S. Lerch, A. C. Subramanian, Towards implementing artificial intelligence post-processing in weather and climate: proposed actions from the Oxford 2019 workshop, Philos. Trans. R. Soc. London, Ser. A 379 (2021) 20200091. doi:10.1098/rsta.2020.0091.
* [6] IARAI, Weather4cast 2021: Competition metrics, 2021. URL: https://www.iarai.ac.at/weather4cast/wp-content/uploads/sites/3/2021/04/w4c.pdf.
* [7] X. Shi, Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-k. Wong, W.-c. Woo, Deep learning for precipitation nowcasting: A benchmark and a new model, in: I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, R. Garnett (Eds.), Advances in Neural Information Processing Systems, volume 30, Curran Associates, Inc., 2017. URL: https://proceedings.neurips.cc/paper/2017/file/a6db4ed04f1621a119799fd3d7545d3d-Paper.pdf.
* [8] G. Franch, D. Nerini, M. Pendesini, L. Coviello, G. Jurman, C. Furlanello, Precipitation nowcasting with orographic enhanced stacked generalization: Improving deep learning predictions on extreme events, Atmosphere 11 (2020). doi:10.3390/atmos11030267.
* [9] S. Ravuri, K. Lenc, M. Willson, D. Kangin, R. Lam, P. Mirowski, M. Fitzsimons, M. Athanassiadou, S. Kashem, S. Madge, R. Prudden, A. Mandhane, A. Clark, A. Brock, K. Simonyan, R. Hadsell, N. Robinson, E. Clancy, A. Arribas, S. Mohamed, Skillful precipitation nowcasting using deep generative models of radar, 2021. arXiv:2104.00954.
* [10] K. He, X. Zhang, S. Ren, J. Sun, Deep residual learning for image recognition, in: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016. doi:10.1109/CVPR.2016.90.
* [11] K. Cho, B. van Merrienboer, D. Bahdanau, Y. Bengio, On the properties of neural machine translation: Encoder-decoder approaches, in: Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, 2014, pp. 103-111.
* [12] O. Ronneberger, P. Fischer, T. Brox, U-Net: Convolutional networks for biomedical image segmentation, in: Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015, 2015, pp. 234-241. doi:10.1007/978-3-319-24574-4_28.
* [13] L. Tian, X. Li, Y. Ye, P. Xie, Y. Li, A generative adversarial gated recurrent unit model for precipitation nowcasting, IEEE Geosci. Remote Sens. Lett. 17 (2020) 601-605. doi:10.1109/LGRS.2019.2926776.
* [14] J. Leinonen, D. Nerini, A. Berne, Stochastic super-resolution for downscaling time-evolving atmospheric fields with a generative adversarial network, IEEE Trans. Geosci. Remote Sens. (2020). doi:10.1109/TGRS.2020.3032790.
* [15] W. Gao, R.-J. Wai, A novel fault identification method for photovoltaic array via convolutional neural network and residual gated recurrent unit, IEEE Access 8 (2020) 159493-159510. doi:10.1109/ACCESS.2020.3020296.
* [16] F. Chollet, et al., Keras, https://keras.io, 2015.
* [17] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, in: 3rd International Conference on Learning Representations, San Diego, California, USA, 2015. URL: https://arxiv.org/abs/1412.6980.
* [18] J. Leinonen, Model weights for a Weather4cast 2021 Challenge Stage 1 solution, 2021. doi:10.5281/zenodo.5101213.

## Appendix A Online Resources

The source code, with instructions to replicate the results presented in this paper, can be found at https://github.com/jleinonen/weather4cast-stage1. The model weights used in the challenge submissions can be downloaded at [18].
This paper presents the neural network model that was used by the author in the Weather4cast 2021 Challenge Stage 1, where the objective was to predict the time evolution of satellite-based weather data images. The network is based on an encoder-forecaster architecture making use of gated recurrent units (GRU), residual blocks and a contracting/expanding architecture with shortcuts similar to U-Net. A GRU variant utilizing residual blocks in place of convolutions is also introduced. Example predictions and evaluation metrics for the model are presented. These demonstrate that the model can retain sharp features of the input for the first predictions, while the later predictions become more blurred to reflect the increasing uncertainty.
arxiv/20ea3d07_7db2_402a_b276_44d5ea43c701.md
# PreDiff: Precipitation Nowcasting with Latent Diffusion Models

Zhihan Gao (The Hong Kong University of Science and Technology, [email protected]), Xingjian Shi (Boson AI, [email protected]), Boran Han (AWS, [email protected]), Hao Wang (AWS AI Labs, [email protected]), Xiaoyong Jin (Amazon, [email protected]), Danielle Maddix (AWS AI Labs, [email protected]), Yi Zhu (Boson AI, [email protected]), Mu Li (Boson AI, [email protected]), Yuyang Wang (AWS AI Labs, [email protected])

Work conducted during an internship at Amazon. Work conducted while at Amazon.

## 1 Introduction

Earth's intricate climate system significantly influences daily life. Precipitation nowcasting, tasked with delivering accurate rainfall forecasts for the near future (e.g., 0-6 hours), is vital for decision-making across numerous industries and services. Recent advancements in data-driven deep learning (DL) techniques have demonstrated promising potential in this field, rivaling conventional numerical methods [8; 5] with the advantages of being more skillful [5], efficient [37], and scalable [3]. However, accurately predicting future rainfall remains challenging for data-driven algorithms. State-of-the-art Earth system forecasting algorithms [47; 61; 41; 37; 8; 69; 2; 29; 3] typically generate blurry predictions. This is caused by the high variability and complexity inherent to Earth's climatic system: even minor differences in initial conditions can lead to vastly divergent outcomes that are difficult to predict. Most methods adopt a point estimate of the future rainfall and are trained by minimizing pixel-wise loss functions (e.g., mean-squared error). Such methods lack the capability of capturing multiple plausible futures and generate blurry forecasts that lose important operational details. What are needed instead are probabilistic models that can represent the uncertainty inherent in stochastic systems. Probabilistic models can capture multiple plausible futures, generating diverse high-quality predictions that better align with real-world data.

The emergence of diffusion models (DMs) [22] has enabled powerful probabilistic frameworks for generative modeling. DMs have shown remarkable capabilities in generating high-quality images [40; 45; 43] and videos [15; 23]. As likelihood-based models, DMs do not exhibit mode collapse or training instabilities like GANs [10]. Compared to autoregressive (AR) models [53; 46; 63; 39; 65] that generate images pixel by pixel, DMs can produce higher-resolution images faster and with higher quality. They are also better at handling uncertainty [62; 57; 58; 59; 34] without drawbacks like exposure bias [13] in AR models. Latent diffusion models (LDMs) [52; 42] further improve on DMs by separating the model into two phases, applying the costly diffusion only in a compressed latent space. This alleviates the computational costs of DMs without significantly impairing performance.

Despite DMs' success in image and video generation [56; 42; 15; 66; 32], their application to precipitation nowcasting and Earth system forecasting is in its early stages [16]. One major concern is that this purely data-centric approach lacks constraints and controls from prior knowledge about the dynamic system. Some spatiotemporal forecasting approaches have incorporated domain knowledge by modifying the model architecture or adding extra training losses [11; 1; 37]. This makes them aware of prior knowledge and able to generate physically plausible forecasts.
However, these approaches still face challenges, such as requiring the design of new model architectures or retraining the entire model from scratch when constraints change. More detailed discussions of related works are provided in Appendix A.

Inspired by recent success in controllable generative models [68; 24; 4; 33; 6], we propose a general two-stage pipeline for training data-driven Earth system forecasting models. 1) In the first stage, we focus on capturing the intrinsic semantics in the data by training an LDM. To capture Earth's long-term and complex changes, we instantiate the LDM's core neural network as a UNet-style architecture based on Earthformer [8]. 2) In the second stage, we inject prior knowledge of the Earth system by training a knowledge alignment network that guides the sampling process of the LDM. Specifically, the alignment network parameterizes an energy function that adjusts the transition probabilities during each denoising step. This encourages the generation of physically plausible intermediate latent states while suppressing those likely to violate the given domain knowledge. We summarize our main contributions as follows:

* We introduce a novel LDM-based model, _PreDiff_, for precipitation nowcasting.
* We propose a general two-stage pipeline for training data-driven Earth system forecasting models. Specifically, we develop a _knowledge alignment_ mechanism to guide the sampling process of PreDiff. This mechanism ensures that the generated predictions better align with domain-specific prior knowledge, thereby enhancing the reliability of the forecasts, without requiring any modifications to the trained PreDiff model.
* Our method achieves state-of-the-art performance on the \(N\)-body MNIST [8] dataset and attains state-of-the-art perceptual quality on the SEVIR [55] dataset.

## 2 Method

We follow [47; 48; 55; 1; 8] in formulating precipitation nowcasting as a spatiotemporal forecasting problem. The \(L_{\text{in}}\)-step observation is represented as a spatiotemporal sequence \(y=[y^{j}]_{j=1}^{L_{\text{in}}}\in\mathbb{R}^{L_{\text{in}}\times H\times W\times C}\), where \(H\) and \(W\) denote the spatial resolution, and \(C\) denotes the number of measurements at each space-time coordinate. Probabilistic forecasting aims to model the conditional probability distribution \(p(x|y)\) of the \(L_{\text{out}}\)-step-ahead future \(x=[x^{j}]_{j=1}^{L_{\text{out}}}\in\mathbb{R}^{L_{\text{out}}\times H\times W\times C}\), given the observation \(y\). In what follows, we present the parameterization of \(p(x|y)\) by a controllable LDM.

### Preliminary: Diffusion Models

Diffusion models (DMs) learn the data distribution \(p(x)\) by training a model to reverse a predefined noising process that progressively corrupts the data. Specifically, the noising process is defined as \(q(x_{t}|x_{t-1})=\mathcal{N}(x_{t};\sqrt{\alpha_{t}}x_{t-1},(1-\alpha_{t})I)\), \(1\leq t\leq T\), where \(x_{0}\sim p(x)\) is the true data, and \(x_{T}\sim\mathcal{N}(0,I)\) is random noise. The coefficients \(\alpha_{t}\) follow a fixed schedule over the timesteps \(t\).
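For reference, this noising process admits the standard closed form \(q(x_{t}|x_{0})=\mathcal{N}(\sqrt{\bar{\alpha}_{t}}x_{0},(1-\bar{\alpha}_{t})I)\) with \(\bar{\alpha}_{t}=\prod_{s\leq t}\alpha_{s}\), which the following NumPy sketch implements. The linear beta schedule and its endpoints are common DDPM defaults, assumed here for illustration rather than taken from the paper.

```python
import numpy as np


def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    """Cumulative products alpha_bar_t for a linear beta schedule (assumed defaults)."""
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)  # alpha_t = 1 - beta_t


def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) = N(sqrt(abar_t) * x0, (1 - abar_t) * I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps  # the noise eps is also the regression target used later in (3)
```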
DMs factorize and parameterize the joint distribution over the data \(x_{0}\) and the noisy latents \(x_{1:T}\) as \(p_{\theta}(x_{0:T})=p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})\), where each step of the reverse denoising process is a Gaussian distribution \(p_{\theta}(x_{t-1}|x_{t})=\mathcal{N}(\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t))\), which is trained to recover \(x_{t-1}\) from \(x_{t}\). To apply DMs to spatiotemporal forecasting, \(p(x|y)\) is factorized and parameterized as \(p_{\theta}(x|y)=\int p_{\theta}(x_{0:T}|y)dx_{1:T}=\int p(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t},y)dx_{1:T}\), where \(p_{\theta}(x_{t-1}|x_{t},y)\) represents the conditional denoising transition with the condition \(y\).

### Conditional Diffusion in Latent Space

To improve the computational efficiency of DM training and inference, our _PreDiff_ follows LDM in adopting a two-phase training that leverages the benefits of lower-dimensional latent representations. The two sequential phases of PreDiff training are: 1) training a frame-wise variational autoencoder (VAE) [28] that encodes pixel space into a lower-dimensional latent space, and 2) training a conditional DM that generates predictions in this acquired latent space.

**Frame-wise autoencoder.** We follow [7] to train a frame autoencoder using a combination of a pixel-wise loss (e.g., L2 loss) and an adversarial loss. Different from [7], we exclude the perceptual loss, since there are no standard pretrained models for perception on Earth observation data. Specifically, the encoder \(\mathcal{E}\) is trained to encode a data frame \(x^{j}\in\mathbb{R}^{H\times W\times C}\) to a latent representation \(z^{j}=\mathcal{E}(x^{j})\in\mathbb{R}^{H_{z}\times W_{z}\times C_{z}}\). The decoder \(\mathcal{D}\) learns to reconstruct the data frame \(\widehat{x}^{j}=\mathcal{D}(z^{j})\) from the encoded latent. We denote \(z\sim p_{\mathcal{E}}(z|x)\in\mathbb{R}^{L\times H_{z}\times W_{z}\times C_{z}}\) as equivalent to \(z=[z^{j}]=[\mathcal{E}(x^{j})]\), representing the encoding of a sequence of frames in pixel space into a latent spatiotemporal sequence. Likewise, \(x\sim p_{\mathcal{D}}(x|z)\) denotes decoding a latent spatiotemporal sequence.

Figure 1: **Overview of PreDiff inference with knowledge alignment.** An observation sequence \(y\) is encoded into a latent context \(z_{\text{cond}}\) by the frame-wise encoder \(\mathcal{E}\). The latent diffusion model \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\), which is parameterized by an Earthformer-UNet, then generates the latent future \(z_{0}\) by autoregressively denoising Gaussian noise \(z_{T}\) conditioned on \(z_{\text{cond}}\). It takes the concatenation of the latent context \(z_{\text{cond}}\) (in the blue border) and the previous-step noisy latent future \(z_{t+1}\) (in the cyan border) as input, and outputs \(z_{t}\). The transition distribution of each step from \(z_{t+1}\) to \(z_{t}\) can be further refined as \(p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})\) via knowledge alignment, according to auxiliary prior knowledge. This denoising process iterates from \(t=T\) to \(t=0\), resulting in a denoised latent future \(z_{0}\). Finally, \(z_{0}\) is decoded back to pixel space by the frame-wise decoder \(\mathcal{D}\) to produce the final prediction \(\widehat{x}\). (Best viewed in color.)
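Because the autoencoder is frame-wise, encoding or decoding a spatiotemporal sequence is simply a map over its temporal axis. A minimal sketch, where `encoder` and `decoder` stand in for \(\mathcal{E}\) and \(\mathcal{D}\):

```python
import numpy as np


def encode_sequence(encoder, x):
    """Apply the frame-wise encoder E to each frame: (L, H, W, C) -> (L, Hz, Wz, Cz)."""
    return np.stack([encoder(frame) for frame in x])


def decode_sequence(decoder, z):
    """Apply the frame-wise decoder D to each latent frame: (L, Hz, Wz, Cz) -> (L, H, W, C)."""
    return np.stack([decoder(frame) for frame in z])
```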
**Latent diffusion.** The context \(y\) is encoded by the frame-wise encoder \(\mathcal{E}\) into the learned latent space as \(z_{\text{cond}}\in\mathbb{R}^{L_{\text{in}}\times H_{z}\times W_{z}\times C_{z}}\), as in (1). The conditional distribution \(p_{\theta}(z_{0:T}|z_{\text{cond}})\) of the latent future \(z_{t}\in\mathbb{R}^{L_{\text{out}}\times H_{z}\times W_{z}\times C_{z}}\) given \(z_{\text{cond}}\) is factorized and parameterized as in (2):

\[z_{\text{cond}}\sim p_{\mathcal{E}}(z_{\text{cond}}|y), \tag{1}\]
\[p_{\theta}(z_{0:T}|z_{\text{cond}})=p(z_{T})\prod_{t=1}^{T}p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}}), \tag{2}\]

where \(z_{T}\sim p(z_{T})=\mathcal{N}(0,I)\). As proposed by [22; 45], an equivalent parameterization is to have the DM learn to match the transition noise \(\epsilon_{\theta}(z_{t},t)\) of step \(t\) instead of directly predicting \(z_{t-1}\). The training objective of PreDiff is then simplified as shown in (3):

\[L_{\text{CLDM}}=\mathbb{E}_{(x,y),t,\epsilon\sim\mathcal{N}(0,I)}\|\epsilon-\epsilon_{\theta}(z_{t},t,z_{\text{cond}})\|_{2}^{2}, \tag{3}\]

where \((x,y)\) is a sampled context-target sequence pair and, given that, \(z_{t}\sim q(z_{t}|z_{0})p_{\mathcal{E}}(z_{0}|x)\) and \(z_{\text{cond}}\sim p_{\mathcal{E}}(z_{\text{cond}}|y)\).

**Instantiating \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\).** Compared to images, modeling the spatiotemporal observation data in precipitation nowcasting poses greater challenges due to their higher dimensionality. We propose replacing the UNet backbone in LDM [42] with _Earthformer-UNet_, derived from Earthformer's encoder [8], which is known for its ability to model intricate and extensive spatiotemporal dependencies in the Earth system. Earthformer-UNet adopts a hierarchical UNet architecture with self cuboid attention [8] as the building blocks, excluding the bridging cross-attention in the encoder-decoder architecture of Earthformer. More details of the architecture design of Earthformer-UNet are provided in Appendix B.1. We find Earthformer-UNet to be more stable and effective at modeling the transition distribution \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\). It takes the concatenation of the encoded latent context \(z_{\text{cond}}\) and the noisy latent future \(z_{t}\) along the temporal dimension as input, and predicts the one-step-ahead noisy latent future \(z_{t-1}\) (in practice, the transition noise \(\epsilon\) from \(z_{t}\) to \(z_{t-1}\) is predicted, as shown in (3)).

### Incorporating Knowledge Alignment

Though DMs hold great promise for diverse and realistic generation, the generated predictions may violate physical constraints or disregard domain-specific prior knowledge, and thereby fail to give plausible and non-trivial results [14; 44]. One possible reason is that DMs are not necessarily trained on data fully compliant with domain knowledge; when trained on such data, there is no guarantee that the generations sampled from the learned distribution will remain physically realizable. The causes may also stem from the stochastic nature of chaotic systems, the approximation error in denoising steps, etc. To address this issue, we propose _knowledge alignment_ to incorporate auxiliary prior knowledge of the form

\[\mathcal{F}(\widehat{x},y)=\mathcal{F}_{0}(y)\in\mathbb{R}^{d} \tag{4}\]

into the diffusion generation process.
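A sketch of one training step for the objective (3), reusing `q_sample` and `encode_sequence` from the earlier sketches; here `eps_theta` stands in for the Earthformer-UNet noise predictor, and the concatenation of context and noisy future along the temporal axis mirrors the description above (all names are illustrative assumptions):

```python
import numpy as np


def cldm_loss(eps_theta, encoder, x, y, alpha_bar, rng):
    """Monte-Carlo estimate of L_CLDM in (3) for one (context y, target x) pair."""
    z0 = encode_sequence(encoder, x)           # latent future, z_0 ~ p_E(z_0 | x)
    z_cond = encode_sequence(encoder, y)       # latent context, z_cond ~ p_E(z_cond | y)
    t = int(rng.integers(len(alpha_bar)))      # uniformly sampled noising step
    zt, eps = q_sample(z0, t, alpha_bar, rng)  # forward-noise the latent future
    # The backbone sees [z_cond; z_t] concatenated along the temporal axis
    eps_hat = eps_theta(np.concatenate([z_cond, zt], axis=0), t)
    return np.mean((eps - eps_hat) ** 2)
```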
The knowledge alignment imposes a constraint \(\mathcal{F}\) on the forecast \(\widehat{x}\), optionally with the observation \(y\), based on domain expertise. E.g., for an isolated physical system, the knowledge \(E(\widehat{x},\cdot)=E_{0}(y^{L_{\text{in}}})\in\mathbb{R}\) imposes the conservation of energy by enforcing the generation \(\widehat{x}\) to keep the total energy \(E(\widehat{x},\cdot)\) the same as that of the last observation, \(E_{0}(y^{L_{\text{in}}})\). The violation \(\|\mathcal{F}(\widehat{x},y)-\mathcal{F}_{0}(y)\|\) quantifies the deviation of a prediction \(\widehat{x}\) from the prior knowledge; a larger violation indicates that \(\widehat{x}\) diverges further from the constraints. Knowledge alignment hence aims to suppress the probability of generating predictions with large violations. Notice that even the target futures \(x\) from the training data may violate the knowledge, i.e., \(\mathcal{F}(x,y)\neq\mathcal{F}_{0}(y)\), due to noise in data collection or simulation.

Inspired by classifier guidance [4], we achieve knowledge alignment by training a knowledge alignment network \(U_{\phi}(z_{t},t,y)\) to estimate \(\mathcal{F}(\widehat{x},y)\) from the intermediate latent \(z_{t}\) at noising step \(t\). The key idea is to adjust the transition probability distribution \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\) in (2) during each latent denoising step to reduce the likelihood of sampling \(z_{t}\) values expected to violate the constraints:

\[p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})\propto p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\cdot e^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}, \tag{5}\]

where \(\lambda_{\mathcal{F}}\) is a guidance scale factor. The knowledge alignment network is trained by optimizing the objective \(L_{U}\) in Alg. 1. According to [4], (5) can be approximated by shifting the predicted mean of the denoising transition \(\mu_{\theta}(z_{t+1},t,z_{\text{cond}})\) by \(-\lambda_{\mathcal{F}}\Sigma_{\theta}\nabla_{z_{t}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|\), where \(\Sigma_{\theta}\) is the variance of the original transition distribution \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})=\mathcal{N}(\mu_{\theta}(z_{t+1},t,z_{\text{cond}}),\Sigma_{\theta}(z_{t+1},t,z_{\text{cond}}))\). A detailed derivation is provided in Appendix C.

The training procedure for knowledge alignment is outlined in Alg. 1. The noisy latent \(z_{t}\) for training the knowledge alignment network \(U_{\phi}\) is sampled by encoding the target \(x\) using the frame-wise encoder \(\mathcal{E}\) and the forward noising process \(q(z_{t}|z_{0})\), eliminating the need for an inference sampling process. This makes the training of the knowledge alignment network \(U_{\phi}\) independent of the LDM training. At inference time, the knowledge alignment mechanism is applied as a plug-in, without impacting the trained VAE and LDM. This modular approach allows training lightweight knowledge alignment networks \(U_{\phi}\) to flexibly explore various constraints and domain knowledge, without the need for retraining the entire model. This stands as a key advantage over incorporating constraints into model architectures or training losses.
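The mean-shift approximation of (5) amounts to a small modification of the standard reverse-diffusion loop. In this sketch, `mu_theta` and `var_theta` stand in for the learned transition parameters, and `grad_violation(z, t)` is assumed to return \(\nabla_{z_{t}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|\) (in practice obtained via automatic differentiation through \(U_{\phi}\)); all names here are illustrative assumptions.

```python
import numpy as np


def knowledge_aligned_sample(mu_theta, var_theta, grad_violation, z_T, z_cond,
                             T, lam, rng):
    """Reverse diffusion with classifier-guidance-style knowledge alignment."""
    z = z_T
    for t in reversed(range(T)):
        mu = mu_theta(z, t, z_cond)    # mean of p_theta(z_t | z_{t+1}, z_cond)
        var = var_theta(z, t, z_cond)  # its (diagonal) variance, Sigma_theta
        # Shift the mean to suppress samples expected to violate the constraint
        mu = mu - lam * var * grad_violation(z, t)
        z = mu + np.sqrt(var) * rng.standard_normal(mu.shape) if t > 0 else mu
    return z  # denoised latent future z_0, to be decoded by the frame-wise decoder D
```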
## 3 Experiments

We conduct empirical studies and compare PreDiff with other state-of-the-art spatiotemporal forecasting models on a synthetic dataset, \(N\)-body MNIST [8], and a real-world precipitation nowcasting benchmark, SEVIR [55] (available at https://sevir.mit.edu/), to verify the effectiveness of PreDiff in handling the dynamics and uncertainty in complex spatiotemporal systems and generating high-quality, accurate forecasts. We impose data-specific knowledge alignment: **energy conservation** on \(N\)-body MNIST and **anticipated precipitation intensity** on SEVIR. Experiments demonstrate that PreDiff under the guidance of knowledge alignment (PreDiff-KA) is able to generate predictions that comply with domain expertise much better, without severely sacrificing fidelity.

### \(N\)-body MNIST Digits Motion Forecasting

**Dataset.** The Earth is a chaotic system with complex dynamics. Real-world Earth observation data, such as radar echo maps and satellite imagery, are usually not physically complete. We are unable to directly verify whether certain domain knowledge, like the conservation laws of energy and momentum, is satisfied or not. This makes it difficult to verify whether a method is really capable of modeling certain dynamics and adhering to the corresponding constraints. To address this, we follow [8] to generate a synthetic dataset named \(N\)-body MNIST, which is an extension of MovingMNIST [50]. The dataset contains sequences of digits moving subject to the gravitational force from other digits. The governing equation for the motion is

\[\frac{d^{2}\mathbf{x}_{i}}{dt^{2}}=-\sum_{j\neq i}\frac{Gm_{j}(\mathbf{x}_{i}-\mathbf{x}_{j})}{(|\mathbf{x}_{i}-\mathbf{x}_{j}|+d_{\text{soft}})^{r}},\]

where \(\mathbf{x}_{i}\) is the spatial coordinate of the \(i\)-th digit, \(G\) is the gravitational constant, \(m_{j}\) is the mass of the \(j\)-th digit, \(r\) is a constant representing the power scale in the gravitational law, and \(d_{\text{soft}}\) is a small softening distance that ensures numerical stability. The motion occurs within a \(64\times 64\) frame. When a digit hits the boundaries of the frame, it bounces back by elastic collision. We use \(N=3\) for chaotic \(3\)-body motion [35]. The forecasting task is to predict the \(10\)-step-ahead future frames \(x\in\mathbb{R}^{10\times 64\times 64\times 1}\) given the length-\(10\) context \(y\in\mathbb{R}^{10\times 64\times 64\times 1}\). We generate 20,000 sequences for training and 1,000 sequences for testing. Empirical studies on such a synthetic dataset with known dynamics help provide useful insights for model development and evaluation.

**Evaluation.** In addition to the standard metrics MSE, MAE and SSIM, we also report the Fréchet Video Distance (FVD) [51], a metric for evaluating the visual quality of generated videos. Similar to the Fréchet Inception Distance (FID) [20] for evaluating image generation, FVD estimates the distance between the learned distribution and the true data distribution by comparing the statistics of feature vectors extracted from the generations and the real data. The inception network used in FVD for feature extraction is pre-trained on video classification and is not specifically adapted for processing "unnatural videos" such as spatiotemporal observation data in Earth systems.
Consequently, the FVD scores on the \(N\)-body MNIST and SEVIR datasets cannot be directly compared with those on natural video datasets. Nevertheless, the relative ranking of the FVD scores remains a meaningful indicator of a model's ability to achieve high visual quality, as FVD has shown consistency with expert evaluations across various domains beyond natural images [38; 26]. Scores for all involved metrics are calculated using an ensemble of eight samples from each model.

#### 3.1.1 Comparison with the State of the Art

We evaluate seven deterministic spatiotemporal forecasting models: **UNet** [55], **ConvLSTM** [47], **PredRNN** [61], **PhyDNet** [11], **E3D-LSTM** [60], **Rainformer** [1] and **Earthformer** [8], as well as two probabilistic spatiotemporal forecasting models: **VideoGPT** [65] and **LDM** [42]. All baselines are trained following the default configurations in their officially released code. More implementation details of the baselines are provided in Appendix B.2. The results in Table 1 show that PreDiff outperforms these baselines by a large margin in both conventional video prediction metrics (i.e., MSE, MAE, SSIM) and a perceptual quality metric, FVD. The example predictions in Fig. 2 demonstrate that PreDiff generates predictions with sharp and clear digits in accurate positions. In contrast, the deterministic baselines resort to generating blurry predictions to accommodate uncertainty. The probabilistic baselines, though producing sharp strokes, either predict _incorrect_ positions or _fail to reconstruct_ the digits. The performance gap between LDM [42] and PreDiff serves as an ablation study that highlights the importance of the latent backbone's spatiotemporal modeling capacity: specifically, the Earthformer-UNet utilized in PreDiff demonstrates superior performance compared to the UNet in LDM [42].

Figure 2: A set of example predictions on the \(N\)-body MNIST test set. From top to bottom: context sequence \(y\), target sequence \(x\), predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff with knowledge alignment (PreDiff-KA). E.MSE denotes the average error between the total energy (kinetic \(+\) potential) of the predictions \(E(\widehat{x}^{j})\) and the total energy of the last context frame \(E(y^{L_{\text{in}}})\). The red dashed line is to help the reader judge the position of the digit "2" in the last frame.

#### 3.1.2 Knowledge Alignment: Energy Conservation

In the \(N\)-body MNIST simulation, digits move based on Newton's law of gravity and interact with the boundaries through elastic collisions. Consequently, this system obeys the law of conservation of energy. The total energy of the whole system \(E(x^{j})\) at any future time step \(j\) during evolution should equal the total energy at the last observation time step, \(E(y^{L_{\text{in}}})\). We impose the law of conservation of energy for the knowledge alignment on \(N\)-body MNIST in the form of (4):

\[\mathcal{F}(\widehat{x},y)\equiv[E(\widehat{x}^{1}),\dots,E(\widehat{x}^{L_{\text{out}}})]^{T}, \tag{6}\]
\[\mathcal{F}_{0}(y)\equiv[E(y^{L_{\text{in}}}),\dots,E(y^{L_{\text{in}}})]^{T}. \tag{7}\]

The ground-truth values of the total energy \(E(y^{L_{\text{in}}})\) and \(E(x^{j})\) are directly accessible, since \(N\)-body MNIST is a synthetic dataset from simulation. The total energy can be derived from the velocities (kinetic energy) and positions (potential energy) of the moving digits.
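As a concrete reference for how such energies can be computed from the simulation state, the sketch below sums the kinetic term and the pairwise softened-gravity potential. The constants `G` and `d_soft` are placeholders for the generator's actual settings, and the potential assumes the \(r=2\) case of the force law above.

```python
import numpy as np


def total_energy(pos, vel, masses, G=1.0, d_soft=1e-2):
    """Kinetic + pairwise potential energy of the softened N-body system.

    pos, vel: (N, 2) arrays of digit positions and velocities; masses: (N,).
    G and d_soft are assumed values, not the dataset's actual constants.
    """
    kinetic = 0.5 * np.sum(masses * np.sum(vel**2, axis=1))
    potential = 0.0
    n = len(masses)
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(pos[i] - pos[j])
            # Potential consistent with a force G*m_i*m_j / (dist + d_soft)**2
            potential -= G * masses[i] * masses[j] / (dist + d_soft)
    return kinetic + potential
```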
A knowledge alignment network \(U_{\phi}\) is trained following Alg. 1 to guide PreDiff to generate forecasts \(\widehat{x}\) that conserve the same energy as the initial step, \(E(y^{L_{\text{in}}})\). To verify the effectiveness of the knowledge alignment in guiding the generations to comply with the law of conservation of energy, we train an energy detector \(E_{\text{det}}(\widehat{x})\) that detects the total energy of the forecasts \(\widehat{x}\). We evaluate the energy error between the forecasts and the initial energy using \(\text{E.MSE}(\widehat{x},y)\equiv\text{MSE}(E_{\text{det}}(\widehat{x}),E(y^{L_{\text{in}}}))\) and \(\text{E.MAE}(\widehat{x},y)\equiv\text{MAE}(E_{\text{det}}(\widehat{x}),E(y^{L_{\text{in}}}))\). In this evaluation, we exclude the methods that generate blurred predictions with ambiguous digit positions, and focus only on the methods that are capable of producing clear digits in precise positions.

Footnote: The test MSE of the energy detector is \(5.56\times 10^{-5}\), which is much smaller than the E.MSE scores shown in Table 1. This indicates that the energy detector has high precision and reliability for verifying energy conservation in the model forecasts.

Table 1: Performance comparison on \(N\)-body MNIST. We report conventional frame quality metrics (MSE, MAE, SSIM), along with the Fréchet Video Distance (FVD) [51] for assessing visual quality. Energy conservation is evaluated via E.MSE and E.MAE between the energy of the predictions \(E_{\text{det}}(\widehat{x})\) and the initial energy \(E(y^{L_{\text{in}}})\). Lower values on the energy metrics indicate better compliance with the conservation of energy.

| Model | #Param. (M) | MSE ↓ | MAE ↓ | SSIM ↑ | FVD ↓ | E.MSE ↓ | E.MAE ↓ |
|---|---|---|---|---|---|---|---|
| Target | - | 0.000 | 0.000 | 1.0000 | 0.000 | 0.0132 | 0.0697 |
| Persistence | - | 104.9 | 139.0 | 0.7270 | 168.3 | - | - |
| UNet [55] | 16.6 | 38.90 | 94.29 | 0.8260 | 142.3 | - | - |
| ConvLSTM [47] | 14.0 | 32.15 | 72.64 | 0.8886 | 86.31 | - | - |
| PredRNN [61] | 23.8 | 21.76 | 54.32 | 0.9288 | 20.65 | - | - |
| PhyDNet [11] | 3.1 | 28.97 | 78.66 | 0.8206 | 178.0 | - | - |
| E3D-LSTM [60] | 12.9 | 22.98 | 62.52 | 0.9131 | 22.28 | - | - |
| Rainformer [1] | 19.2 | 38.89 | 96.47 | 0.8036 | 163.5 | - | - |
| Earthformer [8] | 7.6 | 14.82 | 39.93 | 0.9538 | 6.798 | - | - |
| VideoGPT [65] | 92.2 | 53.68 | 77.42 | 0.8468 | 39.28 | 0.0228 | 0.1092 |
| LDM [42] | 410.3 | 46.29 | 72.19 | 0.8773 | 3.432 | 0.0243 | 0.1172 |
| PreDiff | 120.7 | **9.492** | **25.01** | **0.9716** | **0.9871** | 0.0226 | 0.1083 |
| PreDiff-KA | 129.4 | 21.90 | 43.57 | 0.9303 | 4.063 | **0.0039** | **0.0443** |

As illustrated in Table 1, PreDiff-KA substantially outperforms all baseline methods and PreDiff without knowledge alignment in E.MSE and E.MAE. This demonstrates that the forecasts of PreDiff-KA comply much better with the law of conservation of energy, while still maintaining high visual quality, with an FVD score of \(4.063\). Furthermore, we detect the energy errors of the target data sequences themselves. The first row of Table 1 indicates that even the targets from the training data may not strictly adhere to the prior knowledge, possibly due to discretization errors in the simulation. Table 1 shows that all baseline methods and PreDiff have larger energy errors than the target, meaning that purely data-oriented approaches cannot eliminate the impact of noise in the training data. In contrast, PreDiff-KA, guided by the law of conservation of energy, overcomes these intrinsic defects in the training data, achieving even lower energy errors than the target.

A typical example, shown in Fig. 2, demonstrates that while PreDiff precisely reproduces the ground-truth position of digit "2" in the last frame (aligned to the red dashed line), resulting in nearly the same energy error (\(\text{E.MSE}=0.0277\)) as the ground truth's (\(\text{E.MSE}=0.0261\)), PreDiff-KA successfully corrects the motion of digit "2", providing it with a physically plausible velocity and position (slightly off the red dashed line). The knowledge alignment ensures that the generation complies better with the law of conservation of energy, resulting in a much lower \(\text{E.MSE}=0.0086\). By contrast, none of the evaluated baselines can overcome the intrinsic noise in the data, resulting in energy errors comparable to or larger than that of the ground truth. Notice that the pixel-wise scores MSE, MAE and SSIM are less meaningful for evaluating PreDiff-KA, since correcting the energy noise changes the velocities and positions of the digits; a minor change in the position of a digit can cause a large pixel-wise error, even though the digit is still generated sharply and in high quality, as shown in Fig. 2.

### SEVIR Precipitation Nowcasting

**Dataset.** The Storm EVent ImageRy (SEVIR) dataset [55] is a spatiotemporal Earth observation dataset consisting of \(384\) km \(\times 384\) km image sequences spanning over 4 hours. Images in SEVIR are sampled and aligned across five different data types: three channels (C02, C09, C13) from the GOES-16 advanced baseline imager, NEXRAD Vertically Integrated Liquid (VIL) mosaics, and GOES-16 Geostationary Lightning Mapper (GLM) flashes. The SEVIR benchmark supports scientific research on multiple meteorological applications, including precipitation nowcasting, synthetic radar generation, front detection, etc. Due to computational resource limitations, we adopt a downsampled version of SEVIR for benchmarking precipitation nowcasting. The task is to predict the future VIL up to 60 minutes ahead (6 frames) given 70 minutes of context VIL (7 frames) at a spatial resolution of \(128\times 128\), i.e.,
\(x\in\mathbb{R}^{6\times 128\times 128\times 1}\), \(y\in\mathbb{R}^{7\times 128\times 128\times 1}\).

**Evaluation.** Following [55; 8], we adopt the Critical Success Index (CSI) for evaluation, which is commonly used in precipitation nowcasting and is defined as \(\text{CSI}=\frac{\#\text{Hits}}{\#\text{Hits}+\#\text{Misses}+\#\text{F.Alarms}}\). To count the \(\#\text{Hits}\) (truth=1, pred=1), \(\#\text{Misses}\) (truth=1, pred=0) and \(\#\text{F.Alarms}\) (truth=0, pred=1), the prediction and the ground truth are rescaled to the range \(0-255\) and binarized at the thresholds \([16,74,133,160,181,219]\). We also follow [41] to report the CSI at pooling scales \(4\times 4\) and \(16\times 16\), which evaluate the performance on neighborhood aggregations at multiple spatial scales. These pooled CSI metrics assess the models' ability to capture local pattern distributions. Additionally, we incorporate FVD [51] and the continuous ranked probability score (CRPS) [9] for assessing the visual quality and uncertainty modeling capabilities of the investigated methods. CRPS measures the discrepancy between the predicted distribution and the true distribution. When the predicted distribution collapses into a single value, as in deterministic models, CRPS reduces to the Mean Absolute Error (MAE). A lower CRPS value indicates higher forecast accuracy. Scores for all involved metrics are calculated using an ensemble of eight samples from each model.

#### 3.2.1 Comparison to the State of the Art

We adjust the configurations of the involved baselines accordingly and tune some of the hyperparameters for adaptation to the SEVIR dataset. More implementation details of the baselines are provided in Appendix B.2. The experiment results listed in Table 2 show that probabilistic spatiotemporal forecasting methods are not good at achieving high CSI scores. However, they are more powerful at capturing the patterns and the true distribution of the data, hence achieving much better FVD scores and CSI-pool16. The qualitative results shown in Fig. 3 demonstrate that CSI is not well aligned with human perceptual judgement. For such a complex system, deterministic methods give up capturing the real patterns and resort to averaging the possible futures, i.e., blurry predictions, to keep the scores from appearing too inaccurate. Probabilistic approaches, of which PreDiff is the best, though not favored by per-pixel metrics, perform better at capturing the data distribution within a local area, resulting in a higher CSI-pool16 and a lower CRPS, and succeed in keeping the correct local patterns, which can be crucial for recognizing weather events. More detailed quantitative results on SEVIR are provided in Appendix D.

Figure 3: A set of example forecasts from the baselines and PreDiff on the SEVIR test set. From top to bottom: context sequence \(y\), target sequence \(x\), forecasts from ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], and PreDiff.

#### 3.2.2 Knowledge Alignment: Anticipated Average Intensity

Earth system observation data, such as the Vertically Integrated Liquid (VIL) data in SEVIR, are usually not physically complete, posing challenges for directly incorporating physical laws for guidance. However, with the highly flexible knowledge alignment mechanism, we can still utilize auxiliary prior knowledge to guide the forecasting effectively.
Specifically, for precipitation nowcasting on SEVIR, we use the anticipated precipitation intensity to align the generations to simulate possible extreme weather events. We denote the average intensity of a data sequence as \(I(x)\in\mathbb{R}^{+}\). In order to estimate the conditional quantiles of the future intensity, we train a simple probabilistic time series forecasting model with a parametric (Gaussian) distribution \(p_{\tau}(I(x)|[I(y^{j})])=\mathcal{N}(\mu_{\tau}([I(y^{j})]),\sigma_{\tau}([I(y^{j})]))\) that predicts the distribution of the average future intensity \(I(x)\) given the average intensity of each context frame \([I(y^{j})]_{j=1}^{L_{\text{in}}}\) (abbreviated as \([I(y^{j})]\)). By incorporating \(\mathcal{F}(\widehat{x},y)\equiv I(\widehat{x})\) and \(\mathcal{F}_{0}(y)\equiv\mu_{\tau}+n\sigma_{\tau}\) for knowledge alignment, PreDiff-KA gains the capability of generating forecasts for potential extreme cases, e.g., where \(I(\widehat{x})\) falls outside the typical range of \(\mu_{\tau}\pm\sigma_{\tau}\).

Figure 4: A set of example forecasts from PreDiff-KA, i.e., PreDiff under the guidance of anticipated average intensity. From top to bottom: context sequence \(y\), target sequence \(x\), forecasts from PreDiff, and forecasts from PreDiff-KA showcasing different levels of anticipated future intensity (\(\mu_{\tau}+n\sigma_{\tau}\)), where \(n\) takes the values of \(4,2,-2,-4\).

Fig. 4 shows a set of generations from PreDiff and PreDiff-KA with anticipated future intensity \(\mu_{\tau}+n\sigma_{\tau}\), \(n\in\{-4,-2,2,4\}\). This qualitative example demonstrates that PreDiff is not only capable of capturing the distribution of the future, but is also flexible at highlighting possible extreme cases like rainstorms and droughts via the knowledge alignment mechanism, which is crucial for decision-making and precaution. According to Table 2, the FVD score of PreDiff-KA (\(34.18\)) is only slightly worse than the FVD score of PreDiff (\(33.05\)). This indicates that knowledge alignment effectively aligns the generations with the prior knowledge while maintaining fidelity and adherence to the true data distribution.

## 4 Conclusions and Broader Impacts

In this paper, we propose PreDiff, a novel latent diffusion model for precipitation nowcasting. We also introduce a general two-stage pipeline for training DL models for Earth system forecasting. Specifically, we develop a knowledge alignment mechanism that is capable of guiding PreDiff to generate forecasts in compliance with domain-specific prior knowledge. Experiments demonstrate that our method achieves state-of-the-art performance on the \(N\)-body MNIST and SEVIR datasets.

Our work has certain limitations: 1) Benchmark datasets and evaluation metrics for precipitation nowcasting and Earth system forecasting are still maturing compared to those in the computer vision domain. While we utilize conventional precipitation forecasting metrics and visual quality evaluation, aligning these assessments with expert judgement remains an open challenge. 2) Effective integration of physical principles and domain knowledge into DL models for precipitation nowcasting remains an active research area. Close collaboration between DL researchers and domain experts in meteorology and climatology will be key to developing hybrid models that effectively leverage both data-driven learning and scientific theory.
3) While Earth system observation data have grown substantially in recent years, high-quality data remain scarce in many domains. This scarcity can limit PreDiff's ability to accurately capture the true distribution, occasionally resulting in unrealistic forecast hallucinations under the guidance of prior knowledge, as it attempts to circumvent the knowledge alignment mechanism. Further research on enhancing the sample efficiency of PreDiff and the knowledge alignment mechanism is needed.

In conclusion, PreDiff represents a promising advance in knowledge-aligned DL for Earth system forecasting, but work remains to improve benchmarking, incorporate scientific knowledge, and boost model robustness through collaborative research between AI and domain experts.

Table 2: Performance comparison on SEVIR. The Critical Success Index (CSI), also known as the intersection over union (IoU), is calculated at different precipitation thresholds and denoted as CSI-_thresh_. CSI reports the mean of CSI-[16,74,133,160,181,219]. CSI-pool\(s\) with \(s=4\) and \(s=16\) reports the CSI at pooling scales of \(4\times 4\) and \(16\times 16\). Besides, we include the continuous ranked probability score (CRPS) for probabilistic forecast assessment, and the Fréchet Video Distance (FVD) for evaluating visual quality.

| Model | #Param. (M) | FVD ↓ | CRPS ↓ | CSI ↑ | CSI-pool4 ↑ | CSI-pool16 ↑ |
|---|---|---|---|---|---|---|
| Persistence | - | 525.2 | 0.0526 | 0.2613 | 0.3702 | 0.4690 |
| UNet [55] | 16.6 | 753.6 | 0.0353 | 0.3593 | 0.4098 | 0.4805 |
| ConvLSTM [47] | 14.0 | 659.7 | 0.0332 | 0.4185 | 0.4452 | 0.5135 |
| PredRNN [61] | 46.6 | 663.5 | 0.0306 | 0.4080 | 0.4497 | 0.5005 |
| PhyDNet [11] | 13.7 | 723.2 | 0.0319 | 0.3940 | 0.4379 | 0.4854 |
| E3D-LSTM [60] | 35.6 | 600.1 | 0.0297 | 0.4038 | 0.4492 | 0.4961 |
| Rainformer [1] | 184.0 | 760.5 | 0.0357 | 0.3661 | 0.4232 | 0.4738 |
| Earthformer [8] | 15.1 | 690.7 | 0.0304 | **0.4419** | 0.4562 | 0.5005 |
| DGMR [41] | 71.5 | 485.2 | 0.0435 | 0.2675 | 0.3431 | 0.4832 |
| VideoGPT [65] | 99.6 | 261.6 | 0.0381 | 0.3653 | 0.4349 | 0.5798 |
| LDM [42] | 438.6 | 133.0 | 0.0280 | 0.3580 | 0.4022 | 0.5522 |
| PreDiff | 220.5 | **33.05** | **0.0246** | 0.4100 | **0.4624** | **0.6244** |
| PreDiff-KA (\(\in[-2\sigma_{\tau},2\sigma_{\tau}]\)) | 229.4 | 34.18 | - | - | - | - |

## References

* [1] Cong Bai, Feng Sun, Jinglin Zhang, Yi Song, and Shengyong Chen. Rainformer: Features extraction balanced network for radar-based precipitation nowcasting. _IEEE Geoscience and Remote Sensing Letters_, 19:1-5, 2022.
* [2] Kaifeng Bi, Lingxi Xie, Hengheng Zhang, Xin Chen, Xiaotao Gu, and Qi Tian. Accurate medium-range global weather forecasting with 3D neural networks. _Nature_, pages 1-6, 2023.
* [3] Kang Chen, Tao Han, Junchao Gong, Lei Bai, Fenghua Ling, Jing-Jia Luo, Xi Chen, Leiming Ma, Tianning Zhang, Rui Su, et al. FengWu: Pushing the skillful global medium-range weather forecast beyond 10 days lead. _arXiv preprint arXiv:2304.02948_, 2023.
* [4] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat GANs on image synthesis.
_Advances in Neural Information Processing Systems_, 34:8780-8794, 2021.
* [5] Lasse Espeholt, Shreya Agrawal, Casper Sonderby, Manoj Kumar, Jonathan Heek, Carla Bromberg, Cenk Gazen, Jason Hickey, Aaron Bell, and Nal Kalchbrenner. Skillful twelve hour precipitation forecasts using large context neural networks. _arXiv preprint arXiv:2111.07470_, 2021.
* [6] Patrick Esser, Johnathan Chiu, Parmida Atighehchian, Jonathan Granskog, and Anastasis Germanidis. Structure and content-guided video synthesis with diffusion models. _arXiv preprint arXiv:2302.03011_, 2023.
* [7] Patrick Esser, Robin Rombach, and Bjorn Ommer. Taming transformers for high-resolution image synthesis. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 12873-12883, 2021.
* [8] Zhihan Gao, Xingjian Shi, Hao Wang, Yi Zhu, Yuyang Wang, Mu Li, and Dit-Yan Yeung. Earthformer: Exploring space-time transformers for earth system forecasting. In _NeurIPS_, 2022.
* [9] Tilmann Gneiting and Adrian E Raftery. Strictly proper scoring rules, prediction, and estimation. _Journal of the American Statistical Association_, 102(477):359-378, 2007.
* [10] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. _Advances in Neural Information Processing Systems_, 27, 2014.
* [11] Vincent Le Guen and Nicolas Thome. Disentangling physical dynamics from unknown factors for unsupervised video prediction. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 11474-11484, 2020.
* [12] John Guibas, Morteza Mardani, Zongyi Li, Andrew Tao, Anima Anandkumar, and Bryan Catanzaro. Adaptive Fourier neural operators: Efficient token mixers for transformers. _arXiv preprint arXiv:2111.13587_, 2021.
* [13] Shantanu Gupta, Hao Wang, Zachary Lipton, and Yuyang Wang. Correcting exposure bias for link recommendation. In _ICML_, 2021.
* [14] Derek Hansen, Danielle C. Maddix, Shima Alizadeh, Gaurav Gupta, and Michael W. Mahoney. Learning physical models that can respect conservation laws. In _Proceedings of the \(40^{th}\) International Conference on Machine Learning_, volume 202, 2023.
* [15] William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach, and Frank Wood. Flexible diffusion modeling of long videos. _arXiv preprint arXiv:2205.11495_, 2022.
* [16] Yusuke Hatanaka, Yannik Glaser, Geoff Galgon, Giuseppe Torri, and Peter Sadowski. Diffusion models for high-resolution solar forecasts. _arXiv preprint arXiv:2302.00170_, 2023.
* [17] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 770-778, 2016.
* [18] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). _arXiv preprint arXiv:1606.08415_, 2016.
* [19] Hans Hersbach, Bill Bell, Paul Berrisford, Shoji Hirahara, Andras Horanyi, Joaquin Munoz-Sabater, Julien Nicolas, Carole Peubey, Raluca Radu, Dinand Schepers, et al. The ERA5 global reanalysis. _Quarterly Journal of the Royal Meteorological Society_, 146(730):1999-2049, 2020.
* [20] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. _Advances in Neural Information Processing Systems_, 30, 2017.
* [21] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. Improving neural networks by preventing co-adaptation of feature detectors. _arXiv preprint arXiv:1207.0580_, 2012.
* [22] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_, 33:6840-6851, 2020.
* [23] Jonathan Ho, Tim Salimans, Alexey Gritsenko, William Chan, Mohammad Norouzi, and David J Fleet. Video diffusion models. _arXiv preprint arXiv:2204.03458_, 2022.
* [24] Lianghua Huang, Di Chen, Yu Liu, Yujun Shen, Deli Zhao, and Jingren Zhou. Composer: Creative and controllable image synthesis with composable conditions. _arXiv preprint arXiv:2302.09778_, 2023.
* [25] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In _International Conference on Machine Learning_, pages 448-456. PMLR, 2015.
* [26] Kevin Kilgour, Mauricio Zuluaga, Dominik Roblek, and Matthew Sharifi. Fréchet audio distance: A metric for evaluating music enhancement algorithms. _arXiv preprint arXiv:1812.08466_, 2018.
* [27] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [28] Diederik P Kingma and Max Welling. Auto-encoding variational Bayes. _arXiv preprint arXiv:1312.6114_, 2013.
* [29] Remi Lam, Alvaro Sanchez-Gonzalez, Matthew Willson, Peter Wirnsberger, Meire Fortunato, Alexander Pritzel, Suman Ravuri, Timo Ewalds, Ferran Alet, Zach Eaton-Rosen, et al. GraphCast: Learning skillful medium-range global weather forecasting. _arXiv preprint arXiv:2212.12794_, 2022.
* [30] Jussi Leinonen, Ulrich Hamann, Daniele Nerini, Urs Germann, and Gabriele Franch. Latent diffusion models for generative precipitation nowcasting with accurate uncertainty quantification. _arXiv preprint arXiv:2304.12891_, 2023.
* [31] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. _arXiv preprint arXiv:1711.05101_, 2017.
* [32] Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, and Tieniu Tan. VideoFusion: Decomposed diffusion models for high-quality video generation. _arXiv e-prints_, pages arXiv-2303, 2023.
* [33] Francois Maze and Faez Ahmed. TopoDiff: A performance and constraint-guided diffusion model for topology optimization. _arXiv preprint arXiv:2208.09591_, 2022.
* [34] Lu Mi, Hao Wang, Yonglong Tian, and Nir Shavit. Training-free uncertainty estimation for neural networks. In _AAAI_, 2022.
* [35] Mauri Valtonen and Hannu Karttunen. _The three-body problem_. Cambridge University Press, 2006.
* [36] Haomiao Ni, Changhao Shi, Kai Li, Sharon X. Huang, and Martin Renqiang Min. Conditional image-to-video generation with latent flow diffusion models, 2023.
* [37] Jaideep Pathak, Shashank Subramanian, Peter Harrington, Sanjeev Raja, Ashesh Chattopadhyay, Morteza Mardani, Thorsten Kurth, David Hall, Zongyi Li, Kamyar Azizzadenesheli, et al. FourCastNet: A global data-driven high-resolution weather model using adaptive Fourier neural operators. _arXiv preprint arXiv:2202.11214_, 2022.
* [38] Kristina Preuer, Philipp Renz, Thomas Unterthiner, Sepp Hochreiter, and Gunter Klambauer. Fréchet ChemNet distance: A metric for generative models for molecules in drug discovery. _Journal of Chemical Information and Modeling_, 58(9):1736-1741, 2018.
* [39] Ruslan Rakhimov, Denis Volkhonskiy, Alexey Artemov, Denis Zorin, and Evgeny Burnaev. Latent video transformer. _arXiv preprint arXiv:2006.10704_, 2020.
* [40] Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen. Hierarchical text-conditional image generation with CLIP latents. _arXiv preprint arXiv:2204.06125_, 2022.
* [41] Suman Ravuri, Karel Lenc, Matthew Willson, Dmitry Kangin, Remi Lam, Piotr Mirowski, Megan Fitzsimons, Maria Athanassiadou, Sheleem Kashem, Sam Madge, et al. Skilful precipitation nowcasting using deep generative models of radar. _Nature_, 597(7878):672-677, 2021.
* [42] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Bjorn Ommer. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 10684-10695, 2022.
* [43] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 22500-22510, 2023.
* [44] Nadim Saad, Gaurav Gupta, Shima Alizadeh, and Danielle C. Maddix. Guiding continuous operator learning through physics-based boundary constraints. In _Proceedings of the \(11^{th}\) International Conference on Learning Representations_, 2023.
* [45] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. _arXiv preprint arXiv:2205.11487_, 2022.
* [46] Tim Salimans, Andrej Karpathy, Xi Chen, and Diederik P Kingma. PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications. _arXiv preprint arXiv:1701.05517_, 2017.
* [47] Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. In _NeurIPS_, volume 28, 2015.
* [48] Xingjian Shi, Zhihan Gao, Leonard Lausen, Hao Wang, Dit-Yan Yeung, Wai-kin Wong, and Wang-chun Woo. Deep learning for precipitation nowcasting: A benchmark and a new model. In _NeurIPS_, volume 30, 2017.
* [49] Casper Kaae Sonderby, Lasse Espeholt, Jonathan Heek, Mostafa Dehghani, Avital Oliver, Tim Salimans, Shreya Agrawal, Jason Hickey, and Nal Kalchbrenner. MetNet: A neural weather model for precipitation forecasting. _arXiv preprint arXiv:2003.12140_, 2020.
* [50] Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudinov. Unsupervised learning of video representations using LSTMs. In _ICML_, pages 843-852. PMLR, 2015.
* [51] Thomas Unterthiner, Sjoerd van Steenkiste, Karol Kurach, Raphael Marinier, Marcin Michalski, and Sylvain Gelly. FVD: A new metric for video generation. In _DGS@ICLR_, 2019.
* [52] Arash Vahdat, Karsten Kreis, and Jan Kautz. Score-based generative modeling in latent space. In _Neural Information Processing Systems (NeurIPS)_, 2021.
* [53] Aaron Van den Oord, Nal Kalchbrenner, Lasse Espeholt, Oriol Vinyals, Alex Graves, et al. Conditional image generation with PixelCNN decoders. _Advances in Neural Information Processing Systems_, 29, 2016.
* [54] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In _NeurIPS_, volume 30, 2017.
* [55] Mark Veillette, Siddharth Samsi, and Chris Mattioli. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. _Advances in Neural Information Processing Systems_, 33:22009-22019, 2020.
* [56] Vikram Voleti, Alexia Jolicoeur-Martineau, and Christopher Pal. Masked conditional video diffusion for prediction, generation, and interpolation. _arXiv preprint arXiv:2205.09853_, 2022.
* [57] Hao Wang, Xingjian Shi, and Dit-Yan Yeung. Natural-parameter networks: A class of probabilistic neural networks. In _NIPS_, pages 118-126, 2016.
* [58] Hao Wang and Dit-Yan Yeung. Towards Bayesian deep learning: A framework and some existing methods. _TKDE_, 28(12):3395-3408, 2016.
* [59] Hao Wang and Dit-Yan Yeung. A survey on Bayesian deep learning. _CSUR_, 53(5):1-37, 2020.
* [60] Yunbo Wang, Lu Jiang, Ming-Hsuan Yang, Li-Jia Li, Mingsheng Long, and Li Fei-Fei. Eidetic 3D LSTM: A model for video prediction and beyond. In _International Conference on Learning Representations_, 2018.
* [61] Yunbo Wang, Haixu Wu, Jianjin Zhang, Zhifeng Gao, Jianmin Wang, Philip Yu, and Mingsheng Long. PredRNN: A recurrent neural network for spatiotemporal predictive learning. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2022.
* [62] Ziyan Wang and Hao Wang. Variational imbalanced regression: Fair uncertainty quantification via probabilistic smoothing. In _NeurIPS_, 2023.
* [63] Dirk Weissenborn, Oscar Tackstrom, and Jakob Uszkoreit. Scaling autoregressive video models. In _International Conference on Learning Representations_, 2019.
* [64] Yuxin Wu and Kaiming He. Group normalization. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 3-19, 2018.
* [65] Wilson Yan, Yunzhi Zhang, Pieter Abbeel, and Aravind Srinivas. VideoGPT: Video generation using VQ-VAE and transformers. _arXiv preprint arXiv:2104.10157_, 2021.
* [66] Sihyun Yu, Kihyuk Sohn, Subin Kim, and Jinwoo Shin. Video probabilistic diffusion models in projected latent space. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 2023.
* [67] Ye Yuan, Jiaming Song, Umar Iqbal, Arash Vahdat, and Jan Kautz. PhysDiff: Physics-guided human motion diffusion model. _arXiv preprint arXiv:2212.02500_, 2022.
* [68] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. _arXiv preprint arXiv:2302.05543_, 2023.
* [69] Lu Zhou and Rong-Hua Zhang. A self-attention-based neural network for three-dimensional multivariate modeling and its skillful ENSO predictions. _Science Advances_, 9(10):eadf2827, 2023.

Related Work

Deep learning for precipitation nowcasting. In recent years, the field of DL has experienced remarkable advancements, revolutionizing various domains of study, including Earth science. One area where DL has made particularly significant strides is Earth system forecasting, especially precipitation nowcasting. Precipitation nowcasting benefits from the success of DL architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and Transformers, which have demonstrated their effectiveness in handling spatiotemporal tensors, the typical formulation of Earth system observation data. ConvLSTM [47], a pioneering approach in DL for precipitation nowcasting, combines the strengths of CNNs and LSTMs in processing spatial and temporal data. PredRNN [61] builds upon ConvLSTM by incorporating a spatiotemporal memory flow structure.
E3D-LSTM [60] integrates 3D CNNs into LSTM to enhance long-term high-level relation modeling. PhyDNet [11] incorporates partial differential equation (PDE) constraints in the latent space. MetNet [49] and its successor MetNet-2 [5] propose architectures based on ConvLSTM and dilated CNNs, enabling skillful precipitation forecasts up to twelve hours ahead. DGMR [41] takes an adversarial training approach to generate sharp and accurate nowcasts, addressing the issue of blurry predictions. In addition to precipitation nowcasting, there has been a surge in the modeling of global weather and medium-range weather forecasting due to the availability of extensive Earth observation data, such as the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA5 [19] dataset. Several DL-based models have emerged in this area. FourCastNet [37] proposes an architecture with Adaptive Fourier Neural Operators (AFNO) [12] as building blocks for autoregressive weather forecasting. FengWu [3] introduces a multi-modal Transformer-based global medium-range weather forecast model that achieves skillful forecasts up to ten days ahead. GraphCast [29] combines graph neural networks with convolutional LSTMs to tackle sub-seasonal forecasting tasks, representing weather phenomena as spatiotemporal graphs. Pangu-Weather [2] proposes a 3D Transformer model with Earth-specific priors and a hierarchical temporal aggregation strategy for medium-range global weather forecasting.

While recent years have seen remarkable progress in DL for precipitation nowcasting, existing methods still face some limitations. Some methods are deterministic, failing to capture uncertainty and resulting in blurry generations. Others lack the capability of incorporating prior knowledge, which is crucial for machine learning for science. In contrast, PreDiff captures the uncertainty in the underlying data distribution via diffusion models, avoiding simply averaging all possibilities into blurry forecasts, and our knowledge alignment mechanism facilitates post-training alignment with physical principles and domain-specific prior knowledge.

Diffusion models. Diffusion models (DMs) [22] are a class of generative models that have become increasingly popular in recent years. DMs learn the data distribution by constructing a forward process that gradually adds noise to the data and then approximating the reverse process that removes the noise. Latent diffusion models (LDMs) [42] are a variant of DMs that are trained on the latent representations produced by a variational autoencoder. LDMs have been shown to be more efficient in both training and inference than the original DMs. Building on the success of DMs in image generation, DMs have also been adopted for video generation. MCVD [56] trains a DM by randomly masking past and/or future frames in blocks and conditioning on the remaining frames; it generates long videos by autoregressively sampling blocks of frames in a sliding-window manner. PVDM [66] projects videos into a low-dimensional latent space as 2D vectors and presents a joint training of unconditional and frame-conditional video generation. LFDM [36] employs a flow predictor to estimate latent flows between video frames and learns a DM for temporal latent flow generation. VideoFusion [32] decomposes the transition noise in DMs into per-frame noise and noise along the time axis, and trains two networks jointly to match the noise decomposition.
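To make the forward/reverse picture above concrete, the following is a minimal PyTorch sketch of the standard \(\epsilon\)-prediction DDPM training step [22]. It is purely illustrative: the linear schedule values and the `denoiser(z_t, t)` interface are assumptions, not code from any of the cited works.

```python
# Minimal sketch of DDPM training on latents (assumed interfaces, not
# the released code of any cited method).
import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)            # linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative product \bar{alpha}_t

def diffusion_loss(denoiser, z0):
    """One epsilon-prediction training step on clean latents z0."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    ab = alphas_bar[t].view(-1, *([1] * (z0.dim() - 1)))
    zt = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps  # forward process q(z_t | z_0)
    return torch.nn.functional.mse_loss(denoiser(zt, t), eps)
```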
While DMs have demonstrated impressive performance in video synthesis, their applications to precipitation nowcasting and other Earth science tasks have not been well explored. Hatanaka et al. [16] use DMs to super-resolve coarse numerical predictions for solar forecasting. Concurrently to our work, LDCast [30] applies LDMs to precipitation nowcasting. However, LDCast does not study how to integrate prior knowledge into the DM, which is a unique advantage and novelty of PreDiff.

Conditional controls on diffusion models. Another key advantage of DMs is the ability to condition generation on text, class labels, and other modalities for controllable and diverse output. For instance, ControlNet [68] enables fine-tuning a pretrained DM by freezing the base model and training a copy end-to-end with conditional inputs. Composer [24] decomposes images into representative factors that are used as conditions to guide the generation. Beyond text and class labels, conditions in other modalities, including physical constraints, can also be leveraged to provide valuable guidance. TopoDiff [33] constrains topology optimization using loads, boundary conditions, and volume fractions. PhysDiff [67] trains a physics-based motion projection module with reinforcement learning to project denoised motions in diffusion steps into physically plausible ones. Nonetheless, while conditional control has proven to be a powerful technique in various domains, its application in DL for precipitation nowcasting remains an unexplored area.

Implementation Details

All experiments are conducted on machines with NVIDIA A10G GPUs (24 GB memory). All models, including PreDiff, the knowledge alignment networks, and the baselines, fit on a single GPU without the need for gradient checkpointing or model parallelization.

### PreDiff

Frame-wise autoencoder. We follow [7, 42] to build frame-wise VAEs (not VQVAEs) and train them adversarially from scratch on \(N\)-body MNIST and SEVIR frames. As shown in Sec. 2.2, on the \(N\)-body MNIST dataset, the spatial downsampling ratio is \(4\times 4\): a frame \(x^{j}\in\mathbb{R}^{64\times 64\times 1}\) is encoded into \(z^{j}\in\mathbb{R}^{16\times 16\times 3}\) by parameterizing \(p_{\mathcal{E}}(z^{j}|x^{j})=\mathcal{N}(\mu_{\mathcal{E}}(x^{j}),\sigma_{\mathcal{E}}(x^{j}))\). On the SEVIR dataset, the spatial downsampling ratio is \(8\times 8\): a frame \(x^{j}\in\mathbb{R}^{128\times 128\times 1}\) is encoded into \(z^{j}\in\mathbb{R}^{16\times 16\times 4}\) similarly. The detailed configurations of the encoder and decoder of the VAE on \(N\)-body MNIST are shown in Table 3 and Table 4. The detailed configurations of the encoder and decoder of the VAE on SEVIR are shown in Table 5 and Table 6. The discriminators for adversarial training on the \(N\)-body MNIST and SEVIR datasets share the same configurations, which are shown in Table 7.

Latent diffusion model that instantiates \(p_{\theta}(z_{t-1}|z_{t},z_{\text{cond}})\). Stemming from Earthformer [8], we build _Earthformer-UNet_, a hierarchical UNet with self cuboid attention [8] layers as basic building blocks, as shown in Fig. 5. On \(N\)-body MNIST, it takes the concatenation along the temporal dimension (the sequence length axis) of \(z_{\text{cond}}\in\mathbb{R}^{10\times 16\times 16\times 3}\) and \(z_{t}\in\mathbb{R}^{10\times 16\times 16\times 3}\) as input, and outputs \(z_{t-1}\in\mathbb{R}^{10\times 16\times 16\times 3}\).
On SEVIR, it takes the concatenation along the temporal dimension (the sequence length axis) of \(z_{\text{cond}}\in\mathbb{R}^{7\times 16\times 16\times 4}\) and \(z_{t}\in\mathbb{R}^{6\times 16\times 16\times 4}\) as input, and outputs \(z_{t-1}\in\mathbb{R}^{6\times 16\times 16\times 4}\). In addition, we add the embedding of the denoising step \(t\) to the state in front of each cuboid attention block via an embedding layer TEmbed, following [22]. The detailed configurations of the Earthformer-UNet are described in Table 8.

Knowledge alignment networks. A knowledge alignment network parameterizes \(U_{\phi}(z_{t},t,y)\) to predict \(\mathcal{F}(\widehat{x},y)\) using the noisy latent \(z_{t}\). In practice, we build an Earthformer encoder [8] with a final pooling block as the knowledge alignment network to parameterize \(U_{\phi}(z_{t},t,z_{\text{cond}})\), which takes \(t\) and the concatenation of \(z_{\text{cond}}\) and \(z_{t}\), instead of \(t\), \(y\), and \(z_{t}\), as the inputs. We find this implementation accurate enough when \(t\) is small. The detailed configurations of the knowledge alignment network are described in Table 9.

Optimization. We train the frame-wise VAEs using the Adam optimizer [27], following [7]. We train the latent Earthformer-UNet and the knowledge alignment network using the AdamW optimizer [31], following [8]. Detailed configurations are shown in Table 10, Table 11, and Table 12 for the frame-wise VAE, the latent Earthformer-UNet, and the knowledge alignment network, respectively. We adopt data parallelism and gradient accumulation to use a larger total batch size when a single GPU can only afford a smaller micro-batch size.

Figure 5: **Earthformer-UNet architecture.** PreDiff employs an Earthformer-UNet as the backbone for parameterizing the latent diffusion model \(p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\). It takes the concatenation of the latent context \(z_{\text{cond}}\) (in the blue border) and the previous-step noisy latent future \(z_{t+1}\) (in the cyan border) along the temporal dimension (the sequence length axis) as input, and outputs \(z_{t}\). (Best viewed in color.)
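The way the knowledge alignment network steers sampling can be sketched as one guided denoising step, following the classifier-guidance-style approximation derived in the appendix below (after [4]). This is an illustrative PyTorch fragment under assumed interfaces (`unet` returning a mean and a diagonal variance, a trained `U_phi`), not the released implementation.

```python
# Hedged sketch of one knowledge-aligned denoising step: the unguided
# Gaussian transition mean is shifted by Sigma * g, where g is the
# gradient of the alignment penalty evaluated at the mean.
import torch

def aligned_step(unet, U_phi, z_next, t, z_cond, F0, lambda_F):
    mu, sigma2 = unet(z_next, t, z_cond)      # p_theta(z_t | z_{t+1}, z_cond)

    # Gradient of lambda * ||U_phi - F0|| at z_t = mu (Taylor expansion point).
    mu_ = mu.detach().requires_grad_(True)
    penalty = lambda_F * (U_phi(mu_, t, z_cond) - F0).norm()
    g = -torch.autograd.grad(penalty, mu_)[0]

    # Sample from the shifted Gaussian N(mu + Sigma * g, Sigma).
    return mu + sigma2 * g + sigma2.sqrt() * torch.randn_like(mu)
```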
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(x^{j}\\) & - & \\(64\\times 64\\) & \\(1\\) \\\\ \\hline 2D CNN & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(1\\to 128\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 32\\times 32\\) & \\(128\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(128\\to 256,256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(256\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(256\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 16\\times 16\\) & \\(256\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(256\\to 512,512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{6}{*}{Output Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\cline{1-1} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\to 6\\) \\\\ \\cline{1-1} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(6\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 3: The details of the encoder of the frame-wise VAE on \\(N\\)-body MNIST frames. It encodes an input frame \\(x^{j}\\in\\mathbb{R}^{64\\times 64\\times 1}\\) into a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 3}\\). Conv3 \\(\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function SiLU\\((x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: Attention\\((x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
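The repeated GroupNorm32/SiLU/Conv3×3 pattern in Table 3 can be summarized by a residual block like the PyTorch sketch below. The exact layer ordering within each block is ambiguous in the extracted table, so this follows the common pre-activation convention and should be read as an approximation rather than the authors' code.

```python
# Illustrative residual block matching the layer vocabulary of Tables 3-6.
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.body = nn.Sequential(
            nn.GroupNorm(32, c_in), nn.SiLU(),
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.GroupNorm(32, c_out), nn.SiLU(),
            nn.Conv2d(c_out, c_out, 3, padding=1),
        )
        # 1x1 projection on the skip path when the channel count changes.
        self.skip = nn.Conv2d(c_in, c_out, 1) if c_in != c_out else nn.Identity()

    def forward(self, x):
        return self.skip(x) + self.body(x)
```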
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(z^{j}\\) & - & \\(16\\times 16\\) & \\(3\\) \\\\ \\hline 2D CNN & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(3\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(3\\to 512\\) \\\\ \\hline \\multirow{3}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(512\\) \\\\ & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline Upsampler & Conv\\(3\\times 3\\) & \\(16\\times 16\\to 32\\times 32\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(512\\to 256,256,256\\) \\\\ & Conv\\(3\\times 3\\) & \\(32\\times 32\\) & \\(256\\) \\\\ & Conv\\(3\\times 3\\) & \\(32\\times 32\\) & \\(256\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(256\\) \\\\ \\hline Upsampler & Conv\\(3\\times 3\\) & \\(32\\times 32\\to 64\\times 64\\) & \\(256\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(256\\to 128,128,128\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ \\hline \\multirow{3}{*}{Output Block} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv\\(3\\times 3\\) & \\(64\\times 64\\) & \\(128\\to 1\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: The details of the decoder of the frame-wise VAE on \\(N\\)-body MNIST frames. It decodes a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 3}\\) back to a frame in pixel space \\(x^{j}\\in\\mathbb{R}^{64\\times 64\\times 1}\\). Conv\\(3\\times 3\\) is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function \\(\\texttt{SiLU}(x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: \\(\\texttt{Attention}(x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(x^{j}\\) & - & \\(128\\times 128\\) & \\(1\\) \\\\ \\hline 2D CNN & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(1\\to 128\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\to 64\\times 64\\) & \\(128\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(128\\to 256,256\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(256\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(256\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 32\\times 32\\) & \\(256\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(256\\to 512,512\\) \\\\ & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(512\\) \\\\ \\hline Downsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{ResNet Block \\(\\times 2\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{4}{*}{Output Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\to 8\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: The details of the encoder of the frame-wise VAE on SEVIR frames. It encodes an input frame \\(x^{j}\\in\\mathbb{R}^{128\\times 128\\times 1}\\) into a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 4}\\). \\(\\texttt{Conv3}\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. \\(\\texttt{GroupNorm32}\\) is the Group Normalization (GN) layer [64] with \\(32\\) groups. \\(\\texttt{SiLU}\\) is the Sigmoid Linear Unit activation layer [18] with function \\(\\texttt{SiLU}(x)=x\\cdot\\texttt{sigmoid}(x)\\). The \\(\\texttt{Attention}\\) is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three \\(\\texttt{Linear}\\) layers, and then does self attention operation: \\(\\texttt{Attention}(x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
\\begin{table} \\begin{tabular}{l|l|c|c} \\hline \\hline Block & Layer & Resolution & Channels \\\\ \\hline \\hline Input \\(z^{j}\\) & - & \\(16\\times 16\\) & \\(4\\) \\\\ \\hline \\multirow{2}{*}{2D CNN} & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(4\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(4\\to 512\\) \\\\ \\hline \\multirow{3}{*}{Self Attention Block} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Attention & \\(16\\times 16\\) & \\(512\\) \\\\ & Linear & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\) & \\(512\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(512\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(16\\times 16\\to 32\\times 32\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & GroupNorm32 & \\(32\\times 32\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\) & \\(512\\) \\\\ & SiLU & \\(32\\times 32\\) & \\(512\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(32\\times 32\\to 64\\times 64\\) & \\(512\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(64\\times 64\\) & \\(512\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(512\\to 256,256,256\\) \\\\ & GroupNorm32 & \\(64\\times 64\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\) & \\(256\\) \\\\ & SiLU & \\(64\\times 64\\) & \\(256\\) \\\\ \\hline Upsampler & Conv3 \\(\\times\\) 3 & \\(64\\times 64\\to 128\\times 128\\) & \\(256\\) \\\\ \\hline \\multirow{3}{*}{ResNet Block \\(\\times 3\\)} & GroupNorm32 & \\(128\\times 128\\) & \\(256\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(256\\to 128,128,128\\) \\\\ & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & Conv3 \\(\\times\\) 3 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\) \\\\ \\hline \\multirow{3}{*}{Output Block} & GroupNorm32 & \\(128\\times 128\\) & \\(128\\) \\\\ & SiLU & \\(128\\times 128\\) & \\(128\\to 1\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 6: The details of the decoder of the frame-wise VAE on SEVIR frames. It decodes a latent \\(z^{j}\\in\\mathbb{R}^{16\\times 16\\times 4}\\) back to a frame in pixel space \\(x^{j}\\in\\mathbb{R}^{128\\times 128\\times 1}\\). Conv3 \\(\\times\\) 3 is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. SiLU is the Sigmoid Linear Unit activation layer [18] with function SiLU\\((x)=x\\cdot\\texttt{sigmoid}(x)\\). The Attention is the self attention layer [54] that first maps the input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: Attention\\((x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V)\\). 
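Note that the encoders' output blocks project to \(6\) channels (Table 3) and \(8\) channels (Table 5), i.e., twice the latent channel counts of \(3\) and \(4\). This is consistent with predicting a mean and a log-variance for the Gaussian \(p_{\mathcal{E}}(z^{j}|x^{j})\); a hedged sketch of the corresponding reparameterized encoding (an inference from the tables, not confirmed code):

```python
# Sampling a latent from the frame-wise VAE encoder output: the channel
# dimension is split into mean and log-variance halves.
import torch

def encode(encoder, x):
    """x: (B, 1, 64, 64) -> z: (B, 3, 16, 16) on N-body MNIST."""
    mu, log_var = encoder(x).chunk(2, dim=1)   # 6 channels -> 2 x 3
    z = mu + (0.5 * log_var).exp() * torch.randn_like(mu)  # reparameterization
    return z, mu, log_var
```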
\\begin{table} \\begin{tabular}{l|l|c|c|c} \\hline \\hline \\multirow{2}{*}{Block} & \\multirow{2}{*}{Layer} & \\multicolumn{2}{c|}{Resolution} & \\multirow{2}{*}{Channels} \\\\ & & \\(N\\)-body MNIST & & \\\\ \\hline \\hline Input \\(x^{j}\\) & - & \\(64\\times 64\\) & \\(128\\times 128\\) & \\(1\\) \\\\ \\hline 2D CNN & Conv4 \\(\\times\\) 4 & \\(64\\times 64\\to 32\\times 32\\) & \\(128\\times 128\\to 64\\times 64\\) & \\(1\\to 64\\) \\\\ \\hline \\multirow{2}{*}{Downsampler} & LeakyReLU & \\(32\\times 32\\) & \\(64\\times 64\\) & \\(64\\) \\\\ & Conv4 \\(\\times\\) 4 & \\(32\\times 32\\to 16\\times 16\\) & \\(64\\times 64\\to 32\\times 32\\) & \\(64\\to 128\\) \\\\ & BatchNorm & \\(16\\times 16\\) & \\(32\\times 32\\) & \\(128\\) \\\\ \\hline \\multirow{2}{*}{Downsampler} & LeakyReLU & \\(16\\times 16\\) & \\(32\\times 32\\) & \\(128\\) \\\\ & Conv4 \\(\\times\\) 4 & \\(16\\times 16\\to 8\\times 8\\) & \\(32\\times 32\\to 16\\times 16\\) & \\(128\\to 256\\) \\\\ & BatchNorm & \\(8\\times 8\\) & \\(16\\times 16\\) & \\(256\\) \\\\ \\hline \\multirow{2}{*}{Downsampler} & LeakyReLU & \\(8\\times 8\\) & \\(16\\times 16\\) & \\(256\\) \\\\ & Conv4 \\(\\times\\) 4 & \\(8\\times 8\\to 7\\times 7\\) & \\(16\\times 16\\to 15\\times 15\\) & \\(256\\to 512\\) \\\\ & BatchNorm & \\(7\\times 7\\) & \\(15\\times 15\\) & \\(512\\) \\\\ \\hline \\multirow{2}{*}{Output Block} & LeakyReLU & \\(7\\times 7\\) & \\(15\\times 15\\) & \\(512\\) \\\\ & Conv4 \\(\\times\\) 4 & \\(7\\times 7\\to 6\\times 6\\) & \\(15\\times 15\\to 14\\times 14\\) & \\(1\\) \\\\ & AvgPool & \\(6\\times 6\\to 1\\) & \\(15\\times 15\\to 1\\) & \\(1\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 7: The details of the discriminator for the adversarial loss of on \\(N\\)-body MNIST and SEVIR frames. Conv4 \\(\\times\\) 4 is the 2D convolutional layer with \\(4\\times 4\\) kernel, \\(2\\times 2\\) or \\(1\\times 1\\) stride, and \\(1\\times 1\\) padding. BatchNorm is the Batch Normalization (BN) layer [25]. The negative slope in LeakyReLU is \\(0.2\\). 
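Table 7 corresponds closely to a PatchGAN-style discriminator. The sketch below reproduces the listed resolutions for a \(64\times 64\) \(N\)-body MNIST frame (64→32→16→8→7→6, then average pooling); the stride and padding per layer are inferred from the caption and are therefore assumptions.

```python
# Illustrative discriminator following the block order of Table 7:
# Conv4x4, then repeated (LeakyReLU, Conv4x4, BatchNorm) blocks.
import torch.nn as nn

def make_discriminator():
    return nn.Sequential(
        nn.Conv2d(1, 64, 4, stride=2, padding=1),                  # 64 -> 32
        nn.LeakyReLU(0.2),
        nn.Conv2d(64, 128, 4, stride=2, padding=1),                # 32 -> 16
        nn.BatchNorm2d(128),
        nn.LeakyReLU(0.2),
        nn.Conv2d(128, 256, 4, stride=2, padding=1),               # 16 -> 8
        nn.BatchNorm2d(256),
        nn.LeakyReLU(0.2),
        nn.Conv2d(256, 512, 4, stride=1, padding=1),               # 8 -> 7
        nn.BatchNorm2d(512),
        nn.LeakyReLU(0.2),
        nn.Conv2d(512, 1, 4, stride=1, padding=1),                 # 7 -> 6
        nn.AdaptiveAvgPool2d(1),                                   # scalar score
    )
```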
\\begin{table} \\begin{tabular}{l|l|c|c|c} \\hline \\hline \\multirow{2}{*}{Block} & \\multirow{2}{*}{Layer} & \\multirow{2}{*}{Spatial Resolution} & \\multicolumn{2}{c}{Channels} \\\\ & & & \\(N\\)-body MNIST & SEVIR \\\\ \\hline \\hline Input \\([z_{\\text{cond}},z_{t}]\\) & - & \\(16\\times 16\\) & \\(3\\) & \\(4\\) \\\\ \\hline Observation Mask & ConcatMask & \\(16\\times 16\\) & \\(3\\to 4\\) & \\(4\\to 5\\) \\\\ \\hline \\multirow{9}{*}{Projector} & GroupNorm32 & \\(16\\times 16\\) & \\(4\\) & \\(5\\) \\\\ & SiLU & \\(16\\times 16\\) & \\(4\\to 256\\) & \\(5\\to 256\\) \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(4\\to 256\\) & \\(5\\to 256\\) \\\\ \\cline{1-1} & GroupNorm32 & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & SiLU & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Dropout & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline Positional Embedding & PosEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 4\\)} & TEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 8\\)} & PatchMerge & \\(16\\times 16\\to 8\\times 8\\) & \\(256\\to 1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Linear & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 8\\)} & TEmbed & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,\\text{H},1\\)) & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(1024\\) & \\\\ \\hline \\hline \\multirow{9}{*}{Upsampler} & NearestNeighborInterp & \\(8\\times 8\\to 16\\times 16\\) & \\(1024\\) & \\\\ \\cline{1-1} & Conv\\(3\\times 3\\) & \\(16\\times 16\\) & \\(1024\\to 256\\) & \\\\ \\hline \\multirow{9}{*}{Cuboid Attention Block \\(\\times 4\\)} & TEmbed & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(T,1,1\\)) & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(16\\times 16\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(16\\times 16\\) & \\(256\\to 3\\) & \\(256\\to 4\\) \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 8: The details of the Earthformer-UNet as the latent diffusion backbone on \\(N\\)-body MNIST and SEVIR datasets. 
The ConcatMask layer in the Observation Mask block concatenates one more channel to the input to indicate whether the input is the encoded observation \(z_{\text{cond}}\) or the noisy latent \(z_{t}\): \(1\) for \(z_{\text{cond}}\) and \(0\) for \(z_{t}\). Conv\(3\times 3\) is the 2D convolutional layer with \(3\times 3\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \(32\) groups; if the number of input channels is smaller than \(32\), the number of groups is set to the number of channels. SiLU is the Sigmoid Linear Unit activation layer [18] with function SiLU\((x)=x\cdot\texttt{sigmoid}(x)\). The negative slope in LeakyReLU is \(0.1\). Dropout is the dropout layer [21] with probability \(0.1\) of zeroing an element. The FFN consists of two Linear layers separated by a GeLU activation layer [18]. PosEmbed is the positional embedding layer [54] that adds learned positional embeddings to the input. TEmbed is the embedding layer [22] that embeds the denoising step \(t\). PatchMerge splits a 2D input tensor with \(C\) channels into \(N\) non-overlapping \(p\times p\) patches, merges the spatial dimensions into channels to get \(N\) \(1\times 1\) patches with \(p^{2}\cdot C\) channels, and concatenates them back along the spatial dimensions. Residual connections [17] are added from blocks in the downsampling phase to the corresponding blocks in the upsampling phase.

\begin{table} \begin{tabular}{l|l|c|c|c} \hline \hline \multirow{2}{*}{Block} & \multirow{2}{*}{Layer} & \multirow{2}{*}{Spatial Resolution} & \multicolumn{2}{c}{Channels} \\ & & & \(N\)-body MNIST & SEVIR \\ \hline \hline Input \([z_{\text{cond}},z_{t}]\) & - & \(16\times 16\) & \(3\) & \(4\) \\ \hline Observation Mask & ConcatMask & \(16\times 16\) & \(3\to 4\) & \(4\to 5\) \\ \hline \multirow{7}{*}{Projector} & GroupNorm32 & \(16\times 16\) & \(4\) & \(5\) \\ & SiLU & \(16\times 16\) & \(4\) & \(5\) \\ & Conv3 \(\times\) 3 & \(16\times 16\) & \(4\to 64\) & \(5\to 64\) \\ & GroupNorm32 & \(16\times 16\) & \(64\) & \\ & SiLU & \(16\times 16\) & \(64\) & \\ & Dropout & \(16\times 16\) & \(64\) & \\ & Conv3 \(\times\) 3 & \(16\times 16\) & \(64\) & \\ \hline Positional Embedding & PosEmbed & \(16\times 16\) & \(64\) & \\ \hline \multirow{6}{*}{Cuboid Attention Block} & TEmbed & \(16\times 16\) & \(64\) & \\ & LayerNorm & \(16\times 16\) & \(64\) & \\ & Cuboid(\(T,1,1\)) & \(16\times 16\) & \(64\) & \\ & FFN & \(16\times 16\) & \(64\) & \\ & LayerNorm & \(16\times 16\) & \(64\) & \\ & FFN & \(16\times 16\) & \(64\) & \\ \hline & PatchMerge & \(16\times 16\to 8\times 8\) & \(64\to 256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ \hline \multirow{10}{*}{Cuboid Attention Block} & TEmbed & \(8\times 8\) & \(256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\ & Cuboid(\(T,1,1\)) & \(8\times 8\) & \(256\) & \\ & FFN & \(8\times 8\) & \(256\) & \\ & LayerNorm & \(8\times 8\) & \(256\) & \\
& \\(256\\) & \\\\ \\cline{1-1} & LayerNorm & \\(8\\times 8\\) & \\(256\\) & \\\\ \\cline{1-1} & Cuboid(\\(1,1,\\text{W}\\)) & \\(8\\times 8\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(256\\) & \\\\ \\cline{1-1} & \\multicolumn{1}{c}{} & & \\multicolumn{1}{c}{} & \\multicolumn{1}{c}{} \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(256\\) & \\\\ \\cline{1-1} & FFN & \\(8\\times 8\\) & \\(256\\) & \\\\ \\hline \\multirow{7}{*}{Output Pooling Block} & GroupNorm32 & \\(8\\times 8\\) & \\(256\\) & \\\\ \\cline{1-1} & Attention & \\(8\\times 8\\to 1\\) & \\(256\\) & \\\\ \\cline{1-1} & Linear & \\(1\\) & \\(256\\to 1\\) & \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 9: The details of the Earthformer encoders for the parameterization of the knowledge alignment networks \\(U_{\\phi}(z_{t},t,z_{\\text{cond}})\\) on \\(N\\)-body MNIST and SEVIR datasets. The ConcatMask layer for the Observation Mask block concatenates one more channel to the input to indicates whether the input is the encoded observation \\(z_{\\text{cond}}\\) or the noisy latent \\(z_{t}\\). \\(1\\) for \\(z_{\\text{cond}}\\) and \\(0\\) for \\(z_{t}\\). \\(\\texttt{Conv3}\\times\\text{3}\\) is the 2D convolutional layer with \\(3\\times 3\\) kernel. GroupNorm32 is the Group Normalization (GN) layer [64] with \\(32\\) groups. If the number of the input data channels is smaller than \\(32\\), then the number of groups is set to the number of channels. SiLU is the Sigmoid Linear Unit activation layer [18] with function \\(\\texttt{SiLU}(x)=x\\cdot\\texttt{sigmoid}(x)\\). The negative slope in LeakyReLU is \\(0.1\\). Dropout is the dropout layer [21] with the probability \\(0.1\\) to drop an element to be zeroed. The FFN consists of two Linear layers separated by a GeLU activation layer [18]. PosEmbed is the positional embedding layer [54] that adds learned positional embeddings to the input. TE embed is the embedding layer [22] that embeds the denoising step \\(t\\). PatchMerge splits a 2D input tensor with \\(C\\) channels into \\(N\\) non-overlapping \\(p\\times p\\) patches and merges the spatial dimensions into channels, gets \\(N\\)\\(1\\times 1\\) patches with \\(p^{2}\\cdot C\\) channels and concatenates them back along spatial dimensions. Residual connections [17] are added from blocks in the downsampling phase to corresponding blocks in the upsampling phase. The Attention is the self attention layer [54] with an extra “cls” token for information aggregation. It first flattens the input and concatenates it with the “cls” token. Then it maps the concatenated input to queries \\(Q\\), keys \\(K\\) and values \\(V\\) by three Linear layers, and then does self attention operation: \\(\\texttt{Attention}(x)=\\texttt{Softmax}(QK^{T}/\\sqrt{C})V\\). Finally, the value of the “cls” token after self attention operation serves as the layer’s output. 
\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of VAE & Value \\ \hline Learning rate & \(4.5\times 10^{-6}\) \\ \(\beta_{1}\) & \(0.5\) \\ \(\beta_{2}\) & \(0.9\) \\ Weight decay & \(10^{-2}\) \\ Batch size & \(512\) \\ Training epochs & \(200\) \\ \hline \hline Hyper-parameter of discriminator & Value \\ \hline Learning rate & \(4.5\times 10^{-6}\) \\ \(\beta_{1}\) & \(0.5\) \\ \(\beta_{2}\) & \(0.9\) \\ Weight decay & \(10^{-2}\) \\ Batch size & \(512\) \\ Training epochs & \(200\) \\ Training start step & \(50000\) \\ \hline \hline \end{tabular} \end{table} Table 10: Hyperparameters of the Adam optimizer for training frame-wise VAEs and discriminators on \(N\)-body MNIST and SEVIR datasets.

\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of LDM & Value \\ \hline Learning rate & \(1.0\times 10^{-3}\) \\ \(\beta_{1}\) & \(0.9\) \\ \(\beta_{2}\) & \(0.999\) \\ Weight decay & \(10^{-5}\) \\ Batch size & \(64\) \\ Training epochs & \(1000\) \\ Warm-up percentage & \(10\%\) \\ Learning rate decay & Cosine \\ \hline \hline \end{tabular} \end{table} Table 11: Hyperparameters of the AdamW optimizer for training LDMs on \(N\)-body MNIST and SEVIR datasets.

\begin{table} \begin{tabular}{l|c} \hline \hline Hyper-parameter of knowledge alignment network & Value \\ \hline Learning rate & \(1.0\times 10^{-3}\) \\ \(\beta_{1}\) & \(0.9\) \\ \(\beta_{2}\) & \(0.999\) \\ Weight decay & \(10^{-5}\) \\ Batch size & \(64\) \\ Training epochs & \(200\) \\ Warm-up percentage & \(10\%\) \\ Learning rate decay & Cosine \\ \hline \hline \end{tabular} \end{table} Table 12: Hyperparameters of the AdamW optimizer for training knowledge alignment networks on \(N\)-body MNIST and SEVIR datasets.
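The optimizer settings of Tables 11-12 (AdamW, 10% linear warmup followed by cosine decay) can be assembled as in the illustrative helper below; `model` and `total_steps` are placeholders, and the step-based schedule granularity is an assumption.

```python
# Illustrative AdamW + warmup/cosine schedule matching Tables 11-12.
import math
import torch

def make_optimizer(model, total_steps):
    opt = torch.optim.AdamW(model.parameters(), lr=1e-3,
                            betas=(0.9, 0.999), weight_decay=1e-5)
    warmup = int(0.1 * total_steps)             # 10% warm-up

    def lr_lambda(step):
        if step < warmup:
            return step / max(1, warmup)        # linear warm-up
        p = (step - warmup) / max(1, total_steps - warmup)
        return 0.5 * (1.0 + math.cos(math.pi * p))  # cosine decay

    sched = torch.optim.lr_scheduler.LambdaLR(opt, lr_lambda)
    return opt, sched
```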
### Baselines

We train the baseline algorithms following their officially released configurations and tune the learning rate, learning rate scheduler, working resolution, etc., to optimize their performance on each dataset. We list the modifications applied to the baselines for each dataset in Table 13.

\begin{table} \begin{tabular}{l|c|c} \hline \hline Model & \(N\)-body MNIST & SEVIR \\ \hline \hline UNet [55] & - & - \\ \hline \multirow{4}{*}{ConvLSTM [47]} & reverse enc-dec [48] & reverse enc-dec [48] \\ & conv\_kernels = [(7,7),(5,5),(3,3)] & conv\_kernels = [(7,7),(5,5),(3,3)] \\ & deconv\_kernels = [(6,6),(4,4),(4,4)] & deconv\_kernels = [(6,6),(4,4),(4,4)] \\ & channels = [96, 128, 256] & channels = [96, 128, 256] \\ \hline PredRNN [61] & - & - \\ \hline PhyDNet [11] & - & convcell\_hidden = [256, 256, 256, 64] \\ \hline E3D-LSTM [60] & - & - \\ \hline \multirow{4}{*}{Rainformer [1]} & downscaling\_factors = [2, 2, 2, 2] & downscaling\_factors = [4, 2, 2, 2] \\ & hidden\_dim = 32 & - \\ & heads = [4, 4, 8, 16] & - \\ & head\_dim = 8 & - \\ \hline Earthformer [8] & - & - \\ \hline \hline DGMR [41] & - & context\_steps = 7 \\ \hline \multirow{2}{*}{VideoGPT [65]} & vqvae\_n\_codes = 512 & vqvae\_downsample = [1, 4, 4] \\ & vqvae\_downsample = [1, 8, 8] & \\ \hline \multirow{3}{*}{LDM [42]} & vae: \(64\times 64\times 1\to 16\times 16\times 3\) & vae: \(128\times 128\times 1\to 16\times 16\times 4\) \\ & conv\_dim = 3 & conv\_dim = 3 \\ & model\_channels = 256 & model\_channels = 256 \\ \hline \hline \end{tabular} \end{table} Table 13: Implementation details of baseline algorithms. Modifications relative to the officially released implementations are listed for each dataset. "-" means no modification is applied. "reverse enc-dec" means adopting the reversed encoder-decoder architecture proposed in [48]. The other terms listed are hyperparameters in the officially released implementations.

Derivation of the Approximation to Knowledge Alignment Guidance

We derive the approximation to the knowledge-alignment-guided denoising transition (5), following [4]. We rewrite (5) as (8) using a normalization constant \(Z\) such that \(Z\int e^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}dz_{t}=1\):

\[p_{\theta,\phi}(z_{t}|z_{t+1},y,\mathcal{F}_{0})=p_{\theta}(z_{t}|z_{t+1},z_{\text{cond}})\cdot Ze^{-\lambda_{\mathcal{F}}\|U_{\phi}(z_{t},t,y)-\mathcal{F}_{0}(y)\|}. \tag{8}\]

In what follows, we abbreviate \(\mu_{\theta}(z_{t+1},t,z_{\text{cond}})\) as \(\mu_{\theta}\) and \(\Sigma_{\theta}(z_{t+1},t,z_{\text{cond}})\) as \(\Sigma_{\theta}\) for brevity. We use \(C_{i}\), \(i\in\{1,\ldots,7\}\), to denote constants.
\\[p_{\\theta}(z_{t}|z_{t+1},z_{\\text{cond}}) =\\mathcal{N}(\\mu_{\\theta},\\Sigma_{\\theta}), \\tag{9}\\] \\[\\log p_{\\theta}(z_{t}|z_{t+1},z_{\\text{cond}}) =-\\frac{1}{2}(z_{t}-\\mu_{\\theta})^{T}\\Sigma_{\\theta}^{-1}(z_{t}- \\mu_{\\theta})+C_{1},\\] \\[\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_ {0}(y)\\|} =-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|+C_{2},\\] By assuming that \\(\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|}\\) has low curvature compared to \\(\\Sigma_{\\theta}^{-1}\\), which is reasonable in the limit of infinite diffusion steps (\\(\\|\\Sigma_{\\theta}\\|\\to 0\\)), we can approximate it by a Taylor expansion at \\(z_{t}=\\mu_{\\theta}\\) \\[\\log Ze^{-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_ {0}(y)\\|} \\approx-\\lambda_{\\mathcal{F}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y) \\|_{z_{t}=\\mu_{\\theta}} \\tag{10}\\] \\[-(z_{t}-\\mu_{\\theta})\\lambda_{\\mathcal{F}}\ abla_{z_{t}}\\|U_{ \\phi}(z_{t},t,y)-\\mathcal{F}_{0}(y)\\|\\|_{z_{t}=\\mu_{\\theta}}\\] \\[=(z_{t}-\\mu_{\\theta})g+C_{3},\\] where \\(g=-\\lambda_{\\mathcal{F}}\ abla_{z_{t}}\\|U_{\\phi}(z_{t},t,y)-\\mathcal{F}_{0}(More Quantitative Results on SEVIR ### Quantitative Analysis of BIAS on SEVIR Similar to Critical Success Index (CSI) introduced in Sec. 3.2, BIAS \\(=\\frac{\\#\\texttt{Bias-}\\#\\texttt{F.Alarms}}{\\#\\texttt{Bias-}+\\#\\texttt{F.Alarms}}\\) is calculated by counting the \\(\\#\\texttt{Hits}\\) (truth=1, pred=1), \\(\\#\\texttt{Misses}\\) (truth=1, pred=0) and \\(\\#\\texttt{F.Alarms}\\) (truth=0, pred=1) of the predictions binarized at thresholds \\([16,74,133,160,181,219]\\). This measurement assesses the model's inclination towards either F.Alarms or Misses. The results from Table 14 demonstrate that deterministic spatiotemporal forecasting models, such as UNet [55], ConvLSTM [47], PredRNN [61], PhyDNet [11], E3D-LSTM [60], and Earthformer [8], tend to produce predictions with lower intensity. These models prioritize avoiding high-intensity predictions that have a higher chance of being incorrect due to their limited ability to handle such uncertainty effectively. On the other hand, probabilistic spatiotemporal forecasting baselines, including DGMR [41], VideoGPT [65] and LDM [42], demonstrate a more daring approach by predicting possible high-intensity signals, even if it results in lower CSI scores, as depicted in Table 2. Among these baselines, PreDiff achieves the best performance in BIAS. It consistently achieves BIAS scores closest to \\(1\\), irrespective of the chosen threshold. These results demonstrate that PreDiff has effectively learned to unbiasedly capture the distribution of intensity. ### CSI at Varying Thresholds on SEVIR We include representative deterministic methods ConvLSTM and Earthformer, and all studied probabilistic methods to compare CSI, CSI, CSI-pool14 and CSI-pool16 at varying thresholds. It is important to note that CSI tends to favor conservative predictions, especially in situations with high levels of uncertainty. To ensure a fair comparison, we calculated the CSI scores by averaging the samples for each model, while scores in other metrics are averaged over the scores of each sample. The results presented in Table 15, 16, 17 demonstrate that our PreDiff achieves competitive CSI scores and outperforms baselines in CSI scores at pooling scale \\(4\\times 4\\) and \\(16\\times 16\\), particularly at higher thresholds. 
More Qualitative Results on \(N\)-body MNIST

Fig. 6 to Fig. 13 show several sets of example predictions on the \(N\)-body MNIST test set. In each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff-KA. E.MSE denotes the average error between the total energy (the sum of kinetic and potential energy) of the predictions \(E(\widehat{x}^{j})\) and the total energy of the last context step \(E(y^{L_{n}})\).

Figure 8: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "0" in the last frame.

Figure 9: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "8" in the last frame.

Figure 10: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "4" in the last frame.

Figure 11: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "1" in the last frame.

Figure 12: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "7" in the last frame.

Figure 13: A set of example predictions on the \(N\)-body MNIST test set. The red dashed line helps the reader judge the position of the digit "7" in the last frame.

More Qualitative Results on SEVIR

Fig. 14 to Fig. 19 show several sets of example predictions on the SEVIR test set. In subfigure (a) of each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by ConvLSTM [47], Earthformer [8], VideoGPT [65], LDM [42], PreDiff, and PreDiff-KA. In subfigure (b) of each figure, the visualizations from top to bottom are the context sequence \(y\), the target sequence \(x\), and predictions by PreDiff-KA with anticipated average future intensity \(\mu_{\tau}+n\sigma_{\tau}\), \(n=4,2,0,-2,-4\).

Figure 16: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with the baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 17: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with the baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 18: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with the baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.

Figure 19: A set of example predictions on the SEVIR test set. (a) Comparison of PreDiff with the baselines. (b) Predictions by PreDiff-KA under the guidance of anticipated average intensity.
Earth system forecasting has traditionally relied on complex physical models that are computationally expensive and require significant domain expertise. In the past decade, the unprecedented increase in spatiotemporal Earth observation data has enabled data-driven forecasting models using deep learning techniques. These models have shown promise for diverse Earth system forecasting tasks. However, they either struggle with handling uncertainty or neglect domain-specific prior knowledge; as a result, they tend to suffer from averaging possible futures to blurred forecasts or generating physically implausible predictions. To address these limitations, we propose a two-stage pipeline for probabilistic spatiotemporal forecasting: 1) We develop PreDiff, a conditional latent diffusion model capable of probabilistic forecasts. 2) We incorporate an explicit knowledge alignment mechanism to align forecasts with domain-specific physical constraints. This is achieved by estimating the deviation from imposed constraints at each denoising step and adjusting the transition distribution accordingly. We conduct empirical studies on two datasets: \\(N\\)-body MNIST, a synthetic dataset with chaotic behavior, and SEVIR, a real-world precipitation nowcasting dataset. Specifically, we impose the law of conservation of energy in \\(N\\)-body MNIST and anticipated precipitation intensity in SEVIR. Experiments demonstrate the effectiveness of PreDiff in handling uncertainty, incorporating domain-specific prior knowledge, and generating forecasts that exhibit high operational utility.
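Per denoising step, the knowledge alignment mechanism described above amounts to shifting the Gaussian transition mean by \\(\\Sigma_{\\theta}g\\) (completing the square in Eqs. (9)-(10)). A minimal PyTorch sketch of one aligned step follows; `u_phi` (the knowledge estimator \\(U_{\\phi}\\)), `f0` (the constraint target \\(\\mathcal{F}_{0}\\)) and the diagonal covariance `var` are illustrative assumptions, not the paper's implementation:

```python
import torch

def aligned_transition(mu, var, t, y, u_phi, f0, lam):
    """Sample z_t ~ N(mu + var * g, var), where g is the (negative, scaled)
    gradient of the constraint deviation evaluated at the predicted mean."""
    mu_g = mu.detach().requires_grad_(True)
    deviation = lam * torch.norm(u_phi(mu_g, t, y) - f0(y))
    g = -torch.autograd.grad(deviation, mu_g)[0]
    return mu.detach() + var * g + var.sqrt() * torch.randn_like(mu)
```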
# Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites

Ngoc Long Nguyen\\({}^{1}\\) Jeremy Anger\\({}^{1,2}\\) Axel Davy\\({}^{1}\\) Pablo Arias\\({}^{1}\\) Gabriele Facciolo\\({}^{1}\\) \\({}^{1}\\) Universite Paris-Saclay, CNRS, ENS Paris-Saclay, Centre Borelli, France [https://centreborelli.github.io/HDR-DSP-SR/](https://centreborelli.github.io/HDR-DSP-SR/)

## 1 Introduction

High resolution (HR) satellite imagery is a key element in a broad range of tasks, including human activity monitoring and disaster relief. Super-resolution by computational methods has recently been adopted [7, 41] by the remote sensing industry (Planet SkySat, Satellogic Aleph-1). By leveraging high frame rate low-resolution (LR) acquisitions, low-cost constellations can compete effectively with more traditional high-cost satellites. In order to capture the full dynamic range of the scene, some satellites use exposure bracketing, resulting in sequences with varying exposures. While several works have addressed multi-image super-resolution (MISR) of single-exposure sequences, almost no previous work considers the multi-exposure case.

MISR techniques exploit the aliasing in several LR acquisitions to reconstruct an HR image. The maximum attainable resolution is capped by the spectral decay of the blur kernel resulting from the sensor's pixel integration and the camera optics, which imposes a frequency cutoff beyond which there is no usable high-frequency information. Aggregating many frames is also attractive, as it allows significant noise reduction. If the LR frames are acquired with bracketed exposures, they can be integrated into a super-resolved high dynamic range (HDR) image: long exposures have a higher signal-to-noise ratio (SNR), which helps reduce the noise in dark regions, whereas short exposures provide information in bright regions, which saturate at longer exposure times.

In this work, our goal is to perform joint super-resolution and denoising from a time series of bracketed satellite images. We focus on push-frame satellite sensors such as the SkySat constellation from Planet. We increase the resolution by a factor of two, which corresponds to the frequency cutoff of the combined optical and sensor imaging system. The SkySat satellites [41] contain a full-frame sensor capable of capturing bursts of overlapping frames: a given point on the ground is seen in several consecutive images. However, our technique is general and can be applied to other satellites, or beyond satellite imagery to consumer cameras capable of multi-exposure burst or video acquisition.

Several methods have addressed either MISR or HDR imaging from multiple exposures, but their combination has received little attention. Existing works consider an ideal setup in which frames can be aligned with an affine transform [7, 53] or a homography [55], and the number of acquisitions is large enough to render the problem an overdetermined system of equations. Such motion models are good approximations for satellite bursts, but ignore parallax [8], which can be noticeable for mountains and tall buildings. In the case of satellite imaging, push-frame cameras capable of capturing multi-exposure bursts are relatively recent, which explains why all previous works on MISR focus on the single-exposure case [7, 17, 40, 43], except for SkySat's proprietary method [41] producing the L1B product, whose details are not public. Deep learning methods currently outperform traditional model-based approaches [47].
In general, learning-based methods require large realistic datasets with ground truth to be trained, as methods trained on synthetic data [4] fail to generalize to real images [14]. One such dataset is the PROBA-V dataset [37], acquired with a satellite equipped with two cameras of different resolutions. This dataset has fostered the publication of several deep learning approaches to satellite MISR [9, 17, 39]. However, the PROBA-V dataset is not appropriate for MISR of LR image bursts acquired at a high frame rate, as the PROBA-V sequences are multi-date and present significant content and illumination changes. A promising direction is to use self-supervised learning techniques, which have been applied to video restoration tasks such as denoising and demosaicing [18, 19, 20, 51, 58], and recently to MISR [43]. These techniques exploit the temporal redundancy in videos: instead of using ground truth labels, one of the degraded frames in the input sequence is withheld from the network and used as the label. Our work builds upon _Deep Shift-and-Add_ (DSA) [43], a self-supervised deep learning method for MISR of single-exposure bursts of satellite images. The model is trained without supervision by exploiting the frame redundancy.

**Contributions.** In this work, we propose _High Dynamic Range Deep Shift-and-Pool_ (HDR-DSP), a self-supervised method for joint super-resolution and denoising of bracketed satellite imagery. The method is able to handle time series with a variable number of frames and is robust to errors in the exposure times, as the ones provided in the metadata are often inaccurate. This makes our method directly applicable to real image data (see Figure 1). This is, to the best of our knowledge, the first multi-exposure MISR method for satellite imaging, and beyond satellite imagery, it is the first approach based on deep learning. Our contributions are the following:

_Feature Shift-and-Pool._ We propose a _shift-and-pool_ module that merges features (computed by an encoder network on each input LR frame) into a HR feature map by temporal pooling using permutation-invariant statistics: average, maximum, and standard deviation. This gives a rich fused representation which yields a substantial improvement over the average alone [43], in both the single- and multi-exposure cases.

_Robustness to inaccurate exposure times via base-detail decomposition._ We propose normalizing the input frames and decomposing them into base and detail. The errors caused by inaccurate exposure times affect mainly the base, whereas the detail, which contains the aliasing required for super-resolution, can be safely processed by the network. Note that vignetting and stray light can also cause exposure issues that affect single- and multi-exposure MISR alike.

_Noise-level-aware detail encodings._ The noise present in the LR images is signal-dependent, its variance being an affine function of the intensity. To deal with such noise, we provide the un-normalized LR images to the encoder in addition to the normalized detail components. This gives the encoder information about the noise level of each pixel, necessary for an optimal fusion.

_Self-supervised loss with grid shifting._ Using random shifts of the high-resolution grid, we make the self-supervised loss of [43] translation equivariant, leading to improved results.

We validate our contributions with an ablation study on a synthetic dataset (§5.2), designed to model the main characteristics of real bracketed SkySat sequences.
Since there are no previous works on multi-exposure MISR, we compare against state-of-the-art single-exposure MISR methods, which we adapt and retrain for multi-exposure inputs (§5.3). We also introduce a dataset of 2500 multi-exposure real SkySat bursts (§5.4). The dataset only consists of noisy LR images, but we can nevertheless train our network on it, since it is self-supervised. Both on synthetic and real data, the proposed HDR-DSP method attains the best results by a significant margin, _even though it is trained without high resolution ground truth data._ The dataset is available for download on the project website.

Figure 1: Super-resolution from a real multi-exposure sequence of 10 SkySat images. Top row: original low resolution images with different exposures. Bottom row: reconstructions from five methods, including ours trained with self-supervision (right).

## 2 Related work

Most works on video and burst super-resolution focus on the single-exposure case [7, 12, 17, 34, 39, 43, 52]. The problem of super-resolution from multi-exposure sequences has received much less attention. In [53] it is modeled as an overdetermined system and solved via a non-regularized least-squares approach; an affine motion model and exact knowledge of the exposure times are assumed. The authors in [55] address the case in which the images have motion blur due to camera shake; they also assume a static scene and do not model noise. A related method for HDR imaging uses dual-exposure sensors, which interlace two exposures in even and odd columns of the image [15, 26]; this can be seen as horizontally super-resolving the video. Other works perform a related task: joint super-resolution and reverse tone-mapping [30, 31, 32]. The difference with our problem is that their input is a single-exposure LR video, and the goal is to artificially increase its dynamic range to adapt it to HDR screens.

Methods for HDR imaging from multiple exposures need to deal with the noise. Granados et al. [23] address the case of signal-dependent noise and propose a fixed-point iteration of the MLE estimator which is close to the Cramér-Rao bound [3]. In these works, the denoising comes only from the temporal fusion. In [1, 2], this is incorporated into spatio-temporal patch-based denoisers. Our work can also be related to joint burst and video denoising and demosaicing [19, 25, 56], as demosaicing can be regarded as a super-resolution problem.

## 3 Observation model

We denote by \\(\\mathfrak{I}_{t}\\) a dynamic infinite-resolution ideal scene. The camera on the satellite captures a sequence of \\(m\\) low resolution images \\(\\bar{I}_{i}^{LR}\\) with different exposures. For the \\(i\\)-th acquisition, the dynamic scene \\(\\mathfrak{I}_{t}\\) is integrated during an exposure time \\(e_{i}\\) centered at \\(t_{i}\\). Even if satellites travel at a very high speed relative to the ground, precise electro-optical image stabilization systems (with piezo-electric actuators [29, 33] or steering mirrors [46]) ensure that the observed scene \\(\\mathfrak{I}_{t}\\) is mostly constant during the exposure time (\\(\\sim\\)2 ms), which allows us to approximate the temporal integration with a product in our observation model \\[\\bar{I}_{i}^{LR}=e_{i}\\Pi_{1}\\left(\\mathfrak{I}_{t_{i}}*k\\right)+n_{i}=e_{i}\\mathcal{I}_{i}^{LR}+n_{i},
\\tag{1}\\] Here \\(k\\) is the Point Spread Function (PSF), modeling jointly the optical blur and the pixel integration, \\(\\Pi_{1}\\) is the bi-dimensional sampling operator due to the sensor array, \\(\\mathcal{I}_{i}^{LR}\\) is the clean low-resolution image corresponding to an exposure of \\(1\\) unit of time, and \\(n_{i}\\) denotes the noise. Throughout the text, calligraphic fonts \\(\\mathcal{I}_{i}\\) denote noise-free images and regular fonts \\(I_{i}\\) noisy ones. A bar \\(\\bar{I}_{i}=e_{i}I_{i}\\) indicates that the image is multiplied by its exposure time (i.e. as it is acquired by the sensor), while its absence denotes images _normalized_ to an exposure time of 1. We consider the \\(r\\)-th image \\(\\bar{I}_{r}^{LR}\\) in the time series as the _reference_, and without loss of generality we assume its exposure time to be one, \\(e_{r}=1\\). We model the noise as spatially independent, additive Gaussian noise with zero mean and signal-dependent variance \\(n_{i}(x)\\sim\\mathcal{N}(0,\\sigma^{2}(\\bar{\\mathcal{I}}_{i}^{LR}(x)))\\), where \\[\\sigma^{2}(\\bar{\\mathcal{I}}_{i}^{LR}(x))=ae_{i}\\mathcal{I}_{i}^{LR}(x)+b, \\tag{2}\\] is an approximation of the Poisson shot noise plus Gaussian readout noise [45, 21], with parameters \\(a\\) and \\(b\\). Because of the spectral decay imposed by the pixel integration and optical blur (\\(k\\)), the images \\(\\mathfrak{I}_{t_{i}}*k\\) are band-limited with a cutoff at about twice the sampling rate of the LR images for SkySat. _Our goal is to increase the resolution by a factor \\(2\\) by estimating \\(\\bar{\\mathcal{I}}_{r}^{HR}\\), a non-aliased sampling of \\(\\mathfrak{I}_{t_{r}}*k\\), from several LR observations \\(\\{\\bar{I}_{i}^{LR}\\}_{i=1}^{m}\\) with varying exposures \\(\\{e_{i}\\}_{i=1}^{m}\\)._ A sharp super-resolved image can then be recovered by partially deconvolving \\(k\\). For the method to be applicable in practice, it needs to handle time series with a variable number of frames \\(m\\), and to be robust to inaccuracies in the exposure times \\(e_{i}\\), as the exposure times in the image metadata are only a coarse approximation of the real ones.

## 4 Proposed method

Our method builds upon the DSA method for MISR introduced in [43], which can be regarded as a trainable generalization of the traditional shift-and-add (S&A) algorithms [24, 28, 22, 38, 6]. A _feature S&A_ is used to fuse feature representations produced from the LR images by an encoder network. A motion estimation network computes the optical flows between each input LR frame and the reference frame. The output of the feature S&A is a high-resolution aggregated feature map, which is then decoded by another network to produce the output image. The DSA method could be extended to multi-exposure sequences by applying it to the normalized images \\(I_{i}^{LR}=\\bar{I}_{i}^{LR}/e_{i}\\). This approach however is sub-optimal, because it neglects the fact that the normalization alters the noise variance model, and it fails if the reported exposure times are inaccurate, which is the case in practice. To better exploit multiple exposures, we propose two modifications: (1) a base-detail decomposition, which provides robustness to errors in the exposure times; (2) an encoding of the images that is made dependent on the noise variance, which allows the encoder to weight different contributions according to their signal-to-noise ratio.
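To make the observation and noise model (1)-(2) concrete, the following NumPy sketch simulates one un-normalized acquisition; the constants are the noise-curve parameters estimated from SkySat data in §5.1, and the function name is an illustrative assumption:

```python
import numpy as np

A, B = 0.119, 12.050  # noise curve parameters a, b estimated from SkySat (Sec. 5.1)

def acquire(clean_lr, e_i, rng=np.random.default_rng(0)):
    """Simulate a noisy un-normalized LR frame, Eqs. (1)-(2).
    `clean_lr` is the clean LR image normalized to unit exposure time."""
    bar_clean = e_i * clean_lr                       # exposure scaling
    sigma = np.sqrt(A * bar_clean + B)               # signal-dependent std
    return bar_clean + sigma * rng.standard_normal(clean_lr.shape)
```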
In addition, we also propose a new feature pooling fusion intended to capture a richer picture of the encoded features, leading to a substantial improvement in reconstruction quality, both for the single- and multi-exposure cases. The resulting network can be trained end-to-end with self-supervision, i.e. without requiring ground truth.

### Architecture

Figure 2 shows a diagram of our proposed architecture, which takes as input a sequence of multi-exposed LR images \\(\\{\\bar{I}_{i}^{LR}\\}_{i=1}^{m}\\) along with the corresponding exposure times \\(e_{i}\\) and produces one super-resolved image \\(\\widehat{\\mathcal{I}}_{r}^{HR}\\). The input LR images are first normalized to unit exposure time. The normalized LR images \\(\\{I_{i}^{LR}\\}_{i=1}^{m}\\) are then decomposed into base \\(\\{B_{i}^{LR}\\}\\) and detail \\(\\{D_{i}^{LR}\\}\\) components. The bases contain the low frequencies. We align and average them to reduce the low frequency noise and upsample the result using bilinear zooming to produce the HR base component. The LR detail images are fed to a shared convolutional _Encoder_ network that outputs a feature representation of each LR image. The features are then merged into a HR feature map by our _shift-and-pool_ block (FSP), which aligns the LR features on the HR grid of the reference frame and applies different pooling operations. The pooled features are then concatenated and fed to a _Decoder_ CNN module that produces the HR detail image. The final HR image is obtained by adding the HR base and detail: \\(\\widehat{\\mathcal{I}}_{r}^{HR}=\\widehat{\\mathcal{B}}_{r}^{HR}+\\widehat{\\mathcal{D}}_{r}^{HR}\\). The trainable modules of the proposed architecture (shown in red in Figure 2) are the Motion Estimator, the Encoder, and the Decoder.

**Base-Detail decomposition.** As mentioned above, normalizing a sequence of frames \\(\\bar{I}_{i}^{LR}\\) by their reported exposures \\(e_{i}\\) does not result in stable intensity levels across the sequence. This can be due to small errors in \\(e_{i}\\); however, uncorrected vignetting or stray light also contribute to the same effect, even in single-exposure imagery. The nature of the super-resolution task makes it very sensitive to these exposure fluctuations: the shift-and-add operation would merge the LR features into an incoherent high-resolution feature map, making the task of the decoder more difficult and resulting in loss of details or high-frequency artifacts (see Figure 3). Refining the initial \\(e_{i}\\) could limit this problem, but this entails its own challenges, especially if one also considers vignetting and stray light sources. Instead, in this paper we propose a more robust and simple alternative, based on a base-detail decomposition [44] of the normalized LR images defined as follows \\[B_{i}^{LR}=I_{i}^{LR}*G,\\hskip 28.452756ptD_{i}^{LR}=I_{i}^{LR}-B_{i}^{LR}, \\tag{3}\\] for \\(i=1,\\dots,m\\). Here \\(G\\) is a Gaussian kernel of standard deviation 1. We then process independently the details \\(\\{D_{i}^{LR}\\}\\) and the bases \\(\\{B_{i}^{LR}\\}\\) to produce the corresponding high resolution estimates \\(\\widehat{\\mathcal{D}}_{r}^{HR}\\) and \\(\\widehat{\\mathcal{B}}_{r}^{HR}\\). This decomposition is linear and does not affect the super-resolution, since the alias is preserved in the detail components \\(\\{D_{i}^{LR}\\}\\).
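A minimal sketch of the decomposition (3); the SciPy Gaussian filter (with standard deviation 1, as in the paper) stands in for whatever filtering the actual implementation uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail(i_lr, sigma=1.0):
    """Split a normalized LR frame into base and detail, Eq. (3)."""
    base = gaussian_filter(i_lr, sigma)  # low frequencies; absorbs exposure errors
    detail = i_lr - base                 # keeps the aliasing needed for SR
    return base, detail
```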
As the detail images span a smaller intensity range than the complete image \\(I_{i}^{LR}\\), an error \\(\\delta\\) in the exposure time results in a small deviation in the detail and a large one in the base: \\(\\delta\\,B_{i}^{LR}+\\delta\\,D_{i}^{LR}=\\delta\\,I_{i}^{LR}\\). The small error in the detail can be handled by a super-resolution method. On the other hand, the base images do not need to be super-resolved, but still need to be denoised. In this work we propose a simple processing that aligns and averages the bases and upsamples the result. To fully exploit the high signal-to-noise ratio of longer exposures, the average is weighted by the exposure times \\(e_{i}\\) \\[B^{HR}=\\mathrm{Zoom}\\left(\\frac{\\sum_{i}e_{i}\\,\\mathrm{Warp}(B_{i}^{LR})}{\\sum_{i}e_{i}}\\right). \\tag{4}\\] This weighting is an approximation of the ML estimator of Granados et al. [23] (details in the supplementary material). Base and detail decompositions have been used in super-resolution networks [31, 27] to focus the network capacity on the details. In our case, the decomposition also provides robustness to errors in the radiometric normalization.

Figure 2: Overview of our proposed multi-exposure super-resolution network architecture HDR-DSP at inference time.

**Motion Estimator.** We follow the works of [43, 49] to build a network (with the same hourglass architecture) that estimates the optical flows between the normalized LR frames \\(\\{I_{i}^{LR}\\}_{i=1}^{m}\\) and the normalized reference frame \\(I_{r}^{LR}\\) \\[F_{i\\to r}=\\textbf{MotionEst}(I_{i}^{LR},I_{r}^{LR};\\Theta_{\\textbf{M}})\\in[-R,R]^{H\\times W\\times 2}, \\tag{5}\\] where \\(\\Theta_{\\textbf{M}}\\) denotes the network parameters. A small Gaussian filter (\\(\\sigma=1\\)) is applied to the input images to reduce aliasing [54, 43]. The network is trained with a maximum motion range of \\([-R,R]^{2}\\) (with \\(R=5\\) pixels). The training was adapted to better handle the noise differences due to the multi-exposure setting (see §4.2).

**Noise-level-aware detail encodings.** The Encoder module generates relevant features \\(J_{i}^{LR}\\) for each normalized LR detail image \\(D_{i}^{LR}\\) in the sequence \\[J_{i}^{LR}=\\textbf{Encoder}(D_{i}^{LR},\\bar{I}_{i}^{LR};\\Theta_{\\textbf{E}})\\in\\mathbb{R}^{H\\times W\\times N}, \\tag{6}\\] where \\(\\Theta_{\\textbf{E}}\\) is the set of parameters of the encoder and \\(N=64\\) is the number of produced features. The network architecture is detailed in the supplementary material. The un-normalized low resolution frames \\(\\bar{I}_{i}^{LR}\\) are also fed to the encoder. This is motivated by the fact that the maximum likelihood fusion of noisy acquisitions into an (HDR) image is a weighted average, where the weights are the inverses of the noise variances [23, 3]. In the proposed architecture, the normalized details \\(D_{i}^{LR}\\) are fused to produce a high resolution detail \\(\\widehat{\\mathcal{D}}_{r}^{HR}\\). The noisy un-normalized images are unbiased estimators of an affine function of the noise variances, \\(\\sigma^{2}(\\bar{\\mathcal{I}}_{i}^{LR})/a-b/a\\), thus they provide to the encoder the information required to compute the optimal fusion weights. The resulting features \\(J_{i}^{LR}\\) are then aggregated via a set of pooling operations, without any particular handling related to the different source exposures.
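The connection between the exposure-weighted base fusion (4) and inverse-variance weighting can be made concrete with the following sketch; `warped_bases` is assumed to be the stack of already-aligned normalized bases, and the iterated weighting mirrors the scheme of Granados et al. [23] detailed in the supplementary material:

```python
import numpy as np

def fuse_bases(warped_bases, exposures):
    """Exposure-weighted average of the aligned bases, Eq. (4) (before Zoom)."""
    e = np.asarray(exposures)[:, None, None]
    return (e * warped_bases).sum(axis=0) / e.sum()

def inverse_variance_fusion(warped_bases, exposures, a, b, iters=3):
    """Iterated inverse-variance weighting in the spirit of [23]; Eq. (4)
    corresponds to the approximation w_i ~ e_i when a*e_i*y >> b."""
    e = np.asarray(exposures)[:, None, None]
    y = warped_bases.mean(axis=0)              # initial estimate
    for _ in range(iters):
        w = e**2 / (a * e * y + b)             # inverse of Var = (a e y + b) / e^2
        y = (w * warped_bases).sum(axis=0) / w.sum(axis=0)
    return y
```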
**Feature Pooling.** We propose the Feature Shift-and-Pool block (FSP), which maps the LR features onto their positions on the reference HR grid and pools them. First the features are "splatted" bilinearly onto the HR grid by the SPMC module [52]: each LR frame is upscaled by introducing zeros between samples and motion compensated following the flows \\(F_{i\\to r}\\). This operation is differentiable with respect to the intensities and the optical flows. Each splatted pixel is assigned a bilinear weight depending on the fractional part of its position in the HR grid (see [52, 43] for details). This results in a set of aligned sparse HR feature maps \\[J_{i}^{HR}=\\text{SPMC}(J_{i}^{LR},F_{i\\to r})\\in\\mathbb{R}^{sH\\times sW\\times N}, \\tag{7}\\] and the corresponding bilinear splatting weights \\(W_{i}^{HR}=\\text{SPMC}(1,F_{i\\to r})\\). The upscaling factor \\(s\\) is set to 2. As in [43], we use a weighted average pooling in the temporal direction (8). In addition, we propose computing the standard deviation and the maximum (9): \\[J_{A}^{HR} =(\\sum_{i}J_{i}^{HR})(\\sum_{i}W_{i}^{HR})^{-1}, \\tag{8}\\] \\[J_{M}^{HR} =\\max_{i}J_{i}^{HR},\\qquad J_{S}^{HR}=\\operatorname*{std}_{i}J_{i}^{HR}. \\tag{9}\\] Note that this block does not have any trainable parameters; a trainable layer may attain a similar performance at a much higher computational cost (see the supplementary material). These feature pooling operations render the architecture invariant to permutations of the input frames [5]. The key idea is that, through end-to-end training, the encoder network will learn to output features for which the pooling is meaningful. Therefore, it is essential that the pooling operation is capable of passing all the necessary information to the decoder. Indeed, average pooling captures a consensus of the features, which amounts to a temporal denoising. But in aliased image sequences, it is common to come across features that are only visible in a single frame. The purpose of the max-pooling operation is thus to preserve these unique features that would otherwise be lost in the average. The standard deviation pooling completes the picture by measuring the point-wise variability of the features. The pooled features are independent of the number of processed frames, but this information is important, as the decoder may interpret features resulting from aggregating many images differently than those resulting from just a few. For this reason, the aggregation weights \\(W^{HR}=\\sum_{i}W_{i}^{HR}\\) are also concatenated with the pooled features. As we will see in §5.2, incorporating \\(W^{HR}\\) improves the network's ability to handle a variable number of input frames.

**Decoder.** The Decoder network reconstructs the HR detail image \\(\\widehat{\\mathcal{D}}_{r}^{HR}\\) from the pooled features \\[\\widehat{\\mathcal{D}}_{r}^{HR}=\\textbf{Decoder}(J_{A}^{HR},J_{M}^{HR},J_{S}^{HR},W^{HR};\\Theta_{\\textbf{D}})\\in\\mathbb{R}^{sH\\times sW}, \\tag{10}\\] where \\(\\Theta_{\\textbf{D}}\\) denotes the set of parameters of the decoder. The architecture is detailed in the supplementary material.

Figure 3: High frequency artifacts in a reconstruction from a real SkySat sequence (using DSA [43]) with exposure time errors (left). HDR-DSP with the proposed base-detail (BD) decomposition does not present artifacts (right).
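For concreteness, the bilinear splatting at the core of the FSP block (7) can be sketched as follows; this is a NumPy illustration under the stated conventions, not the paper's implementation:

```python
import numpy as np

def spmc_splat(feat_lr, flow, s=2):
    """Bilinearly splat LR features (H, W, N) onto the s-times larger HR grid
    of the reference frame, following an LR-to-reference flow (H, W, 2).
    Returns the sparse HR features and the accumulated bilinear weights."""
    H, W, N = feat_lr.shape
    hr = np.zeros((s * H, s * W, N))
    wgt = np.zeros((s * H, s * W, 1))
    ys, xs = np.mgrid[0:H, 0:W]
    ty, tx = s * (ys + flow[..., 0]), s * (xs + flow[..., 1])  # HR positions
    y0, x0 = np.floor(ty).astype(int), np.floor(tx).astype(int)
    for dy in (0, 1):
        for dx in (0, 1):
            # Bilinear weight of each sample w.r.t. the 4 surrounding HR pixels.
            w = (1 - np.abs(ty - (y0 + dy))) * (1 - np.abs(tx - (x0 + dx)))
            yy, xx = y0 + dy, x0 + dx
            ok = (yy >= 0) & (yy < s * H) & (xx >= 0) & (xx < s * W)
            np.add.at(hr, (yy[ok], xx[ok]), w[ok, None] * feat_lr[ok])
            np.add.at(wgt, (yy[ok], xx[ok]), w[ok, None])
    return hr, wgt
```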
### Self-supervised learning

To train the HDR-DSP detail fusion network, we adapt the fully self-supervised framework of [43], which requires no ground truth HR images. During training, the LR frames are randomly selected and, for every sequence, one frame is set apart as the reference \\(I_{r}^{LR}\\). Then, all the other LR images in each sequence are registered against the reference using the **MotionEst** network, yielding the flows \\(F_{i\\to r}\\). The reference frame serves as the target for the self-supervised training, similarly to Noise2Noise [20, 35]. The procedure relies on the minimization of a reconstruction loss in the LR domain plus a motion estimation loss that ensures an accurate alignment of the frames. The losses and the proposed adaptations are detailed in the following paragraphs.

**Self-supervised SR loss.** The self-supervised loss forces the network to produce an HR detail \\(\\widehat{\\mathcal{D}}_{r}^{HR}\\) such that, when subsampled, it coincides (modulo the noise) with the withheld target detail \\(D_{r}^{LR}\\) \\[\\ell_{self}(\\widehat{\\mathcal{D}}_{r}^{HR},D_{r}^{LR})\\,=\\,\\|\\Pi_{2}(\\widehat{\\mathcal{D}}_{r}^{HR}\\,*k)\\,-\\,D_{r}^{LR}\\|_{1}, \\tag{11}\\] where \\(\\widehat{\\mathcal{D}}_{r}^{HR}=\\textbf{Net}(\\{D_{i}^{LR}\\}_{i\\neq r},\\{\\bar{I}_{i}^{LR}\\}_{i=1}^{m})\\) is the SR output, and \\(\\Pi_{2}\\) is the subsampling operator that takes one pixel out of two in each direction. As in [43], we include the convolution kernel \\(k\\) in the loss. This forces the network to produce a deconvolved HR image that, once convolved with \\(k\\) and subsampled, matches the optical blur present in \\(D_{r}^{LR}\\). During training, the LR reference is only used in the motion estimator to compute the optical flows; it is not fused into the HR result, to avoid unwanted trivial solutions [11, 18, 43]. At inference time we do use the reference, as this leads to improved results [43].

**Grid shifting.** The self-supervised loss (11) downsamples the super-resolved detail to compare it with the reference LR detail. But since the downsampling is fixed, only the sampled positions intervene in the loss, which breaks the translation equivariance of the method. To avoid this issue, during training we augment the data by adding to the estimated optical flows a random shift of \\(0.5\\epsilon\\) in each dimension (\\(\\epsilon\\in\\{0,1\\}\\)). As a result, the super-resolved image is shifted by \\(\\epsilon\\), which is easily compensated before computing the loss. This yields an improvement in PSNR of 0.2 dB.

**Motion estimation loss.** The motion estimator is trained with unsupervised learning as in [57]. The loss consists of a warping term and a regularization term. We observed that the optical flow is very sensitive to the intensity fluctuations between frames (as in our normalized LR frames \\(I_{i}^{LR}\\)), which results in imprecise alignments. To prevent this issue we compute the warping loss on the details rather than on the images, which is common in traditional optical flow [36, 49]. The loss is computed for each flow \\(F_{i\\to r}\\) estimated by the **MotionEst** module \\[\\ell_{me}(\\{F_{i\\to r}\\}_{i=1}^{m})=\\lambda_{1}TV(F_{i\\to r})+\\sum_{i}\\|\\textbf{Detail}\\left(I_{i}^{LR}-\\textbf{Pullback}(I_{r}^{LR},F_{i\\to r})\\right)\\|_{1}, \\tag{12}\\] where **Pullback** computes a bicubic warping of \\(I_{r}^{LR}\\) according to a flow, **Detail** applies a high-pass filter, TV is the finite difference discretization of the classic Total Variation regularizer [48], and \\(\\lambda_{1}=0.003\\) is a hyperparameter controlling the regularization strength.
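A PyTorch sketch of the self-supervised SR loss (11); the kernel \\(k\\) is assumed to be given as an odd-sized (1, 1, kh, kw) tensor, which is an illustrative convention:

```python
import torch
import torch.nn.functional as F

def self_supervised_loss(d_hr, d_lr_ref, k):
    """Eq. (11): blur the SR detail with k, subsample by 2 (Pi_2),
    and compare with the withheld LR reference detail (L1 norm).
    d_hr: (B, 1, 2H, 2W), d_lr_ref: (B, 1, H, W), k: (1, 1, kh, kw)."""
    blurred = F.conv2d(d_hr, k, padding=k.shape[-1] // 2)
    down = blurred[:, :, ::2, ::2]   # one pixel out of two in each direction
    return (down - d_lr_ref).abs().mean()
```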
**Training.** The self-supervised training of HDR-DSP is done in two stages. We first pretrain the motion estimator on simulated data to ensure that it produces accurate flows. Then, we train the entire system end-to-end with the pre-trained **MotionEst**, using the self-supervised loss (with \\(\\lambda_{2}=3\\)) \\[\\text{loss}=\\ell_{self}+\\lambda_{2}\\ell_{me}. \\tag{13}\\] Other training details are given in the supplementary material.

## 5 Experiments

For our experiments, we use real multi-exposure push-frame images (L1A) acquired by SkySat satellites [41]. For the quantitative evaluations we also simulated a multi-exposure and a single-exposure dataset from L1B products (products super-resolved by Planet with a factor of 1.25).

### Simulated multi-exposure dataset

The two simulated datasets were generated from 1371 crops of L1B products (1096 train, 200 test, 75 val). First, we generate the noise-free LR images normalized to an exposure time of 1. Random subpixel translations \\(\\{\\Delta_{i}\\}_{i=1}^{m}\\) are applied to the ground truth, followed by \\(\\times 2\\) subsampling \\[\\mathcal{I}_{r}^{LR} =\\Pi_{2}(\\mathcal{I}^{HR}), \\tag{14}\\] \\[\\mathcal{I}_{i}^{LR} =\\Pi_{2}(\\text{Shift}_{\\Delta_{i}}(\\mathcal{I}^{HR})),\\qquad i\\neq r\\] where \\(\\Pi_{2}\\) is the subsampling operator. The exposure times are simulated as \\(e_{i}=\\alpha^{c_{i}}\\), where \\(c_{i}\\in\\{-5,\\dots,5\\}\\) and \\(\\alpha=\\text{uniform}(1.2,1.4)\\). The noises \\(n_{i}=\\sqrt{ae_{i}\\mathcal{I}_{i}^{LR}+b}\\;\\mathcal{N}(0,1)\\) are then added to all the un-normalized frames to produce the noisy multi-exposure sequence \\(\\bar{I}_{i}^{LR}=e_{i}\\mathcal{I}_{i}^{LR}+n_{i}\\). The constants \\(a=0.119\\), \\(b=12.050\\) were estimated from real SkySat images with the Ponomarenko noise curve estimation method [45, 16]. The single-exposure dataset is generated in the same manner but with all \\(e_{i}=1\\). To simulate the exposure inaccuracies, during training and testing the \\(e_{i}\\) values are contaminated with noise within a range of 5%. We use a PSNR score in our evaluation. The SkySat L1A images have a dynamic range of 12 bits, but we observed that the peak signal is at about 3400 DN; our PSNR is therefore normalized with a peak of 3400. We denote by PSNR ME (resp. PSNR SE) the average PSNR computed on all the multi-exposure (resp. single-exposure) test sequences.

### Ablation study

We study in Table 1 the importance of the base-detail decomposition. We consider simulated multi-exposure (ME) and single-exposure (SE) sequences presenting small exposure errors that match the ones observed in real sequences. If we train HDR-DSP without the proposed base-detail decomposition (w/o BD), the performance drops noticeably, which is also visible on real sequences (Figure 3). Even when training specifically for a single-exposure setting, as with DSA [43], the performance with base-detail is superior. In addition, we can see that removing the un-normalized LR frame from the encoder inputs (w/o LR) leads to a large performance drop for both single- and multi-exposure inputs. The experiment shown in Table 2 studies the impact of using multiple feature pooling strategies: average, maximum, and standard deviation. It shows that using all three greatly improves the results: about \\(0.5\\) dB with respect to using the average alone. We also observed that not including the average among the pooling strategies yields much worse results.
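For reference, the three pooling statistics compared here can be sketched as follows, operating on the stacks of splatted features and weights produced by the FSP block; this is an illustrative sketch under the shape conventions stated in the comments:

```python
import numpy as np

def shift_and_pool(feats_hr, wgts_hr, eps=1e-8):
    """Avg-Max-Std pooling of the aligned sparse HR features, Eqs. (8)-(9),
    concatenated with the aggregation weights W^HR.
    feats_hr: (m, sH, sW, N), wgts_hr: (m, sH, sW, 1)."""
    avg = feats_hr.sum(0) / (wgts_hr.sum(0) + eps)   # weighted average, Eq. (8)
    mx = feats_hr.max(0)                             # max pooling, Eq. (9)
    sd = feats_hr.std(0)                             # std pooling, Eq. (9)
    return np.concatenate([avg, mx, sd, wgts_hr.sum(0)], axis=-1)
```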
The aggregation weight feature \\(W^{HR}\\) was added to improve the decoder's handling of sequences with a variable number of input frames. The results in Table 3 confirm the importance of providing these weights. We also compare with networks trained for a fixed number of frames (HDR-DSP 4 and 14) and observe that in this case the performance drops even when testing on those specific configurations. We conclude that the weights become useless if the training does not consider a variable number of frames. Lastly, removing the grid shifting (§4.2) from the training also reduces the PSNR ME, from 54.70 to 54.49 dB.

### Comparison with the state-of-the-art

We compare our self-supervised network on the simulated dataset against state-of-the-art MISR methods for satellite images: _DSA_ [43], _HighRes-net_ (HR-net) [17], _RAMS_ [50], and _ACT_ [7]. A weighted _Shift-and-add_ [38] with bicubic splatting adapted to multi-exposure sequences (ME S&A) serves as the baseline. HR-net and RAMS are two supervised networks designed to perform super-resolution of multi-temporal PROBA-V satellite images. In the context of push-frame satellites, we use the reference-aware versions [42] of HR-net and RAMS rather than the original approaches, as they achieve higher quality results. DSA and ACT are two state-of-the-art super-resolution methods for SkySat imagery. ACT also serves as a proxy for comparison with other interpolation-based methods from the literature [56]. We adapt these methods to multi-exposure sequences: the deep learning approaches are fed with the normalized input images, whereas for ACT we apply the same base-detail decomposition described in §4.1 and use ACT to restore the details (denoted BD-ACT). The registration step of ME S&A, BD-ACT, and RAMS is done with the inverse compositional algorithm [10, 13], which is robust to noise and brightness changes. The motion estimator of DSA is also trained with the loss on the details (§4.2). Table 4 shows a quantitative comparison of the methods over the test set in the cases of exposure time errors of 5% (as during training) and 20%. These error levels are estimated from SkySat data (exposures ranging from 0.5 to 4.5 ms); see the supplementary material for details. Note that even with exact exposure times (row 0%), vignetting or stray light effects still justify the use of the proposed base-detail decomposition. Our self-supervised network ranks first in all cases, with a gain of at least 1 dB over all other methods in the presence of exposure time errors (see Figure 4). Interestingly, the performance of most methods degrades quickly for large inaccuracies in the exposure times. Only the methods using the base-detail decomposition (BD-ACT and ours) are robust to these inaccuracies. Note that HDR-DSP has never seen errors of 20% during training.

\\begin{table} \\begin{tabular}{l c c c c} \\hline \\hline Methods (all HDR-DSP) & full & w/o BD & w/o BD (trained SE) & w/o LR \\\\ \\hline PSNR (dB) ME & **54.70** & 53.76 & 52.91 & 53.94 \\\\ PSNR (dB) SE & **54.72** & 54.16 & 54.54 & 54.16 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 1: Handling of multi-exposure sequences with the base-detail decomposition (BD) and using the un-normalized LR frames \\(\\bar{I}_{i}^{LR}\\) as an additional encoder input.

\\begin{table} \\begin{tabular}{l c c c c c c} \\hline \\hline Methods & RAMS & ME S&A & HR-net & BD-ACT & DSA & HDR-DSP \\\\ \\hline 0\\% exp. error & 52.05 & 53.33 & 54.30 & 54.24 & 55.55 & **56.00** \\\\ 5\\% exp. error & 51.84 & 52.43 & 54.22 & 54.23 & 54.99 & **55.99** \\\\ 20\\% exp. error & 49.95 & 49.19 & 53.82 & 54.20 & 54.30 & **55.90** \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 4: PSNR ME (dB) over the synthetic test set with 15 images in the case of 0%, 5% and 20% exposure time errors.

### Results on real data

The proposed self-supervised training allows training HDR-DSP on real multi-exposure sequences taken from SkySat satellites. From the L1A product of Planet SkySat, we extracted 2500 sequences (\\(128\\times 128\\) pixels), pre-registered up to an integer translation. Out of the 2500 sequences, 300 are used for testing. Each sequence contains from 4 to 15 frames. In about 75% of the sequences the exposure time varies within the sequence, and we used the exposure time information provided in the metadata. Figure 5 compares HDR-DSP against Planet L1B, DSA, and BD-ACT. The top row shows four normalized frames of the sequence, where we can notice the dependence of the noise level on the exposure time. The method used in the Planet L1B product is unknown; it super-resolves by a factor of 1.25, but contains noticeable artifacts and lacks fine details. The result from DSA exhibits a high-frequency pattern due to the imprecise exposure times. BD-ACT is able to cope with the exposure changes thanks to the base-detail decomposition, but its result is still very noisy. In contrast, HDR-DSP shows a clean and detailed reconstruction. Figure 1 also shows a multi-exposure LR sequence along with the results from ME S&A, Planet L1B, ACT, DSA and HDR-DSP. Comparing HDR-DSP with DSA, we see that the former provides a cleaner result thanks to the base-detail decomposition and the proposed improvements over the DSA architecture and training procedure, which is also observed in the synthetic experiments.

## 6 Conclusion and limitations

The proposed HDR-DSP method is able to reconstruct high-quality results from multi-exposure bursts, providing fine details, low noise, and high dynamic range. The proposed base-detail processing provides robustness to the errors in the exposure times that are common in practice. In addition, a significant performance improvement is obtained by making the image encoding dependent on the noise variance, and by using a new feature pooling designed to capture richer representations. Thanks to its fully self-supervised training, the method requires no ground truth and can thus be applied to real data. We show its effectiveness by training a model that super-resolves multi-exposure SkySat L1A acquisitions, leading to a substantial resolution gain with respect to the state-of-the-art.

**Limitations.** The context of remote sensing allows one to make additional assumptions that do not hold in more general settings: 1. the considered noise levels are away from the challenging photon-limited regime; 2. motion and occlusions are much easier to handle. In particular, the latter point should be addressed to apply this method to video or burst super-resolution. Besides, the proposed method does not handle saturation. This will be studied in future work.

Figure 4: Super-resolution from a synthetic multi-exposure sequence (5% exp. error) of 15 aliased LR images. Methods are trained on a synthetic dataset and receive as inputs the normalized ME images, except BD-ACT and HDR-DSP, which use the base-detail decomposition.

Figure 5: Super-resolution from a real multi-exposure sequence of 9 SkySat images.
The first line corresponds to 4 normalized LR images of the sequence with different exposure times. The second line shows the reconstructions by Planet (L1B), DSA, BD-ACT and our method HDR-DSP.

Acknowledgments. Work supported by a grant from Région Île-de-France. This work was performed using HPC resources from GENCI-IDRIS (grants 2022-AD011012453R1 and 2022-AD011012458R1) and from the "Mésocentre" computing center of CentraleSupélec and ENS Paris-Saclay, supported by CNRS and Région Île-de-France ([http://mesocentre.centralesupelec.fr/](http://mesocentre.centralesupelec.fr/)). We thank Planet for providing the L1A SkySat images.

## References

* [1] Cecilia Aguerrebere, Andres Almansa, Julie Delon, Yann Gousseau, and Pablo Muse. A Bayesian hyperprior approach for joint image denoising and interpolation, with an application to HDR imaging. _IEEE Transactions on Computational Imaging_, 3(4):633-646, 2017.
* [2] Cecilia Aguerrebere, Julie Delon, Yann Gousseau, and Pablo Muse. Simultaneous HDR image reconstruction and denoising for dynamic scenes. In _IEEE International Conference on Computational Photography (ICCP)_, pages 1-11. IEEE, 2013.
* [3] Cecilia Aguerrebere, Julie Delon, Yann Gousseau, and Pablo Muse. Best algorithms for HDR image generation. A study of performance bounds. _SIAM Journal on Imaging Sciences_, 7(1):1-34, 2014.
* [4] Eirikur Agustsson and Radu Timofte. NTIRE 2017 challenge on single image super-resolution: Dataset and study. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops_, pages 126-135, 2017.
* [5] Miika Aittala and Fredo Durand. Burst image deblurring using permutation invariant convolutional neural networks. In _Proceedings of the European Conference on Computer Vision (ECCV)_, pages 731-747, 2018.
* [6] Mohammad S. Alam, John G. Bognar, Russell C. Hardie, and Brian J. Yasuda. Infrared image registration and high-resolution reconstruction using multiple translationally shifted aliased video frames. _IEEE Transactions on Instrumentation and Measurement_, 49(5):915-923, 2000.
* [7] Jeremy Anger, Thibaud Ehret, Carlo de Franchis, and Gabriele Facciolo. Fast and accurate multi-frame super-resolution of satellite images. _ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences_, 5(1), 2020.
* [8] Jeremy Anger, Thibaud Ehret, and Gabriele Facciolo. Parallax estimation for push-frame satellite imagery: application to super-resolution and 3D surface modeling from SkySat products. In _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_, pages 2679-2682. IEEE, 2021.
* [9] Md Rifat Arefin, Vincent Michalski, Pierre-Luc St-Charles, Alfredo Kalaitzis, Sookyung Kim, Samira E. Kahou, and Yoshua Bengio. Multi-image super-resolution for remote sensing using deep recurrent networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, pages 206-207, 2020.
* [10] Simon Baker and Iain Matthews. Equivalence and efficiency of image alignment algorithms. In _Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. CVPR 2001_, volume 1, pages I-I. IEEE, 2001.
* [11] Joshua Batson and Loic Royer. Noise2Self: Blind denoising by self-supervision. In _International Conference on Machine Learning_, pages 524-533. PMLR, 2019.
* [12] Goutam Bhat, Martin Danelljan, Luc Van Gool, and Radu Timofte. Deep burst super-resolution. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 9209-9218, 2021.
* [13] Thibaud Briand, Gabriele Facciolo, and Javier Sanchez.
Improvements of the Inverse Compositional Algorithm for Parametric Motion Estimation. _IPOL_, 8:435-464, 2018. * [14] Jianrui Cai, Hui Zeng, Hongwei Yong, Zisheng Cao, and Lei Zhang. Toward real-world single image super-resolution: A new benchmark and a new model. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3086-3095, 2019. * [15] Ugur Cogalan, Mojtaba Bemana, Karol Myszkowski, Hans-Peter Seidel, and Tobias Ritschel. Hdr denoising and deblurring by learning spatio-temporal distortion models. _arXiv preprint arXiv:2012.12009_, 2020. * [16] Miguel Colom and Antoni Buades. Analysis and extension of the ponomarenko et al. method, estimating a noise curve from a single image. _Image Processing On Line_, 3:173-197, 2013. * [17] Michel Deudon, Alfredo Kalaitzis, Israel Goytom, Md Rifat Arefin, Zhichao Lin, Kris Sankaran, Vincent Michalski, Samira E Kahou, Julien Cornebise, and Yoshua Bengio. Highres-net: Recursive fusion for multi-frame super-resolution of satellite imagery. arxiv 2020. _arXiv preprint arXiv:2002.06460_, 2020. * [18] Valery Dewil, Jeremy Anger, Axel Davy, Thibaud Ehret, Gabriele Facciolo, and Pablo Arias. Self-supervised training for blind multi-frame video denoising. In _Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision_, pages 2724-2734, 2021. * [19] Thibaud Ehret, Axel Davy, Pablo Arias, and Gabriele Facciolo. Joint demosaicing and denoising by overfitting of bursts of raw images. In _The IEEE International Conference on Computer Vision (ICCV)_, 2019. * [20] Thibaud Ehret, Axel Davy, Jean-Michel Morel, Gabriele Facciolo, and Pablo Arias. Model-blind video denoising via frame-to-frame training. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019. * [21] Alessandro Foi, Mejdi Trimeche, Vladimir Katkovnik, and Karen Egiazarian. Practical poissonian-gaussian noise modeling and fitting for single-image raw-data. _IEEE Trans. Image Process._, 17(10):1737-1754, 2008. * [22] Andrew Fruchter and Richard Hook. Drizzle: A method for the linear reconstruction of undersampled images. _Publications of the Astronomical Society of the Pacific_, 114(792):144, 2002. * [23] Miguel Granados, Boris Ajdin, Michael Wand, Christian Theobalt, Hans-Peter Seidel, and Hendrik PA Lensch. Optimal hdr reconstruction with linear digital cameras. In _2010 IEEE computer society conference on computer vision and pattern recognition_, pages 215-222. IEEE, 2010. * [24] Thomas J. Grycewicz, Stephen A. Cota, Terrence S. Lomheim, and Linda S. Kalman. Focal plane resolution and overlapped array TDI imaging. In _Remote Sensing System Engineering_, volume 7087, page 708704. International Society for Optics and Photonics, 2008. * [25] Samuel W. Hasinoff, Dillon Sharlet, Ryan Geiss, Andrew Adams, Jonathan T. Barron, Florian Kainz, Jiawen Chen, and Marc Levoy. Burst photography for high dynamic range and low-light imaging on mobile cameras. _ACM Transactions on Graphics (ToG)_, 35(6):1-12, 2016. * [26] Felix Heide, Markus Steinberger, Yun-Ta Tsai, Mushfiqur Rouf, Dawid Pajak, Dikpal Reddy, Orazio Gallo, Jing Liu, Wolfgang Heidrich, Karen Egiazarian, et al. Flexisp: A flexible camera image processing framework. _ACM Transactions on Graphics (ToG)_, 33(6):1-13, 2014. * [27] Takashi Isobe, Xu Jia, Shuhang Gu, Songjiang Li, Shengjin Wang, and Qi Tian. Video Super-Resolution with Recurrent Structure-Detail Network. 
In _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_, 2020.
* [28] Yunwei Jia. Method and apparatus for super-resolution of images, Nov. 6 2012. US Patent 8,306,121.
* [29] Emiliano Kargieman, Gerado Gabriel Richarte, and Juan Manuel Vuletich. Imaging device for scenes in apparent motion. U.S. Patent 9813601B2, issued November 7, 2017.
* [30] Soo Ye Kim and Munchurl Kim. A multi-purpose convolutional neural network for simultaneous super-resolution and high dynamic range image reconstruction. In _Asian Conference on Computer Vision_, pages 379-394. Springer, 2018.
* [31] Soo Ye Kim, Jihyong Oh, and Munchurl Kim. Deep SR-ITM: Joint learning of super-resolution and inverse tone-mapping for 4K UHD HDR applications. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 3116-3125, 2019.
* [32] Soo Ye Kim, Jihyong Oh, and Munchurl Kim. JSI-GAN: GAN-based joint super-resolution and inverse tone-mapping with pixel-wise task-specific filters for UHD HDR video. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pages 11287-11295, 2020.
* [33] Mary Knapp, Sara Seager, Brice-Olivier Demory, Akshata Krishnamurthy, Matthew W. Smith, Christopher M. Pong, Vanessa P. Bailey, Amanda Donner, Peter Di Pasquale, Brian Campuzano, Colin Smith, Jason Luu, Alessandra Babuscia, Robert L. Bocchino, Jr., Jessica Loveland, Cody Colley, Tobias Gedenk, Tejas Kulkarni, Kyle Hughes, Mary White, Joel Krajewski, and Lorraine Fesq. Demonstrating high-precision photometry with a CubeSat: ASTERIA observations of 55 Cancri e. _The Astronomical Journal_, 160(1):23, jun 2020.
* [34] Bruno Lecouat, Jean Ponce, and Julien Mairal. Lucas-Kanade reloaded: End-to-end super-resolution from raw image bursts. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 2370-2379, 2021.
* [35] Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, and Timo Aila. Noise2Noise: Learning image restoration without clean data. In _35th International Conference on Machine Learning, ICML 2018_, 2018.
* [36] Pengpeng Liu, Michael Lyu, Irwin King, and Jia Xu. SelFlow: Self-supervised learning of optical flow. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [37] Marcus Martens, Dario Izzo, Andrej Krzic, and Daniel Cox. Super-resolution of PROBA-V images using convolutional neural networks. _Astrodynamics_, 3(4):387-402, 2019.
* [38] Maria Teresa Merino and Jorge Nunez. Super-resolution of remotely sensed images with variable-pixel linear reconstruction. _IEEE TGRS_, 45(5):1446-1457, 2007.
* [39] Andrea Bordone Molini, Diego Valsesia, Giulia Fracastoro, and Enrico Magli. DeepSUM: Deep neural network for super-resolution of unregistered multitemporal images. _IEEE Transactions on Geoscience and Remote Sensing_, 58(5):3644-3656, 2019.
* [40] In _2020 IEEE International Geoscience and Remote Sensing Symposium_, pages 609-612, 2020.
* [41] Kiran Murthy, Michael Shearn, Byron D. Smiley, Alexandra H. Chau, Josh Levine, and Dirk Robinson. SkySat-1: very high-resolution imagery from a small satellite. In _Sensors, Systems, and Next-Generation Satellites XVIII_, volume 9241, page 92411E. International Society for Optics and Photonics, 2014.
* [42] Ngoc Long Nguyen, Jeremy Anger, Axel Davy, Pablo Arias, and Gabriele Facciolo. PROBA-V-REF: Repurposing the PROBA-V Challenge for Reference-Aware Super Resolution.
In _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_, pages 3881-3884. IEEE, jul 2021. * [43] Ngoc Long Nguyen, Jeremy Anger, Axel Davy, Pablo Arias, and Gabriele Facciolo. Self-supervised multi-image super-resolution for push-frame satellite images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_, pages 1121-1131, June 2021. * [44] Joan M. Ogden, Edward H. Adelson, James R. Bergen, and Peter J. Burt. Pyramid-based computer graphics. _RCA Engineer_, 30(5):4-15, 1985. * [45] Nikolay N. Ponomarenko, Vladimir V. Lukin, M.S. Zizakhov, Arto Kaarna, and Jaakko Astola. An automatic approach to lossy compression of aviris images. In _2007 IEEE International Geoscience and Remote Sensing Symposium_, pages 472-475. IEEE, 2007. * [46] Dirk Robinson, Jonathan Dyer, Joshua Levine, Brendan Hermalyn, Ronny Votel, and Matt William Messana. Controlling a line of sight angle of an imaging platform. U.S. Patent 10432866B2, issued October 1, 2019. * [47] G Rohith and Lakshmi Sutha Kumar. Paradigm shifts in super-resolution techniques for remote sensing applications. _The Visual Computer_, 37(7):1965-2008, 2021. * [48] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. _Physica D: nonlinear phenomena_, 60(1-4):259-268, 1992. * [49] Mehdi SM Sajjadi, Raviteja Vemulapalli, and Matthew Brown. Frame-recurrent video super-resolution. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 6626-6634, 2018. * [50] Francesco Salvetti, Vittorio Mazzia, Aleem Khaliq, and Marcello Chiaberge. Multi-image super resolution of remotely sensed images using residual attention deep neural networks. _Remote Sensing_, 12(14):2207, 2020. * [51] Dev Yashpal Sheth, Sreyas Mohan, Joshua L. Vincent, Ramon Manzorro, Peter A. Crozier, Mitesh M. Khapra, Eero P. Simoncelli, and Carlos Fernandez-Granda. Unsupervised deep video denoising. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pages 1759-1768, 2021. * [52] Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 4472-4480, 2017. * [53] Yann Traonmilin and Cecilia Aguerrebere. Simultaneous high dynamic range and superresolution imaging without regularization. _SIAM Journal on Imaging Sciences_, 7(3):1624-1644, 2014. * [54] Patrick Vandewalle, Luciano Sbaiz, Joos Vandewalle, and Martin Vetterli. Super-resolution from unregistered and totally aliased signals using subspace methods. _IEEE Transactions on Signal Processing_, 2007. * [55] Subeesh Vasu, Abhijeet Shenoi, and A.N. Rajagopazan. Joint hdr and super-resolution imaging in motion blur. In _2018 25th IEEE International Conference on Image Processing (ICIP)_, pages 2885-2889. IEEE, 2018. * [56] Bartlomiej Wronski, Ignacio Garcia-Dorado, Manfred Ernst, Damien Kelly, Michael Krainin, Chia-Kai Liang, Marc Levoy, and Peyman Milanfar. Handheld multi-frame super-resolution. _ACM Transactions on Graphics (TOG)_, 38(4):1-18, 2019. * [57] Jason J. Yu, Adam W. Harley, and Konstantinos G. Derpanis. Back to basics: Unsupervised learning of optical flow via brightness constancy and motion smoothness. In _European Conference on Computer Vision_, pages 3-10. Springer, 2016. * [58] Songhyun Yu, Bumjun Park, Junwoo Park, and Jechang Jeong. Joint learning of blind video denoising and optical flow estimation. 
In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops_, pages 500-501, 2020.

# Self-Supervised Super-Resolution for Multi-Exposure Push-Frame Satellites: Supplementary material

Ngoc Long Nguyen\\({}^{1}\\) Jeremy Anger\\({}^{1,2}\\) Axel Davy\\({}^{1}\\) Pablo Arias\\({}^{1}\\) Gabriele Facciolo\\({}^{1}\\) \\({}^{1}\\) Universite Paris-Saclay, CNRS, ENS Paris-Saclay, Centre Borelli, France \\({}^{2}\\) Kayrros SAS [https://centreborelli.github.io/HDR-DSP-SR/](https://centreborelli.github.io/HDR-DSP-SR/)

## Appendix A Weighted fusion of the base components

Since the base component only contains low frequencies and cannot be super-resolved, we propose a simple pipeline consisting of _i)_ alignment of the LR base components \\(B_{i}\\) to the reference, _ii)_ temporal fusion via a weighted average to attenuate the noise, and _iii)_ upscaling using bilinear interpolation. For the temporal fusion, the weights in the weighted average are simply the exposure times: \\[B^{LR}(x)=\\frac{\\sum_{i}e_{i}\\,\\text{Warp}(B_{i}^{LR}(x))}{\\sum_{i}e_{i}}.\\] (S1) In this section we provide a justification for this choice, which is based on two approximations.

**Approximate noise model for the base.** The base results from the convolution with a Gaussian kernel \\(G\\). At pixel \\(x\\) we have \\[B_{i}^{LR}(x)=\\sum_{h}G(h)I_{i}^{LR}(x+h).\\] Assuming the signal-dependent Gaussian noise model of Eq. (2)1, we have that \\(B_{i}^{LR}(x)\\) also follows a Gaussian distribution with the following mean and variance: Footnote 1: Tables, figures and equations in the supplementary material are labeled S1, S2, etc., to differentiate them from references to the main paper. \\[\\mathbb{E}\\{B_{i}^{LR}(x)\\}=\\sum_{h}G(h)\\mathcal{I}_{i}^{LR}(x+h),\\] \\[\\mathbb{V}\\{B_{i}^{LR}(x)\\}=\\frac{a}{e_{i}}\\sum_{h}G^{2}(h)\\mathcal{I}_{i}^{LR}(x+h)+\\frac{b}{e_{i}^{2}}\\sum_{h}G^{2}(h).\\] We assume that the clean LR image \\(\\mathcal{I}_{i}^{LR}\\) varies smoothly in the filter support, and thus \\[\\mathbb{E}\\{B_{i}^{LR}(x)\\}\\approx\\mathcal{I}_{i}^{LR}(x),\\quad\\mathbb{V}\\{B_{i}^{LR}(x)\\}\\approx\\frac{\\alpha e_{i}\\mathcal{I}_{i}^{LR}(x)+\\beta}{e_{i}^{2}},\\] (S2) where \\(\\alpha=a\\sum_{h}G^{2}(h)\\) and \\(\\beta=b\\sum_{h}G^{2}(h)\\). This rough approximation allows us to use a signal-dependent Gaussian noise model like (2). The approximation is only valid in regions where the image is smooth (away from edges, textures, etc.). However, these are the regions in which we are mainly interested, since it is where the low frequency noise present in the base becomes most noticeable.

**Approximate MLE estimator for the weights.** After alignment, for a given pixel \\(x\\) we have different values acquired with varying exposure times, which we denote \\(z_{i}=\\text{Warp}(B_{i}^{LR})(x)\\) to simplify notation. We also have the corresponding clean LR base images \\(\\mathcal{B}_{i}^{LR}\\), and we assume that they coincide after alignment, i.e. \\(y=\\text{Warp}(\\mathcal{B}_{i}^{LR})(x)\\) for \\(i=1,\\dots,m\\). We would like to estimate \\(y\\) from the series of observations \\[z_{i}\\sim\\mathcal{N}\\left(y,\\sigma_{i}^{2}(y)\\right),\\quad\\sigma_{i}^{2}(y)=\\frac{\\alpha e_{i}y+\\beta}{e_{i}^{2}}.\\] This problem occurs in HDR imaging, when estimating the unknown irradiance given noisy acquisitions with varying exposure times [1, 6]. Each \\(z_{i}\\) is an unbiased estimator of \\(y\\).
**Approximate noise model for the base.** The base results from the convolution with a Gaussian kernel \(G\). At pixel \(x\) we have

\[B_{i}^{LR}(x)=\sum_{h}G(h)I_{i}^{LR}(x+h).\]

Assuming the signal-dependent Gaussian noise model of Eq. (2)1, we have that \(B_{i}^{LR}(x)\) also follows a Gaussian distribution with the following mean and variance:

Footnote 1: Tables, figures and equations in the supplementary material are labeled S1, S2, etc., to differentiate them from references to the main paper.

\[\mathbb{E}\{B_{i}^{LR}(x)\}=\sum_{h}G(h)\mathcal{I}_{i}^{LR}(x+h),\qquad\mathbb{V}\{B_{i}^{LR}(x)\}=\frac{a}{e_{i}}\sum_{h}G^{2}(h)\mathcal{I}_{i}^{LR}(x+h)+\frac{b}{e_{i}^{2}}\sum_{h}G^{2}(h).\]

We are going to assume that the clean LR image \(\mathcal{I}_{i}^{LR}\) varies smoothly in the filter support, and thus

\[\mathbb{E}\{B_{i}^{LR}(x)\}\approx\mathcal{I}_{i}^{LR}(x),\quad\mathbb{V}\{B_{i}^{LR}(x)\}\approx\frac{\alpha e_{i}\mathcal{I}_{i}^{LR}(x)+\beta}{e_{i}^{2}},\] (S2)

where \(\alpha=a\sum_{h}G^{2}(h)\) and \(\beta=b\sum_{h}G^{2}(h)\). This rough approximation allows us to use a signal-dependent Gaussian noise model like (2). The approximation is only valid in regions where the image is smooth (away from edges, textures, etc.). However, these are the regions in which we are mainly interested, since it is where the low frequency noise present in the base becomes more noticeable.

**Approximate MLE estimator for the weights.** After alignment, for a given pixel \(x\) we have different values acquired with varying exposure times, which we denote \(z_{i}=\text{Warp}(B_{i}^{LR})(x)\) to simplify notation. We also have the corresponding clean LR base images \(\mathcal{B}_{i}^{LR}\), and we assume that they coincide after alignment, i.e. \(y=\text{Warp}(\mathcal{B}_{i}^{LR})(x)\) for \(i=1,\ldots,m\). We would like to estimate \(y\) from the series of observations

\[z_{i}\sim\mathcal{N}\left(y,\sigma_{i}^{2}(y)\right),\quad\sigma_{i}^{2}(y)=\frac{\alpha e_{i}y+\beta}{e_{i}^{2}}.\]

This problem occurs in HDR imaging, when estimating the unknown irradiance given noisy acquisitions with varying exposure times [1, 6]. Each \(z_{i}\) is an unbiased estimator of \(y\). Therefore, if the variances were known, we could minimize the MSE with the following weighted average, where the weights are the inverses of the variances:

\[\hat{y}=\frac{\sum_{i}w_{i}z_{i}}{\sum_{i}w_{i}},\quad w_{i}=\frac{e_{i}^{2}}{\alpha e_{i}y+\beta}.\] (S3)

The problem is that the weights depend on the unknown \(y\). In [6], Granados et al. solve this problem with an iterative weighted average:

\[w_{i}^{0}=\frac{e_{i}^{2}}{\alpha e_{i}z_{i}+\beta},\qquad w_{i}^{k}=\frac{e_{i}^{2}}{\alpha e_{i}\hat{y}^{k}+\beta},\quad\hat{y}^{k+1}=\frac{\sum_{i}w_{i}^{k}z_{i}}{\sum_{i}w_{i}^{k}},\quad k=1,2,\ldots\]

It can be shown that this converges to the maximum likelihood estimate. In our case, we simplify expression (S3) by assuming that \(\alpha e_{i}y\gg\beta\), and therefore \(w_{i}\approx\frac{e_{i}}{\alpha y}\). Under this assumption, we obtain

\[\hat{y}=\frac{\sum_{i}e_{i}z_{i}}{\sum_{i}e_{i}}.\] (S4)

This assumption holds for brighter pixels and well exposed images [1].

## Appendix B HDR-DSP architecture

Our HDR-DSP architecture has 3 trainable modules: Motion estimator, Encoder, and Decoder. The Feature Shift-and-Pool block does not have any trainable parameters. Our motion estimator follows the work of [17]. Our encoder and decoder are inspired by the SRResNet architecture [10] and built from residual blocks (see Table S1). Convolutions of the encoder and decoder are performed using reflection padding. In total, our networks have 2,853,411 trainable parameters (Table S2).

## Appendix C Training details

We train HDR-DSP in two stages, first pretraining the motion estimator and then the end-to-end system.

**Phase 1: Pre-train the Motion Estimator.** Training the motion estimator on images obtained with different exposures is a challenging task. We first pretrain it on the simulated dataset to ensure that it produces accurate flows. We monitor the quality of the estimations by comparing with the ground truth flows, until reaching an average error of 0.05 pixel. For training the motion estimator, our first choice was to use the \(L_{1}\) distance between the reference image and the radiometrically corrected warped image. However, the quality of the estimated flows was not acceptable (with errors above 0.1 pixel). Indeed, since motion estimation relies on the photometric consistency between frames, it is very sensitive to the intensity fluctuations between frames (as is the case for our normalized LR frames \(I_{i}^{LR}\)), which results in imprecise alignments. To prevent this issue we compute the warping loss on the details rather than on the images, which is common in traditional optical flow [11, 17]. The loss is computed for each flow \(F_{i\to r}\) estimated by the **MotionEst** module:

\[\ell_{me}(\{F_{i\to r}\}_{i=1}^{m})=\sum_{i}\|\textbf{Detail}(I_{i}^{LR})-\textbf{Detail}\left(\textbf{Pullback}(I_{r}^{LR},F_{i\to r})\right)\|_{1}+\lambda_{1}TV(F_{i\to r}),\] (S5)

where **Pullback** computes a bicubic warping of \(I_{r}^{LR}\) according to a flow, **Detail** applies a high-pass filter to the images, TV is the classic finite-difference discretization of the Total Variation regularizer [16], and \(\lambda_{1}=0.003\) is a hyperparameter controlling the regularization strength. We set the batch size to 32 and use Adam [9] with the default PyTorch parameters and an initial learning rate of \(10^{-4}\) to optimize the loss.
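A minimal PyTorch sketch of this detail-domain warping loss may help fix ideas; the box-blur high-pass standing in for **Detail** and the externally supplied `pullback` warp are assumptions, not the paper's exact implementation:

```python
import torch
import torch.nn.functional as F

def detail(img, ksize=5):
    # High-pass filter: subtract a blurred (low-frequency) version of the image.
    pad = (ksize // 2,) * 4
    blur = F.avg_pool2d(F.pad(img, pad, mode='reflect'), ksize, stride=1)
    return img - blur

def tv(flow):
    # Finite-difference Total Variation of a (B, 2, H, W) flow field.
    return ((flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
            + (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean())

def motion_estimator_loss(lr_frames, ref, flows, pullback, lam1=0.003):
    # Eq. (S5): L1 distance between the details of each frame and of the
    # warped reference, plus TV regularization of each estimated flow F_{i->r}.
    loss = 0.0
    for I_i, F_ir in zip(lr_frames, flows):
        warped_ref = pullback(ref, F_ir)  # bicubic Pullback(I_r, F_{i->r})
        loss = loss + (detail(I_i) - detail(warped_ref)).abs().mean()
        loss = loss + lam1 * tv(F_ir)
    return loss
```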
The pre-training converges after 50k iterations and takes about 3 hours on one NVIDIA V100 GPU.

**Phase 2: Train the whole system end-to-end.** We then use the pretrained motion estimator and train the entire system end-to-end using the total loss:

\[\text{loss}=\ell_{self}+\lambda_{2}\ell_{me}.\] (S6)

We set \(\lambda_{2}=3\) in our experiments. Furthermore, to avoid boundary issues, the loss does not consider values at a distance below 2 pixels from the border of the frames. We train our model on LR crops of size \(64\times 64\) pixels and validate on LR images of size \(256\times 256\) pixels. During training, our network is fed with a random number of LR input images (from \(4\) to \(14\)) in each sequence. We set the batch size to \(16\) and optimize the loss using the Adam optimizer with default parameters. The learning rates are initialized to \(10^{-4}\), then scaled by 0.3 every 400 epochs. The training takes 20 h (1200 epochs) on one NVIDIA V100 GPU.

## Appendix D Trainable feature pooling alternative

The feature pooling block FSP described in Section 4.1 of the main article does not have any trainable parameters. In this section we investigate the use of a trainable layer, named PoolNet, for performing this task. We considered a simple trainable network PoolNet that performs feature pooling (Table S3) instead of the statistical feature poolings (Avg-Max-Std) presented in the paper. To this aim, PoolNet takes as input the concatenation of \(N\) features \(J_{i}^{HR}\) and \(N\) weights \(W_{i}^{HR}\) (computed by the SPMC [19] module from \(N\) LR images) and produces the fused HR features. Then, the Decoder network reconstructs the HR detail image from the fused features. A drawback of PoolNet is that it can only be applied to a fixed number of frames. Table S4 compares the performance of PoolNet (which replaces the Avg-Max-Std feature pooling) trained on 4 and 14 frames with our original method. We can see that in the case of a small number of frames, PoolNet attains a performance comparable to our HDR-DSP method using the Avg-Max-Std feature pooling. However, in the case of 14 frames, there is a gap of 0.3 dB between our method and PoolNet. It seems that it is more difficult for PoolNet to capture the necessary statistics from many features.
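For reference, the statistical Avg-Max-Std pooling that PoolNet is compared against can be sketched in a few lines of PyTorch; the `(N, C, H, W)` layout is an assumption made for illustration:

```python
import torch

def avg_max_std_pool(features: torch.Tensor) -> torch.Tensor:
    # features: (N, C, H, W) stack of N aligned HR feature maps J_i^{HR}.
    # Reducing over the frame axis with order-independent statistics is
    # permutation invariant and yields a fixed-size output for any N.
    avg = features.mean(dim=0)
    mx = features.max(dim=0).values
    std = features.std(dim=0, unbiased=False)
    return torch.cat([avg, mx, std], dim=0)  # (3C, H, W) fused features
```

Unlike PoolNet, nothing here depends on \(N\), which is why the statistical pooling handles variable-length sequences.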
## Appendix E Alternative exposure weighting strategies

As discussed in the main paper, the LRs with longer exposure times should contribute more to the reconstruction because of their higher signal-to-noise ratio. In our proposed method, we use the un-normalized LR images as additional input to the Encoder so that the Encoder perceives the noise level in each LR image. Subsequently, the Encoder can decide which features are more important. We also evaluated an alternative strategy that weights the features (WF) based on the exposure times. This simply consists in weighting the features \(J_{i}^{LR}\) by the corresponding exposure time in the SPMC module, and was inspired by the ME S&A method. This strategy leads to slightly worse yet adequate feature encodings (-0.08 dB), as shown in Table S5. Moreover, using both feature weighting and LR encoding (third column) leads to the same performance as only using LR encoding. This implies that the Encoder already encodes the necessary information about the signal-dependent noise in the features.

## Appendix F Adaptation of existing methods to multi-exposure sequences

We detail here the adaptations to the algorithms we used in the comparisons.

ME S&A. _Multi-exposure Shift-and-add_ is a weighted version of the classic shift-and-add method [5, 7, 8, 12] designed for multi-exposure sequences. Usually, S&A produces the HR image by registering the LR images onto the HR grid using the corresponding optical flows. After the registration step, the intensities of the LR images are splatted to the neighboring integer-coordinate pixels using some kernel interpolation. Finally, a pixel-wise aggregation is done to obtain the HR output image. A naive method would therefore apply the classic S&A method to the normalized LR images. However, this ignores the different signal-to-noise ratios in the normalized images and fails to greatly reduce the noise. Using the same arguments as in Sec. A, we propose the weighted S&A for multi-exposure sequences:

\[\widehat{I}^{HR}=\frac{\sum_{i=1}^{m}\textbf{Register}(\overline{I}_{i}^{LR})}{\sum_{i=1}^{m}e_{i}},\] (S7)

where **Register** maps and splats the un-normalized images \(\overline{I}_{i}^{LR}\) onto the HR grid.
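A minimal NumPy sketch of this weighted shift-and-add follows, with nearest-neighbour splatting standing in for the kernel interpolation (an assumption made for brevity):

```python
import numpy as np

def me_shift_and_add(images, flows, exposures, scale=2):
    # Weighted shift-and-add (eq. S7): register each un-normalized LR image
    # onto the HR grid and divide the accumulation by the summed exposures.
    h, w = images[0].shape
    acc = np.zeros((h * scale, w * scale))
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    for img, (fy, fx) in zip(images, flows):
        ys = np.clip(np.rint((yy + fy) * scale).astype(int), 0, h * scale - 1)
        xs = np.clip(np.rint((xx + fx) * scale).astype(int), 0, w * scale - 1)
        # Splat the raw intensities, which already scale with e_i.
        np.add.at(acc, (ys, xs), img)
    return acc / np.sum(exposures)
```

Dividing by \(\sum_{i}e_{i}\) rather than by the number of frames is what makes longer exposures contribute proportionally more, as in (S1).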
Base-detail ACT (BD ACT). ACT [2] is a traditional multi-image super-resolution method developed for Planet SkySat single-exposure sequences. It formulates the reconstruction as an inverse problem and solves it by an iterative optimization method. BD ACT extends ACT to support multi-exposure images by adopting the same base-detail strategy as proposed in HDR-DSP: the details of the images are fused by ACT, and the base is reconstructed by the upsampled average of the bases of the input images.

HighRes-net (HR-net) and RAMS. HighRes-net [4] and RAMS [18] are two super-resolution methods for multi-temporal PROBA-V satellite images. However, in the PROBA-V dataset the identity of the LR reference image is unavailable, which hinders the true potential of methods trained on this dataset. As a result we use the reference-aware super-resolution [14] variants of HighRes-net and RAMS. In HighRes-net, the reference image is used as a shared representation for all LR images. Each LR image is embedded jointly with this reference before being recursively fused. In RAMS, each LR image is aligned to the reference image before being input to the residual attention block. The registration step of RAMS is done with the inverse compositional algorithm [3], which is robust to noise and brightness changes. As HighRes-net and RAMS are supervised methods, we also use a radiometric correction on the output before computing the loss [4].

DSA. _Deep shift-and-add_ (DSA) [15] is a self-supervised method for super-resolution of push-frame single-exposure satellite images. We adapt DSA to the multi-exposure case by using the normalized LR images as input. We also use the loss on the details to train the motion estimator in DSA.

## Appendix G Execution time

Table S6 reports the execution time of the methods studied on the synthetic multi-exposure dataset. Due to its convolutional architecture, HighRes-net is the fastest. HDR-DSP is slightly more costly than DSA since it performs feature pooling instead of a simple average and requires fusing the bases together. ME S&A and BD-ACT are both executed on CPU, the latter being quite costly due to the linear spline system inversion.

## Appendix H Additional comparisons using real SkySat sequences

Figure S1 presents results obtained on real multi-exposure SkySat images using 9 frames. This is a challenging sequence as it contains moving vehicles. Note how the road markings are better seen in the HDR-DSP result. However, since HDR-DSP does not account for moving objects (the motion estimator only predicts smooth motion within a range of 5 pixels), the cars are blurry. Figure S2 shows another example of reconstruction on a real sequence of 7 SkySat images. Even though there are only 7 images in this sequence and most of them are very noisy, HDR-DSP is able to produce a clean image. The fine details are well restored.

## Appendix I Exposure error analysis

We observed a discrepancy between the exposure times reported by Planet and the correct normalization ratios. This can be explained by measurement imprecision, since the quantities are in the sub-millisecond range, or by local illumination effects such as vignetting. To estimate the correct exposure ratio for a given pair of images, we registered the images using phase correlation, masked saturated pixels, and computed the spatial median of the ratio between the two frames. We then validated visually that such exposure ratios were more precise than the reported exposure times (less flicker was observed). Figure S3 shows the relation between the reported ratio and the estimated one. We find that errors are usually in the order of a few percent, but we also observe larger errors. The nominal exposure times range from 0.4 ms to 4.5 ms. Note that the absolute error in the exposure time measurement is probably constant regardless of the exposure time. However, when computing the ratio of two exposures with errors, this might result in a large divergence of the ratio, especially if the exposure in the denominator is a short one. Note that for the proposed super-resolution method we used the imprecise, reported exposure times and not the estimated ones, as the estimation method itself can fail.

## References

* [1] Cecilia Aguerrebere, Julie Delon, Yann Gousseau, and Pablo Muse. Best algorithms for HDR image generation. A study of performance bounds. _SIAM Journal on Imaging Sciences_, 7(1):1-34, 2014.
* [2] Jeremy Anger, Thibaud Ehret, Carlo de Franchis, and Gabriele Facciolo. Fast and accurate multi-frame super-resolution of satellite images. _ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences_, 5(1), 2020.
* [3] Thibaud Briand, Gabriele Facciolo, and Javier Sanchez. Improvements of the Inverse Compositional Algorithm for Parametric Motion Estimation. _IPOL_, 8:435-464, 2018.
* [4] Michel Deudon, Alfredo Kalaitzis, Israel Goytom, Md Rifat Arefin, Zhichao Lin, Kris Sankaran, Vincent Michalski, Samira E Kahou, Julien Cornebise, and Yoshua Bengio. HighRes-net: Recursive fusion for multi-frame super-resolution of satellite imagery. _arXiv preprint arXiv:2002.06460_, 2020.
* [5] Andrew Fruchter and Richard Hook. Drizzle: A method for the linear reconstruction of undersampled images. _Publications of the Astronomical Society of the Pacific_, 114(792):144, 2002.
* [6] Miguel Granados, Boris Ajdin, Michael Wand, Christian Theobalt, Hans-Peter Seidel, and Hendrik P.A. Lensch. Optimal HDR reconstruction with linear digital cameras. In _2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition_, pages 215-222. IEEE, 2010.
* [7] Thomas J. Grycewicz, Stephen A. Cota, Terrence S. Lomheim, and Linda S. Kalman. Focal plane resolution and overlapped array TDI imaging. In _Remote Sensing System Engineering_, volume 7087, page 708704. International Society for Optics and Photonics, 2008.
* [8] Yunwei Jia. Method and apparatus for super-resolution of images. US Patent 8,306,121, Nov. 6, 2012.
* [9] Diederik P Kingma and Jimmy Ba.
Adam: A method for stochastic optimization. _arXiv preprint arXiv:1412.6980_, 2014.
* [10] Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, et al. Photo-realistic single image super-resolution using a generative adversarial network. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 4681-4690, 2017.
* [11] Pengpeng Liu, Michael Lyu, Irwin King, and Jia Xu. SelFlow: Self-supervised learning of optical flow. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2019.
* [12] Maria Teresa Merino and Jorge Nunez. Super-resolution of remotely sensed images with variable-pixel linear reconstruction. _IEEE TGRS_, 45(5):1446-1457, 2007.
* [13] Kiran Murthy, Michael Shearn, Byron D. Smiley, Alexandra H. Chau, Josh Levine, and Dirk Robinson. SkySat-1: very high-resolution imagery from a small satellite. In _Sensors, Systems, and Next-Generation Satellites XVIII_, volume 9241, page 92411E. International Society for Optics and Photonics, 2014.
* [14] Ngoc Long Nguyen, Jeremy Anger, Axel Davy, Pablo Arias, and Gabriele Facciolo. PROBA-V-REF: Repurposing the PROBA-V Challenge for Reference-Aware Super Resolution. In _2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS_, pages 3881-3884. IEEE, July 2021.
* [15] Ngoc Long Nguyen, Jeremy Anger, Axel Davy, Pablo Arias, and Gabriele Facciolo. Self-supervised multi-image super-resolution for push-frame satellite images. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_, pages 1121-1131, June 2021.
* [16] Leonid I. Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. _Physica D: Nonlinear Phenomena_, 60(1-4):259-268, 1992.
* [17] Mehdi S.M. Sajjadi, Raviteja Vemulapalli, and Matthew Brown. Frame-recurrent video super-resolution. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_, pages 6626-6634, 2018.
* [18] Francesco Salvetti, Vittorio Mazzia, Aleem Khaliq, and Marcello Chiaberge. Multi-image super resolution of remotely sensed images using residual attention deep neural networks. _Remote Sensing_, 12(14):2207, 2020.
* [19] Xin Tao, Hongyun Gao, Renjie Liao, Jue Wang, and Jiaya Jia. Detail-revealing deep video super-resolution. In _Proceedings of the IEEE International Conference on Computer Vision_, pages 4472-4480, 2017.
# Density Invariant Contrast Maximization for Neuromorphic Earth Observations

Sami Arja\({}^{*,1}\), Alexandre Marcireau\({}^{1}\), Richard L. Balthazor\({}^{2}\), Matthew G. McHarg\({}^{2}\), Saeed Afshar\({}^{1}\) and Gregory Cohen\({}^{1}\)

\({}^{1}\)Western Sydney University \({}^{2}\)United States Air Force Academy

\({}^{*}\)[email protected]

## 1 Introduction

Neuromorphic event cameras [6, 24] are biology-inspired optical sensors that offer high-speed, high-dynamic-range, and low-data-rate operation, making them extremely well suited for use in the space environment. The sensors are asynchronous and have in-pixel circuitry to produce temporal contrast change events only in response to changes in the visual scene. The change events can be represented as \(e=(\mathbf{u},p,t)\), where \(\mathbf{u}=(x,y)\) is the pixel coordinate of the event occurrence, \(p\) is the polarity of the contrast change, corresponding to whether the brightness is increasing or decreasing, and \(t\) is the timestamp of the change, with a resolution in the order of \(\mu s\).

Figure 1: Neuromorphic Earth Observations application. The Falcon Neuro project placed two event cameras on the ISS in 2021. These sensors have been used for earth observation and have captured a dataset of a variety of different locations in 2022. To produce panoramic map images from the event-based output of these sensors, a contrast maximization algorithm, such as CMax [8], is needed to compensate for the motion of the ISS. In this paper, we propose an analytical approach that creates high-contrast panoramic images with CMax [8] by making the algorithm invariant to the density of events, leveraging the physical properties of the events and their geometries. This is achieved by introducing a geometric piecewise correction function that adjusts the warped image to prevent the loss landscape from forming multiple extrema.

The asynchronous nature of the events provides numerous advantages over traditional vision sensors, such as superior dynamic range, low latency, high temporal resolution, and significantly lower power consumption. The high temporal resolution removes the effects of motion blur, and the change detection suppresses redundant information at the pixel level. These features make the sensors well suited to tackling challenging machine vision tasks such as recognition [1, 3, 23, 36, 52], tracking [1, 14, 22, 33, 34, 41, 55, 63], SLAM [11, 19, 21, 44, 56, 58], motion estimation [7, 8, 9, 25, 26, 30, 35, 37, 38, 40, 47, 48, 50, 54, 59], and space domain awareness and space imaging [1, 4, 5, 18, 42]. The lack of a frame-based output requires the development of new algorithms and systems, and can allow for modes of operation and sensing not possible with conventional imaging sensors.

The recent advances in event-based algorithms and the wide availability of high-resolution event cameras [6] have led to wide adoption in numerous real-world research applications, such as Autonomous Underwater Vehicles (AUV) [60], ground-based mobile telescopes [4, 5], Unmanned Aerial Vehicles (UAV) [32, 43, 57], ground robots and vehicles [17, 28, 31], and even in space onboard the ISS [29]. There is increasing interest in using event cameras in the space environment and further afield, including the investigation of their use in future lunar spacecraft landing tasks [30, 51] and for underground exploration using the Mars Ingenuity Helicopter [27]. These projects used either simulated events from video sequences or earth-based environments that resemble the Martian and Lunar surfaces.
In this paper, we investigate neuromorphic earth observation using event cameras mounted on the ISS. These sensors were installed in 2021 as part of a collaboration between the United States Air Force Academy and Western Sydney University through the Falcon Neuro project [29]. The Falcon Neuro payload contains two identical neuromorphic sensors, one pointed in the RAM direction and the other pointed in the NADIR direction. This work focuses on techniques to process data from the NADIR camera to produce visual maps of the earth through an analytical extension of the original CMax framework.

### Motivation

A state-of-the-art approach for motion estimation was first introduced by [8], known as CMax. It works by estimating the camera's relative motion vector \(\mathbf{\theta}=[v_{x},v_{y}]\) over a time window \(\delta=t_{i}-t_{ref}\) to align events with the edges or objects that generated them. This involves adjusting the pixel coordinates of individual events to eliminate the motion induced by the sensor or object and thereby create sharp images. Specifically, each event \(e_{i}=(\mathbf{u_{i}},p_{i},t_{i})\) is warped by a shear transformation based on a motion candidate \(\mathbf{\theta}\):

\[\mathbf{u^{\prime}_{i}}=\begin{bmatrix}x^{\prime}_{i}\\ y^{\prime}_{i}\end{bmatrix}=\begin{bmatrix}x_{i}\\ y_{i}\end{bmatrix}-\begin{bmatrix}v_{x}\\ v_{y}\end{bmatrix}\delta \tag{1}\]

This reverses the motion \(\mathbf{\theta}\) between \(t_{i}\) and the beginning of \(\delta\), changing the spatial location of \(\mathbf{u_{i}}\). The warped events are then accumulated into an image \(H\), also called the Image of Warped Events (IWE), as in (2). Each pixel in (2) sums the values of the warped events \(\mathbf{u^{\prime}_{i}}\) that fall within it:

\[H(\mathbf{u^{\prime}};\mathbf{\theta})\doteq\sum_{i=1}^{N}b_{k}\,\delta(\mathbf{u}-\mathbf{u^{\prime}_{i}}), \tag{2}\]

where \(b_{k}\) is the number of events along the trajectories, as detailed in [8]. The contrast of \(H\) is then calculated as a function of \(\mathbf{\theta}\):

\[C(\mathbf{\theta})\doteq\frac{1}{N_{p}}\sum_{j=1}^{N_{p}}(H(\mathbf{u^{\prime}_{j}};\mathbf{\theta})-\mu(\mathbf{\theta}))^{2}, \tag{3}\]

where \(N_{p}\) is the number of pixels in \(H\) and \(\mu(\mathbf{\theta})\) is the mean pixel intensity of \(H\). The strategy is to find the \(\mathbf{\theta}\) that maximises the objective function \(C(\mathbf{\theta})\): a higher \(C(\mathbf{\theta})\) indicates better event alignment and a sharp motion-corrected image \(H\).
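To make the pipeline in (1)-(3) concrete, here is a minimal NumPy sketch. For simplicity it sets \(b_{k}=1\) and accumulates only onto the original sensor grid; both are simplifying assumptions rather than the exact formulation of [8]:

```python
import numpy as np

def warp_events(xy, t, theta, t_ref=0.0):
    # Eq. (1): shift each event back along the candidate velocity theta=[vx, vy],
    # using each event's own elapsed time t_i - t_ref.
    return xy - np.outer(t - t_ref, theta)

def image_of_warped_events(xy_warped, sensor_hw):
    # Eq. (2): accumulate warped events into the IWE (events outside the
    # original sensor grid are simply dropped here).
    h, w = sensor_hw
    iwe, _, _ = np.histogram2d(xy_warped[:, 1], xy_warped[:, 0],
                               bins=(h, w), range=((0, h), (0, w)))
    return iwe

def contrast(iwe):
    # Eq. (3): variance of the IWE, the objective to maximise over theta.
    return np.mean((iwe - iwe.mean()) ** 2)
```

The estimated motion is then the candidate `theta` that maximises `contrast(image_of_warped_events(warp_events(xy, t, theta), sensor_hw))` for events with pixel coordinates `xy` and timestamps `t`.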
Despite the recent successes of the CMax algorithm in various applications, the algorithm suffers from a few fundamental weaknesses, particularly in space-based earth mapping applications, caused by the increased density of events resulting from the continuous movement of the camera in orbit. Below we discuss these limitations and use them to motivate the method proposed in this paper.

This work makes use of the event camera pointed directly downward (the NADIR direction), as shown in Figure 1. The ISS moves with a consistent speed of 17,900 mph (i.e. 8 km/s) [10] and the movement can be modeled as consisting primarily of translation, except during docking procedures. The number of events recorded by the camera is predominantly influenced by the texture of the surface of the earth and the variations in lighting conditions over the day or night side of the earth. The high texture of the lower atmosphere (e.g. clouds), combined with pixel noise caused by the camera settings and circuit mismatch, reduces the Signal-to-Noise Ratio (SNR). A low-SNR scene results in fewer structures captured by the camera, affecting the accuracy of the motion estimation \(\mathbf{\theta}\). When the density of events in (2) increases, this also leads to higher variance in (3) around the wrong \(\mathbf{\theta}\). As a consequence, higher contrast does not always imply better event alignment, as illustrated in Figure 2(a), where multiple extrema become visible. This noise-intolerance issue was initially identified by [54] but was not further investigated. Recent studies have shown that the maximization of (3) can be done either with a conjugate gradient approach [8, 9] or with a branch-and-bound method [25]. The former requires a good initialisation of \(\mathbf{\theta}\) to converge to the correct extremum, as summarised in Figure 2(b), and the latter is a global optimisation and search method better suited to rotational motion estimation. In addition, recent improvements have included refining the objective function to better suit the targeted settings [7, 54]. However, this makes the objective function and optimisation application-dependent and may increase complexity. Guided by these observations, we propose a new approach that corrects (2) by considering only the motion and geometry of the warped events, which automatically modifies the landscape of the objective function in (3). This modification enables us to keep the optimisation algorithm and objective function unchanged, in a way that is invariant to the nature of the input data.

### Contribution

We present a novel approach that enables space-based earth mapping using the CMax algorithm. Our approach makes the following assumptions: the speed of the camera is constant, the time window \(\delta\) is the entire event stream, and no motion prior is given to the algorithm. Our method not only provides an optimal solution for this specific application, it also addresses several fundamental problems, including determining the appropriate objective function to use [7] and deciding how many events to process at once [54]. We demonstrate that the variance can serve as an optimal objective function without any modifications, and that all events can be utilised in the case of translational motion. While our method does not address other types of motion, such as rotation and zooming, it eliminates the need to test multiple objective functions and employ several batch sizes to estimate the optimal motion parameters. Our method relies solely on the overall geometry of the warped events and does not depend on noise density or the size of the time window. It enables CMax to consistently produce high-contrast outputs of equation (3) around the correct \(\mathbf{\theta}\), even in cases where noise dominates the primary structure of the scene. This is accomplished by increasing the value of specific pixels of the Image of Warped Events, namely pixels that correspond to parts of the scene in front of which the sensor spent less time. Our approach recovers a single solution for \(\mathbf{\theta}\). This significantly increases the rate of convergence to the correct solution using a simple optimisation search method such as Nelder-Mead, thus overcoming the problem of multiple extrema. We model the noise events of noise-rich scenes as a uniform rectangular cuboid in the 3-dimensional space \(\{x,y,t\}\).
This allows us to analytically calculate the variance of the accumulated image by shearing the cuboid and integrating the function that describes its height after shearing. Our method was evaluated on a diverse dataset captured from the ISS. The results were assessed both qualitatively and quantitatively, using the Root Mean Square (RMS) error and the Rate of Convergence (RoC) as evaluation metrics; see Sections 3.1 and 3.2.

### Related Work

The CMax framework was introduced by [8, 9] and its performance was further investigated in [7, 54]. Different alignment methods have been proposed to leverage the benefits of this algorithm [15, 35, 47] for various tasks such as rotational motion estimation [9, 19, 20, 46], optical flow [16, 49, 64], 3D reconstruction [13, 45], depth estimation [8], motion segmentation [39, 53, 62], and intensity reconstruction [61]. To estimate the motion, existing works rely either on local [7, 8, 53, 54] or global optimisation [25, 40] to facilitate the convergence to the correct motion parameters. [50] is the work most closely related to ours. Their method augments the objective function and applies corrections derived from mathematical models. However, [50] tackles the problem of event collapse, which occurs when objects are not moving parallel to the sensor plane. By contrast, we focus only on translational motion and present a method that corrects noise-induced variance. This problem only becomes visible with long time windows and high noise levels.

Figure 2: Illustration of CMax problems in low SNR scenes. **Problem 1:** It is trivial to estimate the true motion \(\mathbf{\theta}\) in high SNR scenes; however, as the SNR decreases, the contrast around the wrong \(\mathbf{\theta}\) increases significantly, overtaking the true value. **Problem 2:** The optimisation algorithm converges only when it is initialised near the true \(\mathbf{\theta}\) and fails everywhere else. This is a problem because a robust CMax should converge to the correct motion \(\mathbf{\theta}\) without any prior.

## 2 Our Approach

Our method operates on the Image of Warped Events (2), before calculating the image contrast (3). It applies a multiplicative correction that depends solely on the motion candidate \(\mathbf{\theta}\) and the width and height of the sensor. We first describe the method for one-dimensional sensors (Section 2.1). We then expand it to two-dimensional sensors (Section 2.2).

Figure 3: A detailed overview of the proposed approach. **a:** The process of CMax including our method. **b:** The one-dimensional case shows how the trapezoid is formed after accumulating the warped events, with the correction function \(\alpha\) that removes it. **c:** The two-dimensional case takes into account both pixel dimensions and removes the trapezoid, producing a new warped image that is invariant to geometry and capable of removing the multiple extrema from the loss landscape.

### 2.1 One-dimensional case

Let us consider an event sensor with a single row of pixels that generates random, uniformly distributed noise events with overall rate \(\rho\) (in events per second). These events can be seen as a dense point cloud in the two-dimensional space \(\{x,t\}\). We approximate this point cloud with a "solid" rectangle in \(\{x,t\}\) that spans the width of the sensor. Under this approximation, warping the events is equivalent to shearing the rectangle and results in a parallelogram in \(\{x,t\}\). The transformation shifts the top of the rectangle by \(-v\delta\), where \(v\) is the candidate speed and \(\delta\) is the considered time window. We denote by \(f\) the _Line of Warped Events_ (the one-dimensional equivalent of the Image of Warped Events). Its values \(f(x)\) are given by the height of the parallelogram at \(x\), hence their plot is a trapezoid without a base (figure 3).
The \"contrast\" in the Line of Warped Events can be estimated with the variance of \\(f\\) over the interval \\([0,w-v\\delta]\\) (for \\(v\\leq 0\\)), denoted \\(\\text{var}_{f}\\). Importantly, we are not calculating the variance of a random variable but simply considering a continuous extension of the formula for the variance of a collection of samples. This is similar to the difference between the mean of a random variable and the mean of a function. \\[\\text{var}_{f}(v)=\\frac{1}{w-v\\delta}\\int\\limits_{0}^{w-v\\delta}(f(x)-\\overline {f})^{2}dx\\quad\\text{if }v\\leq 0 \\tag{4}\\] \\(\\overline{f}\\) is the mean of f over the interval \\([0,w-v\\delta]\\). \\[\\overline{f}(v)=\\frac{1}{w-v\\delta}\\int\\limits_{0}^{w-v\\delta}f(x)dx\\quad \\text{if }v\\leq 0 \\tag{5}\\] For \\(v\\geq 0\\), one must consider the interval \\([-v\\delta,w]\\) and divide by \\(w+v\\delta\\). \\(\\text{var}_{f}\\) is zero if and only if \\(f\\) is constant. That is the case for \\(v=0\\), however, all other candidate speeds result in a non-zero variance. This change in variance, which is created by sheared noise, causes the problem described in figure 2. To show that our simple continuous model is sufficient to describe the observations, we calculate below an explicit expression of \\(\\text{var}_{f}\\) as a function of \\(v\\). Without loss of generality, we can restrict the problem to \\(v<0\\) and introduce unit-less variables to simplify the expressions. * Normalised pixel position \\(p=\\frac{x}{w}\\) * Normalised shear \\(s=\\frac{v\\delta}{w}\\) * Normalised event count \\(c=\\frac{\\delta}{\\rho}\\) \\(f(p)\\) is a piecewise linear function made of three segments, with two slightly different expressions for \\(s\\leq 1\\) and \\(s\\geq 1\\) (figure 4). These two expressions correspond (respectively) to shears smaller than the sensor width and shear larger than the sensor width. For \\(s\\leq 1\\), \\(f\\) is defined on \\([0,1+s]\\) by: \\[f(p)=\\begin{cases}\\frac{cp}{s}&0\\leq p\\leq s\\\\ c&s\\leq p\\leq 1\\\\ c\\left(1-\\frac{p-1}{s}\\right)&1\\leq p\\leq 1+s\\end{cases} \\tag{6}\\] Applying the formulas for the mean and variance given for \\(s\\leq 1\\): \\[\\overline{f}=\\frac{c}{s+1}\\quad\\text{and}\\quad\\text{var}_{f}(s)=c^{2}\\frac{s \\cdot(2-s)}{3\\left(s+1\\right)^{2}} \\tag{7}\\] The equations for \\(s\\geq 1\\) are given in the supplementary materials (Section II). Plotting \\(\\text{var}_{f}\\) yields a figure that is very similar to a section of the velocity landscape obtained from real event sensor data (5). The function \\(\\text{var}_{f}\\) admits a maximum at \\(s=\\frac{1}{2}\\) when the velocity multiplied by the time window equals half the sensor width. We want \\(\\text{var}_{f}\\) to be zero since \\(f\\) represents the Line of Warped Events for uniform noise. We thus introduce \\(\\alpha\\), a multiplicative correction function for the non-constant segments of \\(f\\) (the slopes of the trapezoid). \\(\\alpha\\) is defined on \\([0,1+s]\\) by: \\[\\alpha(p)=\\begin{cases}\\frac{s}{p}&0\\leq p\\leq s\\\\ 1&s\\leq p\\leq 1\\\\ \\frac{s}{s+1-p}&1\\leq p\\leq 1+s\\end{cases} \\tag{8}\\] Since \\(\\alpha\\) is a multiplicative correction rather than an additive one, it does not depend on \\(c\\) (and, by extension, \\(\\rho\\)). In other words, the correction can be applied with no prior knowledge of the noise density. 
### 2.2 Two-dimensional case

In this section, we extend our model to two-dimensional sensors. Let us consider an event sensor with \(w\times h\) pixels that generates random, uniformly distributed noise events with overall rate \(\rho\) (in events per second), a speed candidate \(\mathbf{\theta}=[v_{x},v_{y}]\), and a time window \(\delta\). We model the noise events as a "solid" rectangular cuboid in \(\{x,y,t\}\) space. We define the following normalised variables to express the variance in two dimensions:

* Normalised x pixel position \(p_{x}=\frac{x}{w}\)
* Normalised y pixel position \(p_{y}=\frac{y}{h}\)
* Normalised shear alongside the x-axis \(s_{x}=\frac{v_{x}\delta}{w}\)
* Normalised shear alongside the y-axis \(s_{y}=\frac{v_{y}\delta}{h}\)
* Normalised event count \(c=\frac{\delta}{\rho}\)

Shearing the rectangular cuboid in two directions yields a parallelepiped. Unlike the one-dimensional case, in which the height of the sheared geometry was another well-known figure, the height of the parallelepiped at every point of the \(\{x,y\}\) plane forms a "generic" polyhedron with 7 faces, not counting the base (figure 7). Another complication in two dimensions is the shape of the integration domain. While it would be tempting to use a rectangular integration domain, some parts of the scene are never in the field of view (specifically, two triangular regions of combined area \(s_{x}s_{y}\), as shown in the Houston recording in figure 8). We take this into account in the integration calculation and divide by \((1+s_{x})(1+s_{y})-s_{x}s_{y}\) in the mean and variance formulas. For \(s_{x}\leq 1\) and \(s_{y}\leq 1\), the height function \(f\) is defined
by: \\[f(p_{x},p_{y})=\\begin{cases}\\frac{cp_{y}}{s_{y}}&p_{x}\\leq s_{x}\\wedge p_{y}\\leq \\frac{s_{y}}{s_{x}p_{x}}\\\\ \\frac{cp_{x}}{s_{x}}&p_{x}\\leq\\frac{s_{x}}{s_{y}p_{y}}\\wedge p_{y}\\leq s_{y}\\\\ \\frac{cp_{y}}{s_{y}}&s_{x}\\leq p_{x}<1\\wedge p_{y}\\leq.\\\\ \\frac{cp_{x}}{s_{x}}&p_{x}\\leq s_{x}\\\\ &\\wedge\\,s_{y}\\leq p_{y}\\leq 1\\\\ c&s_{x}\\leq p_{x}\\leq 1\\\\ &\\wedge\\,s_{y}\\leq p_{y}\\leq 1\\\\ c\\left(\\frac{1-p_{x}}{s_{x}}+\\frac{p_{y}}{s_{y}}\\right)&1\\leq p_{x}\\leq 1+s_{x} \\\\ &\\wedge\\,(p_{x}-1)\\frac{s_{y}}{s_{x}}\\leq p_{y}\\leq s_{y}\\end{cases} \\tag{10}\\] Calculating the mean and variance of \\(f\\) yields: \\[\\overline{f}=\\frac{c}{s_{x}+s_{y}+1} \\tag{11}\\] \\[\\text{var}_{f}(s_{x},s_{y})=c^{2}\\frac{\\left(s_{x}^{2}s_{y}+s_{x}s_{y}^{2}-3s _{x}s_{y}+q_{x}+q_{y}\\right)}{6\\left(s_{x}+s_{y}+1\\right)^{2}} \\tag{12}\\] where \\(q_{x}=-2s_{x}^{2}+4s_{x}\\) and \\(q_{y}=-2s_{y}^{2}+4s_{y}\\). Figure 6 shows a plot of the variance as a function of \\(s_{x}\\) and \\(s_{y}\\). Despite its simplicity, our model predicts quite well the \"ring\" that we observe on real sensor data. Similarly to the one-dimensional case, we introduce a multiplicative correction function \\(\\alpha\\) to \"flatten\" \\(f\\) and ensure that the variance of the corrected height function is zero. For \\(s_{x}\\leq 1\\) and \\(s_{y}\\leq 1\\), \\(\\alpha\\) is defined by: \\[\\alpha(p_{x},p_{y})=\\begin{cases}\\frac{s_{y}}{p_{y}}&p_{x}\\leq s_{x}\\wedge p_{ y}\\leq\\frac{s_{y}}{s_{x}p_{x}}\\\\ \\frac{s_{x}}{p_{x}}&p_{x}\\leq\\frac{s_{x}}{s_{y}p_{y}}\\wedge p_{y}\\leq s_{y}\\\\ \\frac{s_{x}}{p_{y}}&s_{x}\\leq p_{x}\\leq 1\\wedge p_{y}\\leq s_{y}\\\\ \\frac{s_{x}}{p_{x}}&p_{x}\\leq s_{x}\\wedge s_{y}\\leq p_{y}\\leq 1\\\\ 1&s_{x}\\leq p_{x}\\leq 1\\\\ &\\wedge\\,s_{y}\\leq p_{y}\\leq 1\\\\ \\frac{s_{x}s_{y}}{p_{y}s_{x}-s_{y}p_{x}+s_{y}}&1\\leq p_{x}\\leq 1+s_{x}\\\\ &\\wedge\\,(p_{x}-1)\\frac{s_{y}}{s_{x}}\\leq p_{y}\\leq s_{y}\\end{cases} \\tag{13}\\] The equations for \\(s_{x}\\geq 1\\) and \\(s_{y}\\geq 1\\) are given in the supplementary materials (Section III). To obtain a corrected warped image from real event data, we can directly apply the correction \\(\\alpha\\) to the pixels of the accumulated image (2), given only the candidate warp speed \\([v_{x},v_{y}]\\) and the dimensions of the sensor. This minimises the contribution of uniform noise to the variance in the image and produces a new accumulated frame denoted as \\(H(\\mathbf{u^{\\prime}_{i}},\\mathbf{\\theta})_{v}\\), as described in (14). Finally, we can calculate the variance using (3) on the corrected image to solve the problem. \\[H(\\mathbf{u^{\\prime}_{i}},\\mathbf{\\theta})_{v}=H(\\mathbf{u^{\\prime}_{i}},\\mathbf{\\theta})\\times\\alpha \\tag{14}\\] We employed analytical integration techniques based on motion and geometry to ensure that the variance is only high around correct \\(\\mathbf{\\theta}\\). By extending our approach to 2D, we can now apply this corrective technique to real-world data. In the next section, we demonstrate how this method can be used to generate motion-compensated images from data acquired from the ISS. Figure 6: The left graph (**a**) is a plot of the analytical formula for the variance as a function of the candidate speed. The right graph (**b**) is a plot of the corrected variance calculated on discrete simulated noise, and is zero as intended. 
Figure 7: An illustration of the height of the sheared rectangular cuboid (2) as a function of \(s_{x}\) and \(s_{y}\) (black geometric figures) and the corresponding variance (red and white background). The problem exhibits several symmetries and can be solved by considering only two sets of conditions (condition 2 and condition 3 are symmetrical about the axis \(y=x\)).

## 3 Experiments

To show how our method successfully generalises to real event data, we now apply this technique to data in which noise heavily dominates the structure of the events. We evaluate our approach using data captured directly with an event camera on the ISS. The dataset comprises recordings that vary from 30 to 180 seconds and were captured under a diverse set of conditions. These include day/night recordings, different locations on earth, and varying weather conditions. The recordings contain an average of 7 million events per recording; see the supplementary materials (Section I). The dataset does not come with an associated evaluation protocol, and we have therefore defined an evaluation protocol based on the RMS error and a new metric called the Rate of Convergence (RoC). The RoC measures the rate of success of the optimisation algorithm in converging to the correct motion values. For simplicity, we used the Nelder-Mead optimisation (NMO) algorithm to search for the correct motion parameters and calculated the RoC by initialising the optimisation algorithm at every single point between \(-30\,px/s\) and \(30\,px/s\) and then calculating the overall percentage of runs that successfully converge. The ground truth \(\mathbf{\theta}\) was manually found for each recording.

### 3.1 Qualitative Results

Figure 9 shows qualitative results using our proposed method compared with the standard CMax [8]. Our approach always produces a single solution around the correct motion parameters, whereas CMax shows multiple extrema in each case, with the global maximum much more prominent than the correct local maximum. A single correct motion solution indicates that the geometry of the IWE and the density of the events no longer affect the variance calculation. Our method also leads to better motion-compensated and sharper maps, which can be used for matching with existing satellite images and other orbital applications, as in Figure 8.

### 3.2 Quantitative Results

In this section, we thoroughly examine the effectiveness of our proposed approach by comparing its performance with the existing CMax algorithm. We evaluate the results in terms of RMS error and RoC. As shown in Table 1, our approach significantly outperforms CMax in every case. The low RoC of CMax can be attributed to the presence of the global maximum, which often causes the NMO to get stuck at the global maximum and produce a high RMS error. Our method guarantees a single local maximum, which enables the NMO to converge successfully, resulting in a higher RoC. Furthermore, the RoC of CMax is usually low since it only succeeds when the NMO is initialised close to the true motion parameters. In contrast, our approach is more robust and works effectively even when the global maximum is not dominant, as demonstrated in the Brittany case. Here, both methods converged since the recording had less noise compared to the rest of the data. This demonstrates the flexibility and general applicability of our approach to handling events with various density levels.
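The RoC protocol can be sketched as follows with SciPy's Nelder-Mead implementation; the grid step and the 1 px/s success tolerance are illustrative assumptions (the paper sweeps every point in \([-30,30]\,px/s\)):

```python
import numpy as np
from scipy.optimize import minimize

def rate_of_convergence(neg_contrast, theta_true, step=1.0, tol=1.0):
    # neg_contrast(theta) returns minus the variance of the corrected IWE,
    # so that minimising it maximises the contrast objective (3).
    hits = total = 0
    for vx0 in np.arange(-30.0, 30.0 + step, step):
        for vy0 in np.arange(-30.0, 30.0 + step, step):
            res = minimize(neg_contrast, x0=np.array([vx0, vy0]),
                           method='Nelder-Mead')
            hits += np.linalg.norm(res.x - theta_true) < tol
            total += 1
    return 100.0 * hits / total  # percentage of initialisations that converge
```

With a single extremum in the corrected landscape, almost every initialisation reaches the true \(\mathbf{\theta}\), which is what the RoC columns of Table 1 quantify.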
\begin{table}
\begin{tabular}{l c c c c}
\hline\hline
 & \multicolumn{2}{c}{CMax [8]} & \multicolumn{2}{c}{Ours} \\
\cline{2-5}
 & RMS & RoC (\%) & RMS & RoC (\%) \\
\hline\hline
El Salvador & 14.47 & 2.55 & **0.61** & **75.57** \\
\hline
Houston & 13.74 & 2.62 & **0.55** & **81.48** \\
\hline
Brittany & 0.08 & 83.13 & **0.01** & **83.57** \\
\hline
Mexico & 14.13 & 1.16 & **0.09** & **80.50** \\
\hline
Washington & 14.19 & 2.87 & **0.11** & **74.10** \\
\hline
Spain & 13.77 & 2.45 & **0.14** & **80.82** \\
\hline
Sumatra & 13.41 & 1.62 & **0.22** & **81.60** \\
\hline
UK & 12.84 & 2.02 & **0.28** & **82.89** \\
\hline
Egypt & 13.53 & 1.95 & **0.01** & **76.51** \\
\hline
Panama & 14.50 & 2.72 & **0.04** & **70.61** \\
\hline\hline
\end{tabular}
\end{table}
Table 1: Quantitative results. Final results on the ISS data comparing CMax [8] with our approach, with the RMS error and the Rate of Convergence (RoC) as evaluation metrics.

Figure 8: Motion-compensated maps. Top: the \(H(\mathbf{u}^{\prime},\mathbf{\theta})\) image using the motion parameters \(\mathbf{\theta}\) estimated by our approach. Bottom: an overlay of the motion-compensated maps over the earth map. The overlay was performed manually and the scale was slightly exaggerated to better show the motion-compensated maps.

## 4 Conclusion

In this paper, we presented an analytical solution to the noise-intolerance and multiple-extrema problems of the CMax framework. Our solution is purely based on geometrical principles and the physical properties of the events. First, we analysed these problems in 1D and 2D spaces by considering the events as a solid rectangle in 1D and as a solid rectangular cuboid in 2D, to demonstrate the influence of the change in geometry on the variance calculation. We then demonstrated how our analytical solution makes CMax invariant to changes in the geometry and avoids high contrast around the wrong motion parameters, without using any prior on the camera motion and regardless of the density of the events. The experimental results demonstrate the superior performance of our method compared to the state-of-the-art CMax when used on extremely noisy data.

Figure 9: Comparison of the loss landscapes produced by CMax [8] and our approach. Our approach shows a single solution corresponding to the correct motion \(\mathbf{\theta}\), while CMax shows multiple maxima in each case. The X-axis represents \(v_{x}\) [\(px/s\)] and the Y-axis represents \(v_{y}\) [\(px/s\)], as a function of the variance (3) normalised between 0 and 1.

**Acknowledgement.** This work was supported by AFOSR grant FA2386-23-1-4005 and USAFA grant FA7000-20-2-0009.

## References

* [1] Saeed Afshar, Nicholas Ralph, Ying Xu, Jonathan Tapson, Andre van Schaik, and Gregory Cohen. Event-based feature extraction using adaptive selection thresholds. _Sensors_, 20(6):1600, 2020.
* [2] Ignacio Alzugaray and Margarita Chli. Asynchronous corner detection and tracking for event cameras in real time. _IEEE Robotics and Automation Letters_, 3(4):3177-3184, 2018.
* [3] Yeshwanth Bethi, Ying Xu, Gregory Cohen, Andre van Schaik, and Saeed Afshar. An optimised deep spiking neural network architecture without gradients. _IEEE Access_, 10:97912-97929, 2022.
* [4] Gregory Cohen, Saeed Afshar, Brittany Morreale, Travis Bessell, Andrew Wabnitz, Mark Rutten, and Andre van Schaik. Event-based sensing for space situational awareness. _The Journal of the Astronautical Sciences_, 66(2):125-141, 2019.
* [5] Gregory Cohen, Saeed Afshar, and Andre van Schaik. Approaches for astrometry using event-based sensors. In _Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS)_, Maui, Hawaii, USA, 2018.
* [6] T. Finateu et al. A 1280\(\times\)720 back-illuminated stacked temporal contrast event-based vision sensor with 4.86 \(\mu\)m pixels, 1.066 GEPS readout, programmable event-rate controller and compressive data-formatting pipeline. In _2020 IEEE International Solid-State Circuits Conference (ISSCC)_, pages 112-114. IEEE, 2020.
* [7] Guillermo Gallego, Mathias Gehrig, and Davide Scaramuzza. Focus is all you need: Loss functions for event-based vision. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2019.
* [8] Guillermo Gallego, Henri Rebecq, and Davide Scaramuzza. A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 3867-3876, 2018.
* [9] Guillermo Gallego and Davide Scaramuzza. Accurate angular velocity estimation with an event camera. _IEEE Robotics and Automation Letters_, 2(2):632-639, 2017.
* [10] Mark Garcia. International Space Station facts and figures. NASA, 2023.
* [11] Daniel Gehrig, Mathias Gehrig, Javier Hidalgo-Carrio, and Davide Scaramuzza. Video to events: Recycling video datasets for event cameras. In _IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, June 2020.
* [12] Daniel Gehrig, Henri Rebecq, Guillermo Gallego, and Davide Scaramuzza. EKLT: Asynchronous photometric feature tracking using events and frames. _International Journal of Computer Vision_, 128(3):601-618, 2020.
* [13] Suman Ghosh and Guillermo Gallego. Multi-event-camera depth estimation and outlier rejection by refocused events fusion. _Advanced Intelligent Systems_, 4(12):2200221, 2022.
* [14] Arren Glover and Chiara Bartolozzi. Robust visual tracking with a freely-moving event camera. In _2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 3769-3776. IEEE, 2017.
* [15] Cheng Gu, Erik Learned-Miller, Daniel Sheldon, Guillermo Gallego, and Pia Bideau. The spatio-temporal Poisson point process: A simple model for the alignment of event camera data. In _International Conference on Computer Vision (ICCV)_, 2021.
* [16] Jesse Hagenaars, Federico Paredes-Valles, and Guido de Croon. Self-supervised learning of event-based optical flow with spiking neural networks. _Advances in Neural Information Processing Systems_, 34, 2021.
* [17] C. Iaboni, H. Patel, D. Lobo, J. Choi, and P. Abichandani. Event camera based real-time detection and tracking of indoor ground robots. _IEEE Access_, 9:166588-166602, 2021.
* [18] Krzysztof Kaminski, Gregory Cohen, Tobi Delbruck, Michal Zolnowski, and Marcin Gedek. Observational evaluation of event cameras performance in optical space surveillance, 2019.
* [19] Hanme Kim, Ankur Handa, Ryad Benosman, Sio-Hoi Ieng, and Andrew Davison. Simultaneous mosaicing and tracking with an event camera.
In _Proceedings of the British Machine Vision Conference 2014_, pages 26.1-26.12, 2014.
* [20] Haram Kim and H. Jin Kim. Real-time rotational motion estimation with contrast maximization over globally aligned events. _IEEE Robotics and Automation Letters_, 6(3):6016-6023, 2021.
* [21] Hanme Kim, Stefan Leutenegger, and Andrew J. Davison. Real-time 3D reconstruction and 6-DoF tracking with an event camera. In _European Conference on Computer Vision_, 2016.
* [22] Beat Kueng, Elias Mueggler, Guillermo Gallego, and Davide Scaramuzza. Low-latency visual odometry using event-based feature tracks. In _2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_, pages 16-23. IEEE, 2016.
* [23] Xavier Lagorce, Garrick Orchard, Francesco Galluppi, Bertram E. Shi, and Ryad B. Benosman. HOTS: A hierarchy of event-based time-surfaces for pattern recognition. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 39(7):1346-1359, 2017.
* [24] Patrick Lichtsteiner, Christoph Posch, and Tobi Delbruck. A 128\(\times\)128 120 dB 15 \(\mu\)s latency asynchronous temporal contrast vision sensor. _IEEE Journal of Solid-State Circuits_, 43(2):566-576, 2008.
* [25] Daqi Liu, Alvaro Parra, and Tat-Jun Chin. Globally optimal contrast maximisation for event-based motion estimation. In _2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 6348-6357. IEEE, 2020.
* [26] Daqi Liu, Alvaro Parra, and Tat-Jun Chin. Spatiotemporal registration for event-based visual odometry. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 4935-4944. IEEE, 2021.
* [27] Florian Mahlknecht, Daniel Gehrig, Jeremy Nash, Friedrich M. Rockenbauer, Benjamin Morrell, Jeff Delaune, and Davide Scaramuzza. Exploring event camera-based odometry for planetary robots. _IEEE Robotics and Automation Letters_, 7(4):8651-8658, Oct. 2022.
* [28] Ana I. Maqueda, Antonio Loquercio, Guillermo Gallego, Narciso Garcia, and Davide Scaramuzza. Event-based vision meets deep learning on steering prediction for self-driving cars. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 5419-5427. IEEE, 2018.
* [29] Matthew G. McHarg, Richard L. Balthazor, Brian J. McReynolds, David H. Howe, Colin J. Maloney, Daniel O'Keefe, Raymond Bam, Gabriel Wilson, Paras Karki, Alexandre Marcireau, and Gregory Cohen. Falcon Neuro: an event-based sensor on the International Space Station. _Optical Engineering_, 61(08), Aug. 2022.
* [30] Sofia McLeod, Gabriele Meoni, Dario Izzo, Anne Mergy, Daqi Liu, Yasir Latif, Ian Reid, and Tat-Jun Chin. Globally optimal event-based divergence estimation for ventral landing. Technical Report arXiv:2209.13168, arXiv, Sept. 2022.
* [31] Moritz B. Milde, Hermann Blum, Alexander Dietmuller, Dora Sumislawska, Jorg Conradt, Giacomo Indiveri, and Yulia Sandamirskaya. Obstacle avoidance and target acquisition for robot navigation using a mixed signal analog/digital neuromorphic processing system. _Frontiers in Neurorobotics_, 11:28, July 2017.
* [32] Elias Mueggler. _Event-based vision for high-speed robotics_. PhD thesis, University of Zurich, 2017.
* [33] Elias Mueggler, Basil Huber, and Davide Scaramuzza. Event-based, 6-DOF pose tracking for high-speed maneuvers. In _2014 IEEE/RSJ International Conference on Intelligent Robots and Systems_, pages 2761-2768. IEEE, 2014.
* [34] Zhenjiang Ni, Sio-Hoi Ieng, Christoph Posch, Stephane Regnier, and Ryad Benosman. Visual tracking using neuromorphic asynchronous event-based cameras. _Neural Computation_, 27(4):925-953, Apr. 2015.
* [35] Urbano Miguel Nunes and Yiannis Demiris. Robust Event-Based Vision Model Estimation by Dispersion Minimisation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 44(12):9561-9573, Dec. 2022. * [36] Garrick Orchard, Cedric Meyer, Ralph Etienne-Cummings, Christoph Posch, Nitish Thakor, and Ryad Benosman. HFirst: A Temporal Approach to Object Recognition. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 37(10):2028-2040, Oct. 2015. * [37] Takehiro Ozawa, Yusuke Sekikawa, and Hideo Saito. Accuracy and Speed Improvement of Event Camera Motion Estimation Using a Bird's-Eye View Transformation. _Sensors_, 22(3):773, Jan. 2022. * [38] Takehiro Ozawa, Yusuke Sekikawa, and Hideo Saito. Recursive Contrast Maximization for Event-Based High-Frequency Motion Estimation. _IEEE Access_, 10:125376-125386, 2022. * [39] Chethan M. Parameshwara, Nitin J. Sanket, Chahat Deep Singh, Cornelia Fermuller, and Yiannis Aloimonos. 0-MMS: Zero-shot multi-motion segmentation with a monocular event camera. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 9594-9600. IEEE. * [40] Xin Peng, Ling Gao, Yifu Wang, and Laurent Kneip. Globally-Optimal Contrast Maximisation for Event Cameras. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, pages 1-1, 2021. * [41] Nicholas Ralph, Damien Joubert, Andrew Jolley, Saeed Afshar, Nicholas Tothill, Andre van Schaik, and Gregory Cohen. Real-Time Event-Based Unsupervised Feature Consolidation and Tracking for Space Situational Awareness. _Frontiers in Neuroscience_, 16:821157, May 2022. * [42] Nicholas Owen Ralph, Alexandre Marcireau, Saeed Afshar, Nicholas Tothill, Andre van Schaik, and Gregory Cohen. Astrometric Calibration and Source Characterisation of the Latest Generation Neuromorphic Event-based Cameras for Space Imaging. Technical Report arXiv:2211.09939, arXiv, Nov. 2022. arXiv:2211.09939 [astro-ph] type: article. * [43] Bharath Ramesh, S. Zhang, Zhi Wei Lee, Zhi Gao, G. Orchard, and Cheng Xiang. Long-term object tracking with a moving event camera. In _British Machine Vision Conference_, 2018. * [44] Henri Rebecq, Guillermo Gallego, Elias Mueggler, and Davide Scaramuzza. EMVS: Event-based multi-view stereo--3D reconstruction with an event camera in real-time. _Int. J. Comput. Vis._, 126:1394-1414, Dec. 2018. * [45] Henri Rebecq, Guillermo Gallego, and Davide Scaramuzza. EMVS: Event-based Multi-View Stereo. In _Proceedings of the British Machine Vision Conference 2016_, pages 63.1-63.11, York, UK, 2016. British Machine Vision Association. * [46] Christian Reinbacher, Gottfried Munda, and Thomas Pock. Real-time panoramic tracking for event cameras. In _2017 IEEE International Conference on Computational Photography (ICCP)_, pages 1-9. IEEE. * [47] Hochang Seok and Jongwoo Lim. Robust feature tracking in DVS event stream using bezier mapping. In _2020 IEEE Winter Conference on Applications of Computer Vision (WACV)_, pages 1647-1656. IEEE. * [48] Shintaro Shiba, Yoshimitsu Aoki, and Guillermo Gallego. Event Collapse in Contrast Maximization Frameworks. _Sensors_, 22(14):5190, July 2022. arXiv:2207.04007 [cs, math]. * [49] Shintaro Shiba, Yoshimitsu Aoki, and Guillermo Gallego. Secrets of event-based optical flow. In _European Conference on Computer Vision (ECCV)_, pages 628-645, 2022. * [50] Shintaro Shiba, Yoshimitsu Aoki, and Guillermo Gallego. A Fast Geometric Regularizer to Mitigate Event Collapse in the Contrast Maximization Framework. _Advanced Intelligent Systems_, 5(3):2200251, Mar. 
2023. * [51] Olaf Sikorski, Dario Izzo, and Gabriele Moeni. Event-based spacecraft landing using time-to-contact. In _2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_, pages 1941-1950. IEEE. * [52] Amos Sironi, Manuele Brambilla, Nicolas Bourdis, Xavier Lagorce, and Ryad Benosman. HATS: Histograms of averaged time surfaces for robust event-based object classification. In _2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 1731-1740. IEEE. * [53] Timo Stoffregen, Guillermo Gallego, Tom Drummond, Lindsay Kleeman, and Davide Scaramuzza. Event-based motion segmentation by motion compensation. In _\"Int. Conf. Comput. Vis. (ICCV)\"_, 2019. * [54] Timo Stoffregen and Lindsay Kleeman. Event cameras, contrast maximization and reward functions: An analysis. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 12292-12300. IEEE. * [55] David Tedaldi, Guillermo Gallego, Elias Mueggler, and Davide Scaramuzza. Feature detection and tracking with the dynamic and active-pixel vision sensor (DAVIS). In _2016 Second International Conference on Event-based Control, Communication, and Signal Processing (EBCCSP)_, pages 1-7. IEEE. * [56] Antoni Rosinol Vidal, Henri Rebecq, Timo Horstschaefer, and Davide Scaramuzza. Ultimate SLAM? Combining Events, Images, and IMU for Robust Visual SLAM in HDR and High-Speed Scenarios. _IEEE Robotics and Automation Letters_, 3(2):994-1001, Apr. 2018. * [57] Antonio Vitale, Alpha Renner, Celine Nauer, Davide Scaramuzza, and Yulia Sandamirskaya. Event-driven vision and control for uavs on a neuromorphic chip. In _2021 IEEE International Conference on Robotics and Automation (ICRA)_, pages 103-109. IEEE, 2021. * [58] David Weikersdorfer, Raoul Hoffmann, and Jorg Conradt. Simultaneous localization and mapping for event-based vision systems. In Mei Chen, Bastian Leibe, and Bernd Neumann, editors, _Computer Vision Systems_, volume 7963, pages 133-142. Springer Berlin Heidelberg. Series Title: Lecture Notes in Computer Science. * [59] Christian E. Wilbert and Joachim Klinner. Event-based imaging velocimetry: an assessment of event-based cameras for the measurement of fluid flows. _Experiments in Fluids_, 63(6):101, June 2022. * [60] Feihu Zhang, Yaohui Zhong, Liyuan Chen, and Zhiliang Wang. Event-Based Circular Detection for AUV Docking Based on Spiking Neural Network. _Frontiers in Neurroobotics_, 15:815-144, Jan. 2022. * [61] Zelin Zhang, Anthony Yezzi, and Guillermo Gallego. Formulating Event-based Image Reconstruction as a Linear Inverse Problem with Deep Regularization using Optical Flow. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, pages 1-18, 2022. * [62] Yi Zhou, Guillermo Gallego, Xiuyuan Lu, Siqi Liu, and Shaojie Shen. Event-based motion segmentation with spatio-temporal graph cuts. _IEEE Transactions on Neural Network and Learning Systems_, 2021. * [63] Alex Zihao Zhu, Nikolay Atanasov, and Kostas Daniilidis. Event-based feature tracking with probabilistic data association. In _2017 IEEE International Conference on Robotics and Automation (ICRA)_, pages 4465-4470. IEEE. * [64] Alex Zihao Zhu, Liangzhe Yuan, Kenneth Chaney, and Kostas Daniilidis. Unsupervised event-based learning of optical flow, depth, and egomotion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 989-997, 2019. 
# Density Invariant Contrast Maximization for Neuromorphic Earth Observations -Supplemental Materials-

Sami Arja1, Alexandre Marcireau1, Richard L. Balthazor2, Matthew G. McHarg2, Saeed Afshar1 and Gregory Cohen1

Footnote 1: S. Arja, A. Marcireau, S. Afshar, and G. Cohen are with the International Centre for Neuromorphic Systems, Western Sydney University, Australia, s.elarja at westernsydney.edu.au

Footnote 2: R. L. Balthazor and M. G. McHarg are with the U.S. Air Force Academy, Space Physics and Atmospheric Research Center, Department of Physics and Meteorology, United States Air Force Academy, Colorado, United States

###### Abstract

In this document, we describe our novel dataset and provide an additional detailed explanation of the mathematical approach in the paper "Density Invariant Contrast Maximization for Neuromorphic Earth Observations".

## I ISS Dataset

The ISS-based event camera setup consists of two DAVIS cameras, each with a resolution of 240x180 pixels, located at the Columbus module. One camera, referred to as the "Ram camera," points forward toward the Earth's limb in the direction of the ISS's travel. The other camera, known as the "Nadir camera," points down towards the Earth at a 20-degree angle to starboard to observe lightning and the Earth's surface directly from above the upper atmosphere. The camera field-of-view (FOV) is depicted in Figure 1(a), with yellow shades representing the Nadir and Ram FOVs and the blue shade representing the ISS centre FOV at its midpoint. Figure 1(b) illustrates the FOV for both cameras from the Nadir and Ram perspectives, showcasing the Ram's zooming ability and the Nadir's translational motion. The Ram camera is primarily used to observe sprites and lightning from different angles, while the Nadir camera captures lightning and the Earth's surface. We carefully selected ten recordings from the Nadir camera dataset, covering a variety of conditions such as weather, location, time, and noise. Table I provides additional details about each recording. We focused on recordings that exhibit rich textures and clear outlines of the Earth, disregarding recordings over the ocean, as they only generate noise without features or clear outlines, rendering them unusable. Snapshots of each recording used in this work are shown in Figure 2, together with the output maps in Figure 3; these are similar to the ones presented in the paper but larger.

## II Further explanations on the One-dimensional case

In Section 2.1, we introduced the one-dimensional correction and explained the mathematical details of the continuous noise variance for condition \(1\). In this section, we explain the details of condition 2 (Figure 4). Condition 2 applies when \(s\geq 1\). The height function is again a piecewise linear function made of three segments; however, its height is now influenced by the candidate speed \(s\) once it becomes larger than the pixel width. In this case, the height of the polyhedron decreases as \(s\) increases.
It is described as follows:

\[f(p)=\begin{cases}\frac{cp}{s}&0\leq p\leq 1\\ \frac{c}{s}&1\leq p\leq s\\ \frac{c(s-p+1)}{s}&s\leq p\leq 1+s\end{cases} \tag{1}\]

Applying the formulas for the mean and variance given for \(s\geq 1\):

\[\overline{f}=\frac{c}{s+1}\quad\text{and}\quad\text{var}_{f}(s)=c^{2}\frac{2s-1}{3s^{2}\left(s+1\right)^{2}} \tag{2}\]

\(\overline{f}\) for both conditions (\(s\leq 1\) and \(s\geq 1\)) yields the same value, showing that the consistency of the geometrical shape is preserved even at a large candidate speed. In this case, we also want \(\text{var}_{f}\) to be zero. We thus introduce \(\alpha\) as a multiplicative correction function for the same non-constant segments:

\[\lambda(p)=\alpha(p)\cdot f(p) \tag{3}\]

\[\alpha(p)=\begin{cases}\frac{s}{p}&0\leq p\leq 1\\ s&1\leq p\leq s\\ \frac{s}{s-p+1}&s\leq p\leq 1+s\end{cases} \tag{4}\]

This shows that the piecewise functions for conditions 1 and 2 correctly model the changes in the geometry of the sheared rectangle.

## III Further explanations on the Two-dimensional case

In Section 2.2, we introduced the two-dimensional correction function for \(s_{x}\leq 1\) and \(s_{y}\leq 1\). In this section, we describe the rest of the two-dimensional space, i.e. the conditions where \(s_{x}\geq 1\) and \(s_{y}\geq 1\). Similarly to the one-dimensional case, once the candidate speed is greater than the width or the height of a pixel, the height of the trapezoid starts to decrease with respect to the speed parameters. As shown in Figure 6, the geometry in every condition exhibits several symmetries and can be solved by considering only two sets of conditions, since conditions 2 and 3 are symmetrical. Therefore, we define the height function \(f\) for \(s_{x}\geq 1\) and \(s_{y}\geq 1\) as follows:

\[f(p_{x},p_{y})=\begin{cases}\frac{cp_{x}}{s_{y}}&p_{x}\leq 1\wedge p_{y}\leq\frac{s_{y}}{s_{x}+p_{x}}\\ \frac{cp_{x}}{s_{x}}&p_{x}\leq\frac{s_{x}}{s_{y}p_{y}}\wedge p_{y}\leq\frac{s_{y}}{s_{x}}\\ c\left(\frac{1}{s_{x}}-\frac{p_{x}}{s_{x}}+\frac{p_{y}}{s_{y}}\right)&1\leq p_{x}<s_{x}\\ \wedge\frac{(p_{x}-1)s_{x}}{s_{x}}\leq p_{y}\leq\frac{p_{x}s_{y}}{s_{x}}\\ \frac{cp_{x}}{s_{x}}&p_{x}\leq 1\wedge\frac{s_{y}}{s_{x}}\leq p_{y}\leq 1\\ \frac{c}{s_{x}}&1\leq p_{x}\leq s_{x}\\ \wedge\frac{p_{x}s_{y}}{s_{x}}\leq p_{y}\leq\frac{(p_{x}-1)s_{y}}{s_{x}+1}\\ c\left(\frac{1}{s_{x}}-\frac{p_{x}}{s_{x}}+\frac{p_{y}}{s_{y}}\right)&s_{x}\leq p_{x}\leq 1+s_{x}\\ \wedge\frac{(p_{x}-1)s_{y}}{s_{x}}\leq p_{y}\leq s_{y}\end{cases} \tag{5}\]

\begin{table} \begin{tabular}{l c c c c c} \hline **Geo Location** & **Date (UTC)** & \(\Delta t\) (\(s\)) & **Location\({}^{\circ}\)** & **\# Events** & **Earth Side** \\ \hline Washington & 2022-02-01 20:15:58 & 30 & 43.79 -100.18 & 3,280,126 & - \\ \hline Houston & 2022-02-17 20:28:02 & 59 & 20.75 -84.59 & 4,996,157 & Night \\ \hline Sumatra & 2022-02-17 21:20:49 & 180 & -48.63 13.163 & 1,073,675 & Night \\ \hline Egypt & 2022-02-03 17:28:03 & 180 & 32.94 30.96 & 6,426,732 & - \\ \hline United Kingdom & 2023-01-19 20:25:10 & 60 & 8.157 51.637 & 3,712,626 & - \\ \hline \end{tabular} \end{table} TABLE I: General characteristics of the ISS event dataset.
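As a numerical sanity check of the one-dimensional expressions above (Eqs. (1)-(4)), the following self-contained Python sketch (our own code, not from the paper; the constant \(c\), the speed \(s\), and the sample count are arbitrary choices) estimates \(f(p)\) by Monte Carlo, compares its mean and variance with Eq. (2), and verifies that the correction of Eq. (4) flattens the profile:

```python
# Monte Carlo check (illustrative, not the authors' code) of Eqs. (1)-(4):
# noise events on a unit pixel smeared by a candidate speed s >= 1, so the
# accumulated height is the convolution of two boxes of widths 1 and s.
import numpy as np

c, s = 1.0, 2.5                      # assumed rate constant and candidate speed
rng = np.random.default_rng(0)
n = 2_000_000
# A warped event position = (uniform position in pixel) + (uniform shift in [0, s]).
p = rng.uniform(0.0, 1.0, n) + rng.uniform(0.0, s, n)

bins = 400
hist, edges = np.histogram(p, bins=bins, range=(0.0, 1.0 + s), density=True)
f_mc = c * hist                      # f integrates to c over [0, 1+s]
x = 0.5 * (edges[:-1] + edges[1:])   # bin centers

# Piecewise correction alpha(p) from Eq. (4).
alpha = np.select([x <= 1.0, x <= s, x <= 1.0 + s],
                  [s / x, s, s / (s - x + 1.0)])
lam = alpha * f_mc                   # corrected profile, Eq. (3)

print(f_mc.mean(), c / (s + 1.0))                                  # mean, Eq. (2)
print(f_mc.var(), c**2 * (2 * s - 1) / (3 * s**2 * (s + 1.0)**2))  # variance, Eq. (2)
print(lam[5:-5].std())               # ~0: the correction flattens f (edges trimmed)
```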
Due to the consistent symmetry, calculating the mean of \(f\) yields a similar value to condition 1:

\[\overline{f}=\frac{c}{s_{x}+s_{y}+1} \tag{6}\]

\[\text{var}_{f}(s_{x},s_{y})=\frac{c^{2}\left(4s_{x}^{2}s_{y}+4s_{x}^{2}-2s_{x}s_{y}^{2}-3s_{x}s_{y}-2s_{x}+s_{y}^{2}+s_{y}\right)}{6s_{x}^{3}\left(s_{x}+s_{y}+1\right)^{2}} \tag{7}\]

To cancel out the effect of the noise variance, we introduce an \(\alpha\) that is specific to this \(f\) and has the same properties as the previously introduced \(\alpha\): it is multiplicative and flattens \(f\), ensuring that the variance of the corrected height function is zero.

\[\alpha(p_{x},p_{y})=\begin{cases}\frac{s_{x}}{p_{y}}&p_{x}\leq 1\wedge p_{y}\leq\frac{s_{y}}{s_{x}p_{y}}\\ \frac{s_{x}}{p_{x}}&p_{x}\leq\frac{s_{x}}{s_{y}p_{y}}\wedge p_{y}\leq\frac{s_{y}}{s_{x}}\\ \frac{s_{x}s_{y}}{s_{x}p_{y}-s_{y}p_{x}+s_{y}}&1\leq p_{x}<s_{x}\\ \wedge\frac{(p_{x}-1)s_{x}}{s_{x}}\leq p_{y}\leq\frac{p_{x}s_{y}}{s_{x}}\\ \frac{s_{x}}{p_{x}}&p_{x}\leq 1\wedge\frac{s_{y}}{s_{x}}\leq p_{y}\leq 1\\ s_{x}&1\leq p_{x}\leq s_{x}\\ &\wedge\frac{p_{x}s_{y}}{s_{x}}\leq p_{y}\leq\frac{(p_{x}-1)s_{y}}{s_{x}+1}\\ \frac{s_{x}s_{y}}{s_{x}p_{y}-s_{y}p_{x}+s_{y}}&s_{x}\leq p_{x}\leq 1+s_{x}\\ &\wedge\frac{(p_{x}-1)s_{y}}{s_{x}}\leq p_{y}\leq s_{y}\\ \end{cases} \tag{8}\]

In Figure 7, we compare the output variance computed with the analytical formula, as a function of the candidate speed, against the variance obtained from discrete simulated noise. Both show an exact match.

Fig. 2: Selected frames from each recording in sequential order.

Fig. 3: ISS motion-compensated maps.

Fig. 4: The changes in the geometry of the _Line of Warped Events_ \(f\) across different \(s\).

Fig. 5: Plot of the variance of \(f\) combining \(\text{var}_{f}(s)\) for \(s\leq 1\) and \(s\geq 1\).

Fig. 6: An illustration of the height of the sheared rectangular cuboid (accumulated warped events) as a function of \(s_{x}\) and \(s_{y}\) (black geometric figures) and the corresponding variance (red and white background). The problem exhibits several symmetries and can be solved by considering only two sets of conditions (condition 2 and condition 3 are symmetrical about the axis \(y=x\)).

Fig. 7: Comparison of the variance results between the analytical and the discrete variance equations. The discrete variance was obtained by simulating dense noise events on a \(50\times 50\) pixel grid, while the analytical variance was calculated using \(\text{var}_{f}(s_{x},s_{y})\).
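The discrete side of this comparison can be reproduced with a short script. The sketch below (our own code; the event density and random seed are assumptions, while the \(50\times 50\) grid follows the Fig. 7 caption) warps dense noise events by a candidate speed and accumulates them, yielding the discrete variance that Fig. 7 compares against \(\text{var}_{f}(s_{x},s_{y})\):

```python
# Sketch of the discrete variance experiment behind Fig. 7 (our own code):
# dense, structureless noise events are warped by a candidate speed and
# accumulated; the variance of the image is the quantity to be corrected.
import numpy as np

rng = np.random.default_rng(1)
W = H = 50                               # sensor size in pixels, as in Fig. 7
n_events = 1_000_000                     # assumed (dense) noise density
x = rng.uniform(0, W, n_events)
y = rng.uniform(0, H, n_events)
t = rng.uniform(0.0, 1.0, n_events)      # timestamps over a unit time window

def warped_image_variance(sx, sy):
    """Accumulate events warped by (sx, sy) and return the image variance."""
    xw, yw = x + sx * t, y + sy * t
    img, _, _ = np.histogram2d(xw, yw,
                               bins=(int(W + np.ceil(sx)), int(H + np.ceil(sy))),
                               range=((0.0, W + sx), (0.0, H + sy)))
    return img.var()

for s in (0.5, 1.0, 2.0, 4.0):
    print(s, warped_image_variance(s, s))  # varies with s even for pure noise
```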
Contrast maximization (CMax) techniques are widely used in event-based vision systems to estimate the motion parameters of the camera and generate high-contrast images. However, these techniques are noise-intolerant and suffer from the multiple-extrema problem, which arises when the scene contains more noisy events than structure, causing the contrast to be high at multiple locations. This makes the task of estimating the camera motion extremely challenging, which is a problem for neuromorphic earth observation, because without a proper estimation of the motion parameters it is not possible to generate a map with high contrast, and important details are lost. Similar methods that use CMax have addressed this problem by changing or augmenting the objective function so that it converges to the correct motion parameters. Our proposed solution overcomes the multiple-extrema and noise-intolerance problems by correcting the warped events before calculating the contrast, and offers the following advantages: it does not depend on the event data, it does not require a prior on the camera motion, and it keeps the rest of the CMax pipeline unchanged. This ensures that the contrast is high only around the correct motion parameters. Our approach enables the creation of better motion-compensated maps through an analytical compensation technique, demonstrated on a novel dataset from the International Space Station (ISS). Code is available at [https://github.com/neuromorphicsystems/event_warping](https://github.com/neuromorphicsystems/event_warping)
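As a rough illustration of the pipeline described in this abstract, the sketch below (our own naming and structure, assuming a variance-based contrast objective and a multiplicative correction map; it is not the released implementation) shows where the correction enters: after warping and accumulation, and before the contrast is evaluated.

```python
# Minimal corrected-CMax step (a sketch, not the released implementation):
# warp events, accumulate an image, multiply by the analytical correction,
# then evaluate the usual variance-based contrast objective.
import numpy as np

def corrected_contrast(events, sx, sy, correction, bins=64):
    """events: (N, 3) array of (x, y, t); correction: callable
    alpha(xc, yc, sx, sy) returning the correction map on the bin-center grid."""
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    xw, yw = x + sx * t, y + sy * t                 # warp along the candidate motion
    img, xe, ye = np.histogram2d(xw, yw, bins=bins)
    xc, yc = 0.5 * (xe[:-1] + xe[1:]), 0.5 * (ye[:-1] + ye[1:])
    img = img * correction(xc[:, None], yc[None, :], sx, sy)  # density correction
    return img.var()        # contrast objective; maximize over (sx, sy)
```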
# Nanostructure-modulated planar high spectral resolution spectro-polarimeter

L. P. Stoevelaar 1,2,+,*, Jonas Berzins 1,3,+,*, Fabrizio Silvestri 1, Stefan Fasold 3, Khosro Zangeneh Kamali 4, Heiko Knopf 3,5,6, Falk Eilenberger 3,5,6, Frank Setzpfandt 3, Thomas Pertsch 3,5, Stefan M.B. Baumer 1 and Giampiero Gerini 1,2

1 The Netherlands Organization for Applied Scientific Research, TNO, Optics Department, 2628CK Delft, The Netherlands
2 Eindhoven University of Technology, TU/e, Electromagnetics Group, 5600MB Eindhoven, The Netherlands
3 Institute of Applied Physics, Abbe Center of Photonics, Friedrich Schiller University Jena, 07745 Jena, Germany
4 Nonlinear Physics Centre, The Australian National University, Canberra ACT 2601, Australia
5 Fraunhofer Institute for Applied Optics and Precision Engineering, 07745 Jena, Germany
6 Max Planck School of Photonics, 07745 Jena, Germany
+ These authors contributed equally to this paper.
* [email protected], [email protected]

## 1 Introduction

Multifunctional imaging has emerged as a new generation of digital imaging. Techniques such as polarimetry and hyperspectral imaging provide substantially more information on the imaged scene or the object of interest than conventional color imaging [1, 2, 3]. Information on the polarization state of the collected light enables a better understanding of surface topography and scattering, and is thus widely used for target detection in defense and biomedical applications [1], while high-resolution spectral information provides details on the material composition for the assessment of food quality, artwork authentication, and many other applications [2]. All these techniques fall under the term spectro-polarimetry [3]. One of the main applications of spectro-polarimetry is Earth observation [1], where the properties of aerosols, e.g. their size, shape, and refractive index, can be identified remotely using polarization and spectral information. In general, astronomical instruments on board satellites have stringent constraints in terms of mass and volume. Therefore, any innovative solution that enables compact instruments is highly desirable, especially with the increased number of nano- and cube-satellites in recent years [4]. Accordingly, there is an increasing demand for optical components to be miniaturized. However, despite many efforts, current spectro-polarimetric systems in space still consist of multiple thick optical elements and result in cumbersome payloads [5, 6]. The size limitation of conventional optical components motivated the search for alternative implementations, where the concept of metasurfaces has emerged as one of the most promising technologies. A metasurface is a two-dimensional nanostructure array, which enables control of the amplitude, phase, and polarization of the incident light [7]. It has been successfully used in realizations of polarimeters [8, 9, 10, 11, 12], spectral filters and spectrometers [13, 14, 15, 16, 17, 18], and even spectro-polarimeters [19, 20, 21, 22]. However, there are still several obstacles to overcome and challenges to be addressed. First, the majority of multifunctional nanostructure-based devices consist of a few transversely-variant layers on top of each other. This poses quite a challenge in terms of alignment, as even a small misalignment causes errors in the detected information [21].
Furthermore, for the maximum gain in spectral information, it is important to obtain a high spectral resolution; up to now, however, the spectral resolution of spectro-polarimetric systems has been limited to the bands of RGB filters [21] or has required a connection to an external spectrometer [19]. Last but not least, imaging systems should ideally be integrated on a compact sensor, thus aspects such as compatibility, thickness, and height uniformity of the pixels are of high importance and have to be addressed. In this work, we introduce a planar spectro-polarimeter concept, which is based on a set of polarization-sensitive silicon (Si) nanostructures embedded in a Fabry-Perot (FP) cavity and which can be directly integrated on a sensor [23]. Such a system enables the reconstruction of the first three Stokes parameters (\(S_{0}\), \(S_{1}\), \(S_{2}\)) with a high spectral resolution. In this paper, we present its systematic design, its experimental demonstration, and its comparison to an ideal case. Furthermore, we discuss the polarization reconstruction algorithm, its limitations, and its optimization towards multi-band spectro-polarimetric designs.

## 2 Methods

### Concept

A FP cavity consists of two mirrors separated by an optical length \(L\), as shown in Fig. 1(a). For simplicity, we assume that both mirrors are planar, that their reflectivities are equal (\(R_{\text{m}}=R_{1}=R_{2}\)), and that the losses generated by scattering and absorption are negligible. The FP cavity provides a Lorentzian-shaped peak in transmission centered at the wavelengths \(\lambda_{\text{c}}\) for which the following condition holds [24]:

\[\lambda_{\text{c}}=\frac{2L}{q}, \tag{1}\]

where \(q\) is an integer, representing the order of the resonant mode. Ideally, the transmission exhibits a periodic sequence of peaks, but it may also be limited to a single peak by the reflectivity band of the mirrors, as illustrated in Fig. 1(b). The central wavelengths of the peaks \(\lambda_{\text{c}}\) can be tailored by the optical distance between the mirrors \(L=L_{0}\cdot n_{\text{eff}}\), which relies on the geometrical path length between the mirrors \(L_{0}\) and the effective refractive index of the cavity medium \(n_{\text{eff}}\). The most common approach to tailor the transmission is to change the physical length of the cavity \(L_{0}\) [25]. However, it has recently been demonstrated that inclusions, such as arrays of high-index nanostructures, can be used to modulate the effective index \(n_{\text{eff}}\) [14]. The inclusion of high-index nanostructures adds a phase shift to the cavity and a subsequent red-shift of the resonant peak, see Fig. 1(b). Furthermore, if the diameter of the nanostructures along one axis, \(D_{x}\), is chosen differently from the diameter along the other, \(D_{y}\), the cavity will produce two different transmission peaks for two different linear polarizations: one corresponding to the case where the electric field of the incident light aligns with the width of the nanostructure (\(x\)-polarized), the other when the field aligns with the length of the nanostructure (\(y\)-polarized) [23], as shown in Fig. 1(b). The spectral distance between the peaks can be controlled by altering the aspect ratio (\(D_{x}/D_{y}\)) of the nanostructures, while the largest spectral separation of the peaks can be achieved with a linear grating, since it has an infinite aspect ratio, see Appendix.
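To make the modulation mechanism concrete, the following Python sketch (illustrative only; the effective index and the polarization-dependent index shifts \(\Delta n_{x}\), \(\Delta n_{y}\) are assumed values) evaluates an ideal Airy-type cavity transmission consistent with Eq. (1): a higher effective index red-shifts the peak, and an anisotropic inclusion splits it into two polarization-dependent peaks.

```python
# Illustrative sketch (not the authors' code): an ideal FP transmission, with
# the nanostructure inclusion modelled as a small polarization-dependent
# increase of the effective index n_eff; peaks satisfy Eq. (1), lambda = 2L/q.
import numpy as np

def fp_transmission(wavelength_nm, L_opt_nm, R=0.995):
    """Airy transmission of a lossless symmetric cavity of optical length L."""
    phi = 2 * np.pi * L_opt_nm / wavelength_nm   # half round-trip phase
    F = 4 * R / (1 - R) ** 2                     # coefficient of finesse
    return 1.0 / (1.0 + F * np.sin(phi) ** 2)    # maxima where 2L/lambda = q

wl = np.linspace(1400.0, 1550.0, 3001)
L0, n_eff = 985.0, 1.52          # geometric length (nm) and assumed base index
dn_x, dn_y = 0.010, 0.016        # hypothetical index shifts for x/y polarization
Tx = fp_transmission(wl, L0 * (n_eff + dn_x))
Ty = fp_transmission(wl, L0 * (n_eff + dn_y))
print(wl[Tx.argmax()], wl[Ty.argmax()])   # two peaks, one per polarization
```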
In any case, it is important that the optical size of the nanostructures is significantly smaller than the wavelength of operation (\(nD_{x,y}<\lambda\)). In fact, in this case the nanostructures are non-resonant and the modulation of the cavity is based only on the change of the effective index [26]. Using different-size polarization-sensitive nanostructures, resonance peaks for different wavelength and polarization combinations can be obtained. By measuring a set of 6 pixel intensities (\(I_{1},\cdots,I_{6}\)), illustrated in Fig. 2(a,b), we can retrieve both spectral and polarization information in the form of discrete intensities of two linear polarization states (\(I_{x}\) and \(I_{y}\)) at three wavelengths \(\{\lambda_{1},\lambda_{2},\lambda_{3}\}\) [23]. For simplicity, it is assumed that the pixels have a transmission \(\mathrm{T}=\alpha_{m}^{n}\) at their peaks, where \(n\) is the number corresponding to the detector pixel and \(m\) indicates the polarization state, and \(\mathrm{T}=0\) elsewhere. Under this assumption, the following system of equations is formed:

\[\left(\begin{array}{c}I_{1}\\ I_{2}\\ I_{3}\\ I_{4}\\ I_{5}\\ I_{6}\end{array}\right)=\left[\begin{array}{cccccc}0&0&\alpha_{x}^{1}&\alpha_{y}^{1}&0&0\\ 0&\alpha_{x}^{2}&0&0&0&\alpha_{y}^{2}\\ \alpha_{x}^{3}&0&0&0&\alpha_{y}^{3}&0\\ \alpha_{x}^{4}&0&0&0&0&\alpha_{y}^{4}\\ 0&0&\alpha_{x}^{5}&0&\alpha_{y}^{5}&0\\ 0&\alpha_{x}^{6}&0&\alpha_{y}^{6}&0&0\end{array}\right]\left(\begin{array}{c}I_{x}^{\lambda_{1}}\\ I_{x}^{\lambda_{2}}\\ I_{x}^{\lambda_{3}}\\ I_{y}^{\lambda_{1}}\\ I_{y}^{\lambda_{2}}\\ I_{y}^{\lambda_{3}}\end{array}\right). \tag{2}\]

Figure 1: Concept of FP cavity modulated by high-index nanostructures. (a) Illustration of FP cavity constituted of mirrors with reflectivity \(R_{\mathrm{m}}=R_{1}=R_{2}\) spaced by a medium with optical length \(L=L_{0}\cdot n_{\mathrm{eff}}\). The cavity is modulated by an inclusion - a high-index nanostructure. (b) Regimes of the FP cavity depending on the shape of the nanostructure: I) empty cavity (without inclusion) producing a single peak in the spectral range of interest; II) cavity modulated by a polarization-insensitive nanostructure, the single peak remains but is red-shifted; III) cavity modulated by a polarization-sensitive nanostructure, two different peaks at \(\lambda_{x}\) and \(\lambda_{y}\) (\(\lambda_{x}<\lambda_{y}\)) for the two linear polarization states, \(x\) and \(y\), respectively; IV) cavity modulated by a grating, two different peaks at \(\lambda_{x}\) and \(\lambda_{y}\) (but \(\lambda_{x}\ll\lambda_{y}\)) for the two linear polarization states, \(x\) and \(y\), respectively. Transmittance of \(x\)-polarized light is depicted by a red solid line, transmittance of \(y\)-polarized light by a red dashed line, and the reflectivity band of the mirrors by a grey line.

Since each of the measured values (intensities) is a combination of two unknowns, this system of equations is not full rank. Therefore, it cannot be inverted, making the spectro-polarimetric reconstruction impossible. However, it can be made invertible by removing at least one of the transmission peaks from the system of equations, e.g. by setting \(\alpha_{x}^{6}=0\). The resulting matrix will be full rank and, subsequently, can be inverted, making the spectro-polarimetric reconstruction possible. This results in the set of spectro-polarimetric pixels shown in Fig.
2(a) with their spectral functions illustrated in Fig. 2(b). Furthermore, by including a second set of 6 pixels with the nanostructures rotated by an angle of \(45^{\circ}\), the intensity of light polarized along angles of \(45^{\circ}\) and \(135^{\circ}\) can also be measured. This allows the retrieval of the first three Stokes parameters for different wavelengths:

\[S_{0} =I_{x}+I_{y}=I_{45^{\circ}}+I_{135^{\circ}}, \tag{3}\]
\[S_{1} =I_{x}-I_{y}, \tag{4}\]
\[S_{2} =I_{45^{\circ}}-I_{135^{\circ}}. \tag{5}\]

The system is not able to retrieve all Stokes parameters [27], as it is unable to distinguish circularly polarized light from unpolarized light (the fourth Stokes parameter \(S_{3}\) cannot be retrieved), and the presence of circularly polarized light may disrupt the estimation of the first Stokes parameter. However, for some applications this is not critical, e.g. in Earth observation, where the amount of circularly polarized light is negligible.

Figure 2: Concept of the spectro-polarimeter for linear polarization. (a) Illustration of a set of 6 pixels, a super-pixel, integrated directly on a sensor. (b) Exemplary Lorentzian-shaped spectral functions for 3 wavelengths (\(\lambda_{1}\), \(\lambda_{2}\), \(\lambda_{3}\)) and two polarization states (\(x\) and \(y\)) using the set of 6 pixels (P1, \(\cdots\), P6). A single pixel provides two peaks for the two linear polarizations, while one peak (P6) is cut out for inversion of the matrix. Transmittance of \(x\)-polarized light is depicted by solid lines, while transmittance of \(y\)-polarized light is shown in dashed lines.

### Design

As the spectro-polarimeter is based on nanostructure-modulated FP cavities, its design starts from the mirrors. An important aspect of the FP cavity is its spectral full width at half maximum (FWHM), which relates to the spectral resolution of the system, see Section 4. Following the assumption that both mirrors have the same reflectivity \(R_{\mathrm{m}}\), the FWHM is given by the following equation [28]:

\[\mathrm{FWHM}=\frac{\lambda(1-R_{\mathrm{m}})}{q\pi\sqrt{R_{\mathrm{m}}}}, \tag{6}\]

thus, the higher the reflectivity of the mirrors, the higher the spectral resolution. Because of that, distributed Bragg reflectors (DBRs) are used. If the DBR layers are made of lossless materials, there is no strict limitation on the number of layers and, subsequently, on the FWHM [29]. However, we restricted ourselves to DBRs of 7 pairs of alternating SiO\({}_{2}\) (\(n=1.45\)) and TiO\({}_{2}\) (\(n=2.285\) [30]) layers, with the reflection band centered at \(\lambda=1500\) nm. This configuration resulted in a reflectance \(R_{\rm m}\approx 99.5\%\) and FWHM \(\approx 1\) nm (cavity quality factor \(Q>1000\)). The mirrors were separated by an optical length \(L\) equal to the central wavelength \(\lambda=1500\) nm. The physical length of the cavity was set to \(L_{0}=985\) nm. The arrays of polarization-sensitive Si nanostructures were made from hydrogenated amorphous Si (a-Si:H), due to its high refractive index and negligible intrinsic losses (\(n=3.7\) and \(k=0\) at \(\lambda=1500\) nm). The nanostructures were defined as extruded ellipses with a certain diameter along the \(x\)-axis (\(D_{x}\)), a diameter along the \(y\)-axis (\(D_{y}\)), and a height \(H\). The nanostructures were distributed in a square lattice with a period \(P=500\) nm. The height of the nanostructures was set to \(H=300\) nm.
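Returning briefly to the retrieval described by Eqs. (2)-(5), the sketch below (our own; idealized unit peak transmissions with \(\alpha_{x}^{6}=0\), and hypothetical band intensities) shows the full chain from the six pixel readings to the per-band Stokes parameters:

```python
# Sketch of the ideal reconstruction of Eqs. (2)-(5): unit peak transmissions,
# with the grating pixel's x-peak removed (alpha_x^6 = 0) so that A is invertible.
import numpy as np

A = np.array([
    [0, 0, 1, 1, 0, 0],   # P1: x-peak in band 3, y-peak in band 1
    [0, 1, 0, 0, 0, 1],   # P2: x in band 2, y in band 3
    [1, 0, 0, 0, 1, 0],   # P3: x in band 1, y in band 2
    [1, 0, 0, 0, 0, 1],   # P4: x in band 1, y in band 3
    [0, 0, 1, 0, 1, 0],   # P5: x in band 3, y in band 2
    [0, 0, 0, 1, 0, 0],   # P6: grating, x-peak suppressed, y in band 1
], dtype=float)

# Hypothetical per-band polarized intensities (Ix^l1..l3, Iy^l1..l3).
I_true = np.array([0.8, 0.5, 0.3, 0.2, 0.5, 0.7])
I_pix = A @ I_true                    # the six measured pixel intensities
I_rec = np.linalg.solve(A, I_pix)     # reconstruction; recovers I_true
Ix, Iy = I_rec[:3], I_rec[3:]
S0, S1 = Ix + Iy, Ix - Iy             # Eqs. (3)-(4), per spectral band
# A second, 45-degree-rotated pixel set yields I_45/I_135 and hence S2 (Eq. (5)).
print(np.linalg.cond(A))              # ~8.06, the ideal value quoted in Sec. 4
```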
The diameters of the elliptical nanostructures, \(D_{x}\) and \(D_{y}\), were optimized in the range 140-360 nm to obtain three spectral bands centered at the following wavelengths: \(\lambda_{1}=1480\) nm, \(\lambda_{2}=1500\) nm, and \(\lambda_{3}=1520\) nm. The optimized nanostructures were placed in the middle of the cavity in order not to alter the performance of the mirrors. SiO\({}_{2}\) (\(n=1.45\)) was selected as the low-index material of the cavity medium. Moreover, to remove a transmission peak from the matrix, as mentioned in Sec. 2.1 and shown in Fig. 2(b), one of the pixels with elliptical Si nanostructures was replaced by a subdiffractive Si grating. The grating was defined with an infinite aspect ratio, meaning a certain width \(D_{x}\), but \(D_{y}=\infty\). Regardless of the shape of the nanostructures inside the cavity, their optical size must remain well below \(\lambda=1500\) nm for the nanostructures to be non-resonant and provide transmission close to unity. The design spectral range and the FWHM of the peaks were based on the extended requirements of the spectropolarimeter for planetary exploration (SPEX) [31]. The design was obtained using a finite-difference time-domain (FDTD) method (Lumerical, Inc.).

### Fabrication

The sample fabrication was carried out in several steps: the deposition of the bottom mirror, the structuring of the cavity, and the deposition of the top mirror. First, the bottom mirror was based on alternating SiO\({}_{2}\) and TiO\({}_{2}\) layers. The layers were deposited on top of a glass substrate by plasma-ion-assisted deposition (PIAD), described in a previous publication [29]. In total, seven pairs of SiO\({}_{2}\)/TiO\({}_{2}\) layers were deposited for high reflectivity (\(R_{\rm m}\approx 98\%\)) at \(\lambda=1500\) nm, see Appendix. The thicknesses of the layers were initially calculated as quarter-wavelength layers, but later tuned to \(t_{\rm 1L}=320\) nm and \(t_{\rm 1H}=120\) nm for SiO\({}_{2}\) and TiO\({}_{2}\), respectively, reducing the thickness of the TiO\({}_{2}\) layers in an attempt to avoid their polycrystalline growth and thus retaining smooth surfaces with low scattering and absorbance [32]. The nanostructures were made from a 300 nm layer of a-Si:H, deposited by plasma-enhanced chemical vapor deposition (PECVD) using SiH\({}_{4}\) gas as a source (Oxford Plasmalab 100 Dual Frequency, Oxford Instruments). Afterwards, a 30 nm chromium (Cr) layer was deposited by ion beam deposition (Oxford Ionfab 300, Oxford Instruments) and a 100 nm layer of electron-beam resist (EN038, Tokyo Ohka Kogyo Co., Ltd.) was spin-coated on top. The sample was then exposed by a variable-shaped electron-beam lithography system (Vistec SB 350, Vistec Electron Beam GmbH). First, the resist was developed and the mask was transferred into the Cr layer by ion beam etching (Oxford Ionfab 300, Oxford Instruments). Then, the Cr mask was transferred into the Si layer by inductively coupled plasma reactive ion etching (SI-500 C, Sentech Instruments GmbH) with CF\({}_{4}\) as the reactive gas. Finally, the remaining resist and the Cr mask were supposed to be removed by acetone and Cr etchant, but during the etching some of the Cr mixed with other materials and was not fully removed. Cr has a high absorption (\(n=3.675\) and \(k=4.072\) at 1500 nm [33]); any amount of it in the cavity is undesirable.
As determined by simulations of Cr inclusions, a Cr volume of 0.3% of the total cavity volume decreases the amplitude of the resonant peak by 55%, while higher concentrations completely destroy the resonance, see Appendix. In comparison, the Si nanostructures constitute \(\sim 6\%\) of the total cavity volume. The nanostructured Si was embedded in a SiO\({}_{2}\) layer by atomic layer deposition (ALD) at a low growth rate of 1.19 Å/cycle, ensuring an air-gap-free cavity (Silayo ICP 330, Sentech Instruments GmbH). The Si nanostructures induce waviness in the upper layers; thus, the deposited embedding layer was planarized by ion-beam etching (Oxford Ionfab 300, Oxford Instruments). The waviness was reduced to \(A_{\mathrm{w}}\approx 15\) nm, which is significantly less than the operational wavelength and is not expected to degrade the performance of the FP cavity, see Appendix. During the process, the physical length of the cavity was reduced to 910 nm, less than the original design value of 950 nm. Before the deposition of the top mirror, the surface of the SiO\({}_{2}\) cladding was pre-treated with Ar-ion plasma to create chemically active sites for better cross-linking [34]. Similarly to the bottom mirror, the top mirror was optimized for high reflectivity at \(\lambda=1500\) nm. However, due to different exposure conditions, the thicknesses changed. In particular, the top DBR was constituted of 7 pairs of SiO\({}_{2}\) and TiO\({}_{2}\) layers with thicknesses of \(t_{\mathrm{2L}}=250\) nm and \(t_{\mathrm{2H}}=170\) nm, respectively, and the last layer of SiO\({}_{2}\) was set to 100 nm. Because of the different thicknesses, the reflection band of the top mirror shifted, but a high reflectivity (\(R_{\mathrm{m}}\approx 98\%\), comparable to the initial design) in the spectral range of interest was maintained, see Appendix.

## 3 Results

A focused ion beam (FIB) and a scanning electron microscope (SEM) were used to obtain vertical and horizontal cross-sections of the fabricated spectro-polarimeter, see Fig. 3(a,b). The total thickness of the optical device is 7 μm. Moreover, the horizontal cross-sections of all pixels (P1, \(\cdots\), P6) were imaged to evaluate the sizes of the nanostructures, see Fig. 4(a). All of the nanostructures had a height of \(H=300\) nm but varied in both diameters, \(D_{x}\) and \(D_{y}\). The measured transverse dimensions of the nanostructures are provided in Tab. 1. As can be seen from the standard deviation (\(\sigma\)) of the measured diameters, the proximity effect during electron-beam exposure leads to some errors. However, it should be noted that as long as the error is homogeneously distributed over the whole measured area of \(20\times 20\) \(\mu\)m\({}^{2}\), it contributes mainly to the effective length of the cavity and not to the amplitude or width of the transmission peaks. The spectral measurements of the fabricated sample were carried out using a broadband halogen light source (SLS301, Thorlabs, Inc.). Its light was delivered to the sample via an optical system emulating the conditions of plane-wave illumination. A linear polarizer was mounted on a rotational stage (PR50CC, Newport Corp.) in front of the sample to control the polarization state of the incident light. The sample was positioned and the position angle was calibrated using a 5-axis positioning system (Aerotech, Inc.).
The transmitted light was collected via a 20× objective, an aperture, and a lens system coupled to an optical spectrum analyzer with subnanometer spectral resolution (AQ6370B, Yokogawa).

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline & \(D_{x}\), nm & \(D_{y}\), nm & \(\lambda_{x}\), nm & \(\lambda_{y}\), nm \\ \hline Pixel 1 & \(335\pm 8\) & \(145\pm 6\) & \(1463\) & \(1431\) \\ \hline Pixel 2 & \(202\pm 5\) & \(282\pm 5\) & \(1445\) & \(1462\) \\ \hline Pixel 3 & \(162\pm 4\) & \(247\pm 4\) & \(1430\) & \(1446\) \\ \hline Pixel 4 & \(145\pm 6\) & \(335\pm 8\) & \(1431\) & \(1463\) \\ \hline Pixel 5 & \(282\pm 5\) & \(202\pm 5\) & \(1462\) & \(1445\) \\ \hline Pixel 6 & - & \(134\pm 5\) & - & \(1432\) \\ \hline \end{tabular} \end{table} Table 1: Measured transverse dimensions (\(D_{x,y}\pm\sigma\)) of Si nanostructures in the set of 6 pixels and measured central wavelengths of their transmission peaks for \(x\) and \(y\) polarizations.

The measured transmittance of the six individual pixels, the constituting elements of the super-pixel configuration shown in Fig. 4(a), is presented and compared to simulations in Fig. 4(b,c). The measured peaks show very good agreement with the simulation results regarding their central wavelengths. Accounting for the reduction of the cavity length compared to the initial design, the anticipated spectral positions of the peaks blue-shifted: \(\lambda_{1}=1430\) nm, \(\lambda_{2}=1446\) nm, \(\lambda_{3}=1462\) nm. The central wavelengths of the measured peaks are given in Tab. 1; the standard deviation is less than 1 nm. Moreover, the measured peaks reach a transmission of 46%, while in the simulation the peak transmission is close to 92%. The measured peaks are also slightly broader, on average FWHM \(=3.6\) nm compared to FWHM \(=1.6\) nm in the simulation. Such behavior is induced by the contamination of the cavity, as discussed in Sec. 2.3. Its impact on transmittance is discussed in Sec. 4.1. Now, in order to estimate the performance of the pixels, three spectral bands were selected, as described in Sec. 2.1. Since the measured spectral peaks are of a Lorentzian shape, which has long tails, they were spaced \(\Delta\lambda=16\) nm apart to minimize the cross-coupling. Accordingly, the three spectral bands of the system were: 1422-1438 nm, 1438-1454 nm, and 1454-1470 nm. As can be seen in Fig. 4(c), the measurements have a high noise floor, which comes from the detector of the spectrometer and is not inherent to the fabricated structures. Thus, prior analytical knowledge is used, and the intensity for a given band and polarization state is computed from the Lorentzian fit. This enables a significant reduction in noise. The elements of the reconstruction matrix are then obtained using the following integral:

\[\alpha=\int_{\lambda_{min}}^{\lambda_{max}}T_{pol}(\lambda)d\lambda, \tag{7}\]

where \(\lambda_{max}\) and \(\lambda_{min}\) are the boundaries of the spectral band, and \(T_{pol}(\lambda)\) is the transmittance of a pixel for a given polarization state.

Figure 3: Exemplary pixel from the fabricated spectro-polarimetric system: (a) side view, (b) top view. Colored SEM images of vertical and horizontal cross-sections of a pixel based on a FP cavity modulated by an array of Si nanostructures. Si is colored red, TiO\({}_{2}\) - blue, SiO\({}_{2}\) - grey. The Si nanostructures are defined by their height (\(H\)) and diameters along the \(x\) and \(y\) axes (\(D_{x}\) and \(D_{y}\), respectively), and are distributed in a square lattice with a period (\(P\)). Scale bars are equal to 500 nm.

Computing these integrals for the measured data at all the
spectral bands and polarization states results in the following matrix:

\[A=\left[\begin{array}{cccccc}0.017&0.076&2.002&2.229&0.112&0.021\\ 0.170&2.457&0.124&0.029&0.136&2.478\\ 2.877&0.160&0.034&0.147&2.577&0.126\\ 2.229&0.112&0.021&0.017&0.076&2.002\\ 0.029&0.136&2.478&0.170&2.457&0.124\\ 0&0&0&2.432&0.239&0.042\\ \end{array}\right]. \tag{8}\]

This matrix has a condition number of \(k(A)=8.43\), compared to the ideal case \(k(A)=8.0552\) (Eq. (2) with \(\alpha_{m}^{n}=1\) and \(\alpha_{x}^{6}=0\)). The importance of this condition number is discussed in detail in the following section, see Sec. 4.

Figure 4: Pixels and their transmission spectra. (a) SEM images of horizontal cross-sections of all pixels (P\(1,\cdots,\)P\(6\)); Si is colored red. Scale bar is equal to 500 nm. (b) Simulated transmission of the 6 pixels, introduced in Fig. 2, with Lorentzian peaks centered at 1430 nm, 1446 nm, and 1462 nm and distributed in 3 spectral bands of \(\Delta\lambda=16\) nm. (c) Measured transmission of the corresponding pixels. Spectra for \(x\)-polarized light are shown in solid lines, for \(y\)-polarized light in dashed lines.

In addition, we measured the system's response to a change of the azimuthal angle \(\phi\) and polar angle \(\theta\) of the incident light, see Fig. 5(a,b). First, as two different peaks centered at \(\lambda_{x}\) and \(\lambda_{y}\) are produced for the two linear orthogonal polarization states, \(x\) and \(y\), respectively, we measured the intensity at \(\lambda_{x}\) and \(\lambda_{y}\) as a function of the azimuthal angle \(\phi\). As can be seen from Fig. 5(a), the intensity follows the analytically predicted \(\cos^{2}\phi\) and \(\sin^{2}\phi\) functions, respectively. Second, multi-layer systems are known for a high angle dependence. In Fig. 5(b), for both experimental and numerical data, we observe a blue-shift of the central transmission peak of an empty cavity as the polar angle \(\theta\) (along the \(x\)-axis, with respect to the surface normal) increases, for both \(x\) and \(y\) polarization. A quantitatively comparable dependence was numerically observed for all of the pixels. Accordingly, with an increase of the numerical aperture (NA) of the imaging system in front of the spectro-polarimeter, the peak is expected to broaden [28]. Further discussion of the impact of the incidence angle and the subsequent assumptions is given in Section 4.5.

## 4 Discussion

In this section, we discuss the performance of the presented spectro-polarimetric sensor and extend the concept to an optimal system.

### Signal-to-noise considerations

In order to assess the performance of our spectro-polarimetric system, it is important to take a look at the noise propagation through the system. This allows an estimation of the SNR of the Stokes parameters as a function of the pixel SNR of the sensor. For this analysis the condition number \(k(A)\) is used, since it is a measure of the sensitivity to variations for a standard system of equations \(A\mathbf{x}=\mathbf{b}\).
The condition number \\(k(A)\\) is defined by: \\[\\frac{||\\delta\\mathbf{x}||}{||\\mathbf{x}||}=k(A)\\frac{||\\delta\\mathbf{b}||}{|| \\mathbf{b}||}, \\tag{9}\\] where \\(\\delta\\mathbf{x}\\) and \\(\\delta\\mathbf{b}\\) are small variations on the corresponding vectors \\(\\mathbf{x}\\) and \\(\\mathbf{b}\\). A common way to compute the condition number of matrix \\(A\\), is to take the ratio of the largest singular value Figure 5: Dependence on polarization angle \\(\\phi\\) and incident angle \\(\\theta\\). (a) Normalized measured intensity at \\(\\lambda_{x}\\) and \\(\\lambda_{y}\\) as a function of polarization angle \\(\\phi\\) in case of arbitrarily selected pixel with nanostructure inclusion. The intensity follows analytically predicted \\(\\cos^{2}\\phi\\) and \\(\\sin^{2}\\phi\\) functions, respectively. (b) Blue-shift of the central wavelength of the peak \\(\\lambda_{\\text{c}}\\) for \\(x\\)- and \\(y\\)-polarization in case of an empty cavity. Simulations are shown in solid and dashed line, while the experimental points are depicted by blue and red squares, respectively. and the smallest singular value of the matrix \\(A\\). This method is used to compute the condition number in the paper. As can be seen from Eq. (9), the condition number \\(k(A)\\) expresses the proportionality between any variations of the known vector \\(\\mathbf{b}\\) and the unknown vector \\(\\mathbf{x}\\). Thus, in the presented system, it relates the noise of the measured intensities (shot noise, read-out noise, etc.) to the noise of the reconstructed spectro-polarimetric intensities \\(I_{pol}^{\\lambda_{n}}\\). Using the relation between the polarized intensities and Stokes parameters (Eqs. (3-5)) the expected value (\\((E)(x)\\)) and standard deviation (\\(\\sigma(x)\\)) of the reconstructed Stokes parameters can be derived as reported here (seen Appendix for derivation): \\[\\mathrm{E}(\\vec{S_{0}}) =S_{0} \\tag{10}\\] \\[\\mathrm{E}(\\vec{S_{1}}) =S_{1}\\] (11) \\[\\mathrm{E}(\\vec{S_{2}}) =S_{2}\\] (12) \\[\\sigma(\\vec{S_{0}}) =\\left(\\frac{k(A)}{\\mathrm{SNR}}\\right)\\] (13) \\[\\sigma(\\vec{S_{1}}) =\\sqrt{2}\\left(\\frac{k(A)}{\\mathrm{SNR}}\\right)\\] (14) \\[\\sigma(\\vec{S_{2}}) =\\sqrt{2}\\left(\\frac{k(A)}{\\mathrm{SNR}}\\right)\\,. \\tag{15}\\] From Eqs. (10-12) it can be seen that the reconstruction method is bias-free, since the expected values are equal to the true values. Note that the standard deviation of \\(\\vec{S_{0}}\\) is smaller than of other Stokes parameters due to the fact that \\(\\vec{S_{0}}\\) is computed/measured twice according to Eq. (3). Since the standard deviation is inversely proportional to the SNR, the SNR after reconstruction will be reduced by a factor \\(k(A)\\). For the system of equations shown in Eq. (2), using \\(\\alpha_{x}^{6}=0\\) and setting all others to \\(\\alpha_{p}^{\\lambda}=1\\), one obtains \\(k(A)=8.0552\\). Thus the noise in the reconstructed signal will be increased by at most this factor. In comparison, the polarimeter in [9] uses a metasurface as polarization sensitive lens. The different polarization states are focused on different parts of the sensor, thus the metasurface is spatially separated from the sensor by the focal length. This system has a condition number of \\(k(A)=3.6581\\). Another example is the spectro-polarimeter in [35], which achieves a condition number of \\(k(A)=2.082\\). However, an external spectrometer is needed in their setup. 
Despite the higher condition number, the nanostructure-modulated spectro-polarimeter presented in this paper can be directly integrated on top of a sensor and does not require an external spectrometer, resulting in a very compact system. Another factor that must be taken into account in the evaluation of the system SNR is the reduced transmittance and the presence of shot noise. The latter depends on the total number of photons reaching the detector, \(N_{\mathrm{phot}}\). In particular, the SNR of a shot-noise-limited pixel is given by \(\mathrm{SNR}_{\mathrm{shot}}=\sqrt{N_{\mathrm{phot}}}\). Thus, reducing the number of photons that reach the detector by a factor of 2 (50% transmission) reduces the SNR by a factor of \(\sqrt{0.5}\). Therefore, a system based on the measured structures would have an SNR reduced by 1.681 dB compared to an ideal system. However, the effect of this low transmission can be reduced by doubling the measurement duration (the integration time of the detector).

### Bandwidth considerations

Other than the SNR of the system, the achievable total bandwidth is also of importance for the applications of interest. The bandwidth of the presented spectro-polarimetric system depends on the reflection band of the mirrors and the maximum achievable separation of the transmission peaks. The relative bandwidth (\(\Delta f/f_{0}\)) of a DBR mirror using quarter-wavelength sections is given by [36]:

\[\frac{\Delta f}{f_{0}}=\frac{4}{\pi}\arcsin(\rho), \tag{16}\]

with \(\rho\) being the Fresnel reflection coefficient. From this equation, it is clear that the bandwidth of the mirror depends only on the difference between the refractive indices of the two materials used in the construction of the DBRs. In our case, the mirrors limit the bandwidth to \(\sim 400\) nm, see Appendix, but using more advanced mirror designs [37], it is possible to obtain a wider reflection band. Therefore, bandwidth limitations due to the mirrors can be mitigated. In practice, the total bandwidth is limited by the maximum achievable separation of the two linear orthogonal polarization peaks. As illustrated in the Appendix, the maximum separation of the two peaks is equal to 127 nm. It can be achieved with a subdiffractive grating of 200 nm width. This separation between the peaks ultimately determines the maximum achievable total bandwidth of the system, since one of the transmission peaks has to be outside of the reconstruction bands. This results in one row of the matrix having only one peak, which is the necessary condition for the spectro-polarimetric retrieval. The maximum number of bands can now be computed using:

\[N=\left\lfloor\frac{\text{BW}}{\Delta\lambda}\right\rfloor-1, \tag{17}\]

where BW is the total bandwidth of the system and \(\Delta\lambda\) is the width of a single spectral band.

### Spectral resolution

In the presented system, the spectral positions and the widths of the spectral bands have to be chosen carefully to obtain the optimal performance. By decreasing the width of a single band, the number of bands, and thus the spectral resolution, can be increased. However, as can be seen from Fig. 6, reducing the width of the spectral band \(\Delta\lambda\) below the FWHM of the transmission peaks greatly increases the condition number and, in turn, degrades the SNR of the system. This increase of the condition number \(k(A)\) can be attributed to the spectral overlap of the neighboring peaks. This translates to smaller differences between the rows of the reconstruction matrix, making the system of equations more linearly dependent. It is recommended to use \(\Delta\lambda=\text{FWHM}\), limiting the effective spectral resolution to the FWHM.

Figure 6: Condition number \(k(A)\) vs spectral resolution. The condition number \(k(A)\) of a reconstruction matrix with 3 bands is shown for different ratios between the single-band width \(\Delta\lambda\) and the transmission peak width FWHM, assuming Lorentzian-shaped peaks.
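The trend of Fig. 6 can be reproduced numerically. The sketch below (our own code; unit-height Lorentzian peaks and the ideal peak layout of Eq. (2) are assumed) builds the reconstruction matrix from band integrals of Lorentzians, in the spirit of Eq. (7), and evaluates \(k(A)\) for several ratios \(\Delta\lambda/\mathrm{FWHM}\), together with the band count of Eq. (17):

```python
# Numerical sketch (our own) of the Fig. 6 trend: reconstruction-matrix
# elements are band integrals of unit-height Lorentzian peaks (cf. Eq. (7));
# shrinking the band width below the FWHM inflates the condition number.
import numpy as np

FWHM = 1.0
g = FWHM / 2.0

def band_integral(lo, hi, l0):
    # Closed-form integral of g^2 / ((l - l0)^2 + g^2) over [lo, hi].
    return g * (np.arctan((hi - l0) / g) - np.arctan((lo - l0) / g))

# Ideal 3-band peak layout from Eq. (2): (x-peak band, y-peak band) per pixel;
# None encodes the suppressed x-peak of the grating pixel (alpha_x^6 = 0).
layout = [(2, 0), (1, 2), (0, 1), (0, 2), (2, 1), (None, 0)]

def cond_number(dl):
    centers = dl * np.arange(3)          # contiguous bands of width dl
    A = np.zeros((6, 6))
    for i, (bx, by) in enumerate(layout):
        for b in range(3):
            lo, hi = centers[b] - dl / 2, centers[b] + dl / 2
            if bx is not None:
                A[i, b] = band_integral(lo, hi, centers[bx])
            A[i, 3 + b] = band_integral(lo, hi, centers[by])
    return np.linalg.cond(A)

for ratio in (0.5, 1.0, 2.0, 4.0):
    print(ratio, cond_number(ratio * FWHM))  # k(A) grows sharply below ratio 1
print(127 // 1 - 1)   # Eq. (17): N = 126 bands for BW = 127 nm, dl = 1 nm
```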
### Handling a large number of bands

Another effect that must be taken into account in selecting the number of bands is the rank of the reconstruction matrix. The matrix without the grating will have a rank deficiency of 1 for an odd number of bands, while the rank deficiency for an even number of bands is 2. Thus, for an even number of bands, replacing a pixel with a grating cannot make the matrix invertible. Accordingly, to retrieve the polarization state for all bands, the number of bands must be segmented in sets with an odd number of bands. Since for a system of equations with an odd number of bands the condition number \(k(A)\) changes linearly with the number of bands, see Fig. 7(a), the reconstruction matrix should be constructed from submatrices that are as small as possible. Any number of bands greater than 4 can be written as a sum of 3s, 5s, and 7s, so it is possible to segment the reconstruction matrix in submatrices with these numbers of bands. In that case, the worst-case condition number of the total matrix would be that of the 7-band inversion, \(k(A)=18.36\) (under the assumption of a matrix with all unit \(\alpha\)'s except for the one that is 0). The behavior of such an optimized matrix is shown in Fig. 7(a). Still, it is recommended to use a system with a number of bands that is divisible by 3, since such a system will have an SNR that is more than double that of the presented worst-case (7 bands) scenario. In Fig. 7(b), the condition number \(k(A)\) is plotted for segmented systems (blocks of 3 spectral bands) when Lorentzian peaks are used to compute the transmission values in the matrix, thus also taking into account the spectral cross-coupling between all pixels. Here, a single bandwidth was chosen to be equal to the FWHM of the transmission peaks, since it is the limit for a low condition number \(k(A)\), as shown in Fig. 6. Because the tails of the Lorentzian functions leak into the bands of adjacent submatrices, the submatrices become partially dependent. The condition number of the matrix increases, but the segmenting still significantly reduces the condition number \(k(A)\) compared to the unsegmented initial matrix in Fig. 7(a). The plotted data of Fig. 7(b) is fitted by the curve:

\[k(A)=-15.69N^{-0.8}+14.90\;, \tag{18}\]

resulting in a fit with a coefficient of determination of \(R^{2}=0.9975\). Based on this fit, the condition number \(k(A)\) will converge to a value of 14.90 for a large number of bands.

Figure 7: Condition number \(k(A)\) vs number of spectral bands \(N\). (a) \(k(A)\) of a reconstruction matrix for \(N\) bands, assuming all elements have a unit magnitude at the peaks and zeros elsewhere. Red squares represent a direct inversion using a certain number of pixels and only a single pixel with a grating. Empty blue squares represent an inversion after segmenting the matrix in submatrices. (b) \(k(A)\) of a reconstruction matrix with \(N\) bands for \(\Delta\lambda\) equal to the FWHM, assuming Lorentzian-shaped peaks, as measured.

### Optimal design

Using the design limitations discussed previously, as an example of an ideal design, we can consider a spectro-polarimetric sensor with 126 bands (42 x 3), a bandwidth of 127 nm, and a spectral resolution of 1 nm, centered around \(\lambda=1500\) nm. Such a system, retrieving the first three Stokes parameters (\(S_{0}\), \(S_{1}\), \(S_{2}\)), would consist of 504 pixels and would have a condition number of 14.57.
(a) \\(k(A)\\) of a reconstruction matrix for N bands, assuming all elements of a unit magnitude at the peaks and zeros elsewhere. Red squares represent a direct inversion using a certain number of pixels and only a single pixel with a grating. Empty blue squares represent an inversion after segmenting the matrix in submatrixes. (b) \\(k(A)\\) of a reconstruction matrix with \\(N\\) bands for \\(\\Delta\\lambda\\) equal to the FWHM, assuming a Lorentzian-shape peaks, as measured. number of 14.57. This results in a reconstruction of the \\(S_{0}\\) with a SNR that is -11.63 dB below the shot noise limited SNR. The \\(S_{1}\\) and \\(S_{2}\\) would be reconstructed with a SNR that is -13.14 dB below the noise shot limited SNR. When implementing this design in an imaging device the chief ray angle (CRA) across the detector surface has to be taken into consideration. For a conventional optical system this angle will increase radially from the center of the detector. This increasing incident angle leads to a blue-shift of the transmission peaks. In principle, this spectral shift can be compensated by scaling the effective refractive index of the cavity, e.g. scaling the size of the high-index inclusions [38]. Alternatively, a telecentric optical system can be used. In such a system the CRA is constant over the entire detector and thus no blue-shift occurs. Furthermore, a system with a low NA is recommend since a high NA will increase the transmission peak width [28], thus reducing the spectral resolution of the system. The presence of a focused beam will influence the polarization reconstruction. When a fully polarized beam is focused by an optical system some of the light will be cross polarized. For example, the far-field of a focused beam arising from a microscope objective, illuminated by a fully x-polarized incident beam, is given by [39]: \\[\\mathbf{E}_{\\infty}=E_{inc}\\frac{1}{2}\\left[\\begin{array}{c}(1+\\cos\\theta)- (1-\\cos\\theta)\\cos{(2\\phi)}\\\\ -(1-\\cos\\theta)\\sin{(2\\phi)}\\\\ -2\\cos\\phi\\sin\\theta\\end{array}\\right]\\sqrt{\\frac{n_{1}}{n_{2}}}\\left(\\cos \\theta\\right)^{\\frac{1}{2}}, \\tag{19}\\] where \\(\\mathbf{E}_{\\infty}\\) is the electric far-field of the focused beam in Cartesian coordinates, \\(E_{inc}\\) the amplitude of the incident x polarized beam, \\(n_{1}\\) and \\(n_{2}\\) the refractive indices of the media before and after the lens and \\(\\phi\\), \\(\\theta\\) the azimuthal and polar angles of the far-field. As can be seen from this equation, the cross polarized component increases with the polar angle.This change in polarization can be interpreted as a depolarization effect that occurs before the planar spectro-polarimeter. Thus, the reconstructed polarization state will have a lower degree of polarization compared to the incident beam. Based on Eq. (19), a maximum coupling from the x to the y component of 0.7% is expected, at a polar angle of 10\\({}^{\\circ}\\). From a more advanced analysis that takes into account the full coupling within the structure, a higher value of 2% is obtained. This detailed theoretical analysis is not reported here for sake of brevity and will be the topic of another extended paper. From these considerations, it is clear that in order to minimize this effect, a telecentric and low NA optical system is recommended. Finally, we present a few considerations on the impact of the proposed concept on the spatial resolution achievable in a system with a given number of pixels. 
Finally, we present a few considerations on the impact of the proposed concept on the spatial resolution achievable in a system with a given number of pixels. As discussed in the previous example, in order to retrieve the first three Stokes parameters over 3 spectral bands, 12 pixels have to be used. Thus, for comparison's sake, we can say that, from the spatial resolution point of view, 4 pixels are required per spectral band. In typical imaging multi-spectral systems, only one pixel per spectral band is used. Therefore, for the same number of sensor pixels, the spatial resolution of the proposed system is a quarter of that of a conventional multi-spectral camera without any polarimetric functionality. On the other hand, in a typical polarimeter, 4 pixels are used to determine the polarization state. As a consequence, we can conclude that the effect on the spatial resolution of the presented system is comparable to that of a typical spectro-polarimeter obtained by combining "conventional" spectral and polarimetric components.

## 5 Conclusion

In this work, we have shown the feasibility of a planar spectro-polarimeter. As a proof of concept, we have experimentally demonstrated a set of 6 pixels with transmission peaks of 50% and \\(\\mathrm{FWHM}=3.6\\) nm. The peaks were separated into three spectral bands of \\(\\Delta\\lambda=16\\) nm and sorted by their polarization state. Using the measured data in the reconstruction matrix, we obtained a condition number of \\(k(A)=8.43\\), which is very close to the theoretical limit of \\(k(A)=8.06\\). Such experimental results permit the reconstruction of the first 3 Stokes parameters with an SNR 10.76 dB below the shot noise limited SNR. In addition, the limits of the proposed spectro-polarimetric design were analyzed with respect to the highest number of bands possible and the highest obtainable spectral resolution. The total system bandwidth of the current architecture is limited to 127 nm. The maximum condition number, limiting the SNR of the reconstruction, is estimated to be of the order of 14.57, given that the transmission peaks are separated by their FWHM. With a spectral resolution of 1 nm, such a system could have a bandwidth of 127 nm divided into 126 bands. The SNR of the Stokes parameters would be 13.14 dB below the shot noise limited SNR.

In perspective, the spectral resolution of the system is limited only by the reflectivity of the mirrors. Thus, subnanometer resolution is possible. Moreover, the design could be scaled to other spectral ranges with respective changes in material selection, e.g. using TiO\\({}_{2}\\) instead of Si in the visible spectral range to minimize intrinsic losses. Using several different cavities at once would allow a very broadband sensor. Also, the system would benefit from a more anisotropic inclusion inside the cavity in place of the grating in order to increase the bandwidth. Finally, the system could be extended with additional layers of retarders to enable the measurement of circularly polarized light, enabling the retrieval of the full Stokes vector.

## Appendix

### Modulation via transverse dimensions

The central wavelength \\(\\lambda_{\\mathrm{c}}\\) changes with respect to the size of the inclusion. We show the control of \\(\\lambda_{\\mathrm{c}}\\) for \\(x\\)- and \\(y\\)-polarization by changing the diameter along the \\(x\\)-axis for two different cases: a polarization-sensitive inclusion and a grating, see Fig. 8(a) and Fig. 8(b), respectively.

### Reflectivity and bandwidth of DBRs

The fabricated DBR mirrors have slightly different properties due to different deposition conditions, as discussed in Sec. 2.3.
Regardless of that, both mirrors have a high reflectivity at \\(\\lambda\\) = 1500 nm, see Fig. 9(a). Despite the fact that the mirrors differ from each other and from the ideal \\(\\lambda/4\\) case, their impact on the central wavelength \\(\\lambda_{\\mathrm{c}}\\) is negligible, see Fig. 9(b).

Figure 8: Spectral positions of transmission peaks for \\(x\\) and \\(y\\) polarization as a function of the width \\(D_{x}\\) of the inclusions. (a) A case of an array constituted of elliptical nanostructures with \\(D_{y}=250\\) nm, while \\(D_{x}\\) was varied from 0 nm to 500 nm. (b) A case of a linear grating with its width varied from 0 nm to 500 nm (equal to the period \\(P\\)). The maximum separation of the peaks is 127 nm.

Figure 9: Reflectivity of the fabricated mirrors and their impact on the FP resonance. (a) Measured reflection spectra of the bottom and the top DBRs. The bandwidth and reflectance in the spectral range of interest are highlighted. Due to the mismatch of the mirrors, the bandwidth is smaller than that of an individual mirror. (b) Simulated central positions of the FP resonance depending on the effective length of the cavity, when different mirrors are used: ideal (\\(\\lambda/4\\), for 1500 nm) and fabricated. For comparison, a dashed line represents a case for which the mirrors and the effective length of the cavity are \\(\\lambda/4\\) and \\(\\lambda/2\\), respectively.

### Evaluation of contamination and waviness

The fabrication faced several challenges, as discussed in Sec. 2.3. Here we show the numerical simulation results on the impact of Cr contamination of the cavity, see Fig. 10, and of the waviness of the top mirror, see Fig. 11. As illustrated in Fig. 10(a), some amount of Cr was left in the cavity, approx. 0.3% of the total cavity volume. Cr has a high absorption in the visible spectral range; thus, the transmittance of the cavity decreases rapidly as the Cr volume in the cavity increases, see Fig. 10(b), while the FWHM increases, see Fig. 10(c). The nanostructures induce waviness of the layers on top. Even after the planarization, a waviness of amplitude \\(A_{\\mathrm{w}}=15\\) nm remains, see Fig. 11(a). In general, our simulations show that waviness may decrease the transmittance and increase the FWHM, as shown in Fig. 11(b) and Fig. 11(c), respectively.

Figure 10: Impact of cavity contamination with Cr. (a) Colored SEM image of a horizontal cross-section of a single elliptical nanostructure inside the cavity. Si is red, SiO\\({}_{2}\\) is grey, and Cr is yellow. (b) Simulated relative peak transmittance (intensity) as a function of the volume of Cr in the cavity. Cr is considered elliptical for simplicity of the model. (c) Simulated relative peak width (FWHM) as a function of the volume of Cr in the cavity. The circle highlights the anticipated value.

Figure 11: Impact of top mirror waviness. (a) Colored SEM image of a vertical cross-section of a single high-index DBR layer. TiO\\({}_{2}\\) is blue, SiO\\({}_{2}\\) is grey. The uneven surface can be described as a sinusoidal surface with \\(A_{\\mathrm{w}}=15\\) nm. The scale bar equals 100 nm. (b) Simulated relative peak transmittance (intensity) as a function of the waviness amplitude. (c) Simulated relative peak width (FWHM) as a function of the waviness amplitude. The circle highlights the anticipated value based on the experimentally obtained \\(A_{\\mathrm{w}}=15\\) nm.

### Derivation of signal-to-noise ratio of Stokes parameters

In order to derive the SNR of the Stokes parameters, the noise on the measured intensities first has to be defined.
A convenient choice is to assume unit-magnitude intensity on the pixels with some additive zero-mean Gaussian noise. Using this normalized intensity, the SNR is inversely proportional to the normalised standard deviation (\\(\\sigma\\)) of the Gaussian distribution (\\(\\mathcal{N}(\\mu,\\sigma^{2})\\)). This normalization helps simplify the derivation. Thus, the intensity of a pixel is given by:

\\[I=I_{s}+\\delta I=1+\\mathcal{N}\\left(0,\\frac{1}{\\text{SNR}^{2}}\\right), \\tag{20}\\]

where \\(I_{s}\\) is the signal intensity with a value of 1 and \\(\\delta I\\) is the noise intensity equal to the Gaussian noise. If it is now assumed that all pixels receive equal intensity (a uniform assumption on the polarization state and spectral signal), the expected length of the measured intensity vector \\(\\mathbf{I}\\) (\\(E(||\\mathbf{x}||)\\), with \\(||\\mathbf{x}||\\) being the 2-norm of a vector) can be computed; it is needed later in the derivation. The intensity vector is once again split into a signal part \\(\\mathbf{I_{s}}\\) and a noise part \\(\\delta\\mathbf{I}\\), and can be written as:

\\[\\mathbf{I}=\\mathbf{I_{s}}+\\delta\\mathbf{I}=\\mathbf{1}+\\mathcal{N}\\left(\\mathbf{0},\\frac{1}{\\text{SNR}^{2}}\\right), \\tag{21}\\]

where \\(\\mathbf{1}\\), \\(\\mathbf{0}\\) and \\(\\text{SNR}^{2}\\) are vectors with ones, zeros, or \\(\\frac{1}{\\text{SNR}^{2}}\\) as elements. In order to compute the expected length of the measured intensity vector, the expected lengths of the signal part and the noise part are computed separately. For the signal intensity vector, the length is simply given by:

\\[||\\mathbf{I_{s}}||=\\sqrt{N}, \\tag{22}\\]

where \\(N\\) is the number of elements in the vector. The expected length of the noise intensity vector is given by:

\\[E\\left(||\\delta\\mathbf{I}||\\right)=\\sqrt{E\\left(\\delta\\mathbf{I}^{\\top}\\delta\\mathbf{I}\\right)}=\\sqrt{E\\left(\\delta\\mathbf{I}^{\\top}U\\delta\\mathbf{I}\\right)}, \\tag{23}\\]

with \\(U\\) the unit matrix. In Eq. (23), the expected value of the quadratic form of a random vector (\\(\\mathbf{X}\\)) can be recognised; this form can be rewritten in the following way:

\\[E\\left(\\mathbf{X}^{\\top}A\\mathbf{X}\\right)=E\\left(\\mathbf{X}^{\\top}\\right)AE\\left(\\mathbf{X}\\right)+\\mathcal{T}\\left(AK_{\\mathbf{XX}}\\right), \\tag{24}\\]

where \\(\\mathcal{T}\\) is the trace of a matrix and \\(K_{\\mathbf{XX}}\\) is the auto-covariance matrix of the random vector. Using this property, Eq. (23) can be rewritten as:

\\[E\\left(||\\delta\\mathbf{I}||\\right)=\\sqrt{E\\left(\\delta\\mathbf{I}^{\\top}\\right)UE\\left(\\delta\\mathbf{I}\\right)+\\mathcal{T}\\left(UK_{\\delta\\mathbf{I}\\delta\\mathbf{I}}\\right)}. \\tag{25}\\]

Since the mean of each element in the random vector is zero, the first term in the square root vanishes and the diagonal of the auto-covariance matrix equals the variance of each element of the vector, resulting in:

\\[E\\left(||\\delta\\mathbf{I}||\\right)=\\sqrt{\\sum_{n=1}^{N}\\sigma_{n}^{2}}=\\sqrt{\\sum_{n=1}^{N}\\frac{1}{\\text{SNR}^{2}}}=\\frac{\\sqrt{N}}{\\text{SNR}}, \\tag{26}\\]

where \\(n\\) is the element number of the vector.
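Equation (26) is easy to verify numerically; the following is a quick Monte Carlo sketch with an arbitrary SNR of 100 and the 504-pixel count of the optimal design above.

```python
# Monte Carlo check of Eq. (26): for N pixels of unit signal with additive
# zero-mean Gaussian noise of standard deviation 1/SNR, the expected length
# of the noise vector is sqrt(N)/SNR.
import numpy as np

rng = np.random.default_rng(0)
N, snr, trials = 504, 100.0, 20000
noise = rng.normal(0.0, 1.0 / snr, size=(trials, N))
print(np.linalg.norm(noise, axis=1).mean())   # ~0.2245
print(np.sqrt(N) / snr)                       # sqrt(504)/100 = 0.2245
```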
Now that the expected lengths of both the signal and the noise vector of the measured intensities are determined, the condition number of the reconstruction matrix (\\(k(A)\\)) as defined in Eq. (9) can be used to determine the noise of the reconstructed intensities (\\(I_{pol}^{\\lambda}\\)), resulting in:

\\[\\frac{||\\delta\\mathbf{I}_{pol}^{\\lambda}||}{||\\mathbf{I}_{pol}^{\\lambda}||}=k(A)\\frac{||\\delta\\mathbf{I}||}{||\\mathbf{I_{s}}||}=\\frac{k(A)}{\\text{SNR}}, \\tag{27}\\]

where \\(||\\delta\\mathbf{I}_{pol}^{\\lambda}||\\) is the expected length of the noise vector of the reconstructed intensities and \\(||\\mathbf{I}_{pol}^{\\lambda}||\\) is the length of the actual reconstructed intensities. Since Gaussian distributions remain Gaussian under linear transformations, the noise in the reconstructed intensities will still be Gaussian. If it is now assumed that the noise is spread equally over all reconstructed intensities, a reconstructed intensity for a single polarization and wavelength is given by:

\\[\\tilde{I}_{pol}^{\\lambda}=I_{pol}^{\\lambda}+\\mathcal{N}\\left(0,\\left(\\frac{k(A)}{\\text{SNR}}\\right)^{2}\\right), \\tag{28}\\]

with the tilde differentiating the true value \\(I_{pol}^{\\lambda}\\) from the obtained noisy value \\(\\tilde{I}_{pol}^{\\lambda}\\). Using the definition of the Stokes parameters, Eqs. (3-5), and the summing property of Gaussian distributions:

\\[\\mathcal{N}\\left(\\mu_{1}+\\mu_{2},\\sigma_{1}^{2}+\\sigma_{2}^{2}\\right)=\\mathcal{N}\\left(\\mu_{1},\\sigma_{1}^{2}\\right)+\\mathcal{N}\\left(\\mu_{2},\\sigma_{2}^{2}\\right), \\tag{29}\\]

it can be found that the second and the third estimated Stokes parameters are:

\\[\\tilde{S}_{1} =I_{x}-I_{y}+\\mathcal{N}\\left(0,\\left(\\frac{\\sqrt{2}k(A)}{\\text{SNR}}\\right)^{2}\\right) \\tag{30}\\]

\\[\\tilde{S}_{2} =I_{45^{\\circ}}-I_{135^{\\circ}}+\\mathcal{N}\\left(0,\\left(\\frac{\\sqrt{2}k(A)}{\\text{SNR}}\\right)^{2}\\right). \\tag{31}\\]

The zeroth Stokes parameter is measured twice, according to both definitions in Eq. (3), and is averaged between the two measurements, so it is given by:

\\[\\tilde{S}_{0} =\\frac{I_{x}+I_{y}+\\mathcal{N}\\left(0,\\left(\\frac{\\sqrt{2}k(A)}{\\text{SNR}}\\right)^{2}\\right)}{2}+\\frac{I_{45^{\\circ}}+I_{135^{\\circ}}+\\mathcal{N}\\left(0,\\left(\\frac{\\sqrt{2}k(A)}{\\text{SNR}}\\right)^{2}\\right)}{2} \\tag{32}\\]

\\[=\\frac{I_{x}+I_{y}}{2}+\\frac{I_{45^{\\circ}}+I_{135^{\\circ}}}{2}+\\mathcal{N}\\left(0,\\left(\\frac{k(A)}{\\text{SNR}}\\right)^{2}\\right). \\tag{33}\\]

From these results, Eqs. (10-12) are easily obtained.
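The noise bookkeeping in Eqs. (30)-(33) can likewise be checked by simulation; in the sketch below, the experimentally obtained \\(k(A)=8.43\\) and an arbitrary SNR of 100 are assumed, with unit reconstructed intensities.

```python
# Simulation of Eqs. (30)-(33): differencing two reconstructed intensities
# scales the noise standard deviation by sqrt(2), while averaging the two
# estimates of S0 restores a standard deviation of k(A)/SNR.
import numpy as np

rng = np.random.default_rng(1)
k_A, snr, trials = 8.43, 100.0, 200000
sigma = k_A / snr                             # noise std of one reconstructed intensity
I = rng.normal(1.0, sigma, size=(trials, 4))  # I_x, I_y, I_45, I_135 (unit signals)
S1 = I[:, 0] - I[:, 1]
S0 = 0.5 * (I[:, 0] + I[:, 1]) + 0.5 * (I[:, 2] + I[:, 3])
print(S1.std(), np.sqrt(2) * sigma)           # ~0.1192 for both
print(S0.std(), sigma)                        # ~0.0843 for both
```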
## Funding

Part of the activities has been supported by the TNO internal program SMO 2018/19 Space Scientific Instruments. L.P.S. was funded by TNO (Netherlands Organisation for Applied Scientific Research) under the TU/e (Eindhoven University of Technology) program 10022593 IMPULS II: Metrology 4 3D nano. J.B. has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 675745. F.S. acknowledges support by the German Ministry of Education and Research (FKZ 03ZZ0434, FKZ 13N14877). F.E. and H.K. acknowledge support by the German Ministry of Education and Research (ID 13XP5053A) and by the Max Planck School of Photonics.

## Acknowledgments

The authors are grateful to Dennis Arslan and Isabelle Staude for access to the spectroscopy setup and related technical assistance, Michael Steinert for SEM images, and Pallabi Paul and Adriana Szeghalmi for ALD of the SiO\\({}_{2}\\) cladding. The authors acknowledge Daniel Voigt, Holger Schmidt, Thomas Kasebier and Jorg Fuchs for technical assistance in the nanostructuring of Si, Zuzanna Deutschman for initial simulations of the polarization-sensitive FP cavities, and Tiberiu Ceccotti for extensive discussions and helpful suggestions.

## Disclosures

The authors declare no conflicts of interest related to this article.

## References

* [2] M. J. Khan, H. S. Khan, A. Yousaf, K. Khurshid, and A. Abbas, "Modern trends in hyperspectral image analysis: A review," IEEE Access **6**, 14118-14129 (2018).
* [3] D. Hillier, "Spectropolarimetry and imaging polarimetry," in _Ultraviolet-Optical Space Astronomy Beyond HST_, vol. 164 (1999), p. 90.
* [4] E. Kulu, "Nanosats database," (2019).
* [5] P.-Y. Deschamps, F.-M. Breon, M. Leroy, A. Podaire, A. Bricaud, J.-C. Buriez, and G. Seze, "The POLDER mission: instrument characteristics and scientific objectives," IEEE Transactions on Geosci. Remote. Sens. **32**, 598-615 (1994).
* [7] A. Arbabi, Y. Horie, M. Bagheri, and A. Faraon, "Dielectric metasurfaces for complete control of phase and polarization with subwavelength spatial resolution and high transmission," Nat. Nanotechnol. **10**, 937-943 (2015).
* [8] W. Yue, S.-S. Lee, and E.-S. Kim, "Angle-tolerant polarization-tuned color filter exploiting a nanostructured cavity," Opt. Express **24**, 17115 (2016).
* [9] S. Wei, Z. Yang, and M. Zhao, "Design of ultracompact polarimeters based on dielectric metasurfaces," Opt. Lett. **42**, 1580 (2017).
* [10] N. A. Rubin, A. Zaidi, M. Juhl, R. P. Li, J. B. Mueller, R. C. Devlin, K. Leosson, and F. Capasso, "Polarization state generation and measurement with a single metasurface," Opt. Express **26**, 21455 (2018).
* [11] E. Arbabi, S. M. Kamali, A. Arbabi, and A. Faraon, "Full-Stokes imaging polarimetry using dielectric metasurfaces," ACS Photonics **5**, 3132-3140 (2018).
* [12] C. Yan, X. Li, M. Pu, X. Ma, F. Zhang, P. Gao, K. Liu, and X. Luo, "Mid-infrared real-time polarization imaging with all-dielectric metasurfaces," Appl. Phys. Lett. **114**, 161904 (2019).
* [13] M. Faraji-Dana, E. Arbabi, A. Arbabi, S. M. Kamali, H. Kwon, and A. Faraon, "Compact folded metasurface spectrometer," Nat. Commun. **9**, 4196 (2018).
* [14] Y. Horie, A. Arbabi, E. Arbabi, S. M. Kamali, and A. Faraon, "Wide bandwidth and high resolution planar filter array based on DBR-metasurface-DBR structures," Opt. Express **24**, 11677 (2016).
* [15] W. Yue, Y. Li, C. Wang, Z. Yao, S.-S. Lee, and N.-Y. Kim, "Color filters based on a nanoporous Al-AAO resonator featuring structure tolerant color saturation," Opt. Express **23**, 27474 (2015).
* [16] W. Yue, S.-S. Lee, E.-S. Kim, and B.-G. Lee, "Uniformly thick tri-color filters capitalizing on an etalon with a nanostructured cavity," Appl. Opt. **54**, 5866 (2015).
* [17] A. M. Shaltout, J. Kim, A. Boltasseva, V. M. Shalaev, and A. V. Kildishev, "Ultrathin and multicolour optical cavities with embedded metasurfaces," Nat. Commun. **9**, 1-7 (2018).
* [18] J. Berzins, S. Fasold, T. Pertsch, S. M. Baumer, and F. Setzpfandt, "Submicrometer nanostructure-based RGB filters for CMOS image sensors," ACS Photonics **6**, 1018-1025 (2019).
* [19] W. T. Chen, P. Torok, M. R. Foreman, C. Y. Liao, W.-Y. Tsai, P. R. Wu, and D. P. Tsai, "Integrated plasmonic metasurfaces for spectropolarimetry," Nanotechnology **27**, 224002 (2016).
* [20] F. Ding, A. Pors, Y. Chen, V. A. Zenin, and S. I.
Bozhevolnyi, "Beam-size-invariant spectropolarimeters using gap-plasmon metasurfaces," ACS Photonics **4**, 943-949 (2017).
* [21] X. Tu, D. J. Spires, X. Tian, N. Brock, R. Liang, and S. Pau, "Division of amplitude RGB full-Stokes camera using micro-polarizer arrays," Opt. Express **25**, 33160-33175 (2017).
* [22] J. Li, H. Wu, and C. Qi, "Ultracompact focal plane snapshot spectropolarimeter," Appl. Opt. **58**, 7603-7608 (2019).
* [23] F. Silvestri, J. Berzins, Z. Deutschmann, G. Gerini, and S. M. B. Baumer, "Optical device and spectrometer comprising such a device," (2019). EP3543665A1, WO2019182444A1.
* [24] N. Hodgson and H. Weber, _Optical resonators: fundamentals, advanced concepts, applications_ (Springer Science & Business Media, 2005).
* [25] Y. Wang, M. Zheng, Q. Ruan, Y. Zhou, Y. Chen, P. Dai, Z. Yang, Z. Lin, Y. Long, Y. Li, N. Liu, C.-W. Qiu, J. K. W. Yang, and H. Duan, "Stepwise-nanocavity-assisted transmissive color filter array microprints," Research **2018**, 1-10 (2018).
* [26] T. C. Choy, _Effective medium theory: principles and applications_, vol. 1 (Oxford University Press, 2015).
* [27] R. M. A. Azzam, "Stokes-vector and Mueller-matrix polarimetry," J. Opt. Soc. Am. A **33**, 1396 (2016).
* [28] P. Atherton, N. K. Reay, J. Ring, and T. Hicks, "Tunable Fabry-Perot filters," Opt. Eng. **20**, 206806 (1981).
* [29] H. Knopf, N. Lundt, T. Bucher, S. Hofling, S. Tongay, T. Taniguchi, K. Watanabe, I. Staude, U. Schulz, C. Schneider, and F. Eilenberger, "Integration of atomically thin layers of transition metal dichalcogenides into high-Q, monolithic Bragg-cavities: an experimental platform for the enhancement of the optical interaction in 2D-materials," Opt. Mater. Express **9**, 598-610 (2019).
* [30] T. Siefke, S. Kroker, K. Pfeiffer, O. Puffky, K. Dietrich, D. Franta, I. Ohlidal, A. Szeghalmi, E.-B. Kley, and A. Tunnermann, "Materials pushing the application limits of wire grid polarizers further into the deep ultraviolet spectral range," Adv. Opt. Mater. **4**, 1780-1786 (2016).
* [31] A. van Amerongen, J. Rietjens, M. Smit, D. van Loon, H. van Brug, W. van der Meulen, M. Esposito, and O. Hasekamp, "SPEX: the Dutch roadmap towards aerosol measurement from space," in _International Conference on Space Optics -- ICSO 2016_, vol. 10562 (International Society for Optics and Photonics, 2017), p. 105621O.
* [32] J. M. Bennett, E. Pelletier, G. Albrand, J. P. Borgogno, B. Lazarides, C. K. Carniglia, R. A. Schmell, T. H. Allen, T. Tuttle-Hart, K. H. Guenther, and A. Saxer, "Comparison of the properties of titanium dioxide films prepared by various techniques," Appl. Opt. **28**, 3303-3317 (1989).
* [33] P. Johnson and R. Christy, "Optical constants of transition metals: Ti, V, Cr, Mn, Fe, Co, Ni, and Pd," Phys. Rev. B **9**, 5056 (1974).
* [34] K. Meyer, H.-J. Tiller, E. Welz, and W. Kuhn, "Modifizierung von SiO\\({}_{2}\\)-Oberflachen mit Hilfe von Plasmen. Teil I: EPR-spektroskopische Untersuchung der Defektzentren und der Einfluss des Plasmatragergases auf deren Bildung," Zeitschrift fur Chemie **14**, 146-150 (1974).
* [35] H.-T. Chen, A. J. Taylor, and N. Yu, "A review of metasurfaces: physics and applications," Rep. Prog. Phys. **79**, 076401 (2016).
* [36] S. J. Orfanidis, "Electromagnetic waves and antennas," (2016). [Online]. Available: http://eceweb1.rutgers.edu/~orfanidi/ewa/.
* [37] Y. K. Zhong, S. M. Fu, S. L. Yan, P. Y. Chen, and A.
Lin, "Arbitrarily-wide-band dielectric mirrors and their applications to SiGe solar cells," IEEE Photonics J. **7**, 1-12 (2015).
* [38] L. Frey, L. Masarotto, M. Armand, M.-L. Charles, and O. Lartigue, "Multispectral interference filter arrays with compensation of angular dependence or extended spectral range," Opt. Express **23**, 11799-11812 (2015).
* [39] L. Novotny and B. Hecht, _Propagation and focusing of optical fields_ (Cambridge University Press, 2006), pp. 45-88.
We present a planar spectro-polarimeter based on Fabry-Perot cavities with embedded polarization-sensitive high-index nanostructures. A 7 \\(\\mu\\)m-thick spectro-polarimetric system for 3 spectral bands and 2 linear polarization states is experimentally demonstrated. Furthermore, an optimal design is theoretically proposed, estimating that a system with a bandwidth of 127 nm and a spectral resolution of 1 nm is able to reconstruct the first three Stokes parameters with a signal-to-noise ratio of -13.14 dB with respect to the shot noise limited SNR. The pixelated spectro-polarimetric system can be directly integrated on a sensor, thus enabling applicability in a variety of miniaturized optical devices, including but not limited to satellites for Earth observation.
# Challenges in data-based geospatial modeling for environmental research and practice

Diana Koldasbayeva, Polina Tregubova, Mikhail Gasanov, Alexey Zaytsev, Anna Petrovskaia, Evgeny Burnaev

Skolkovo Institute of Science and Technology, Bolshoy Boulevard 30, bld. 1, 121205 Moscow, Russia

To date, obtaining spatial predictions is an essential step in the monitoring, assessment, and prognosis tasks applicable to all kinds of Earth systems on both local and global scales (Figure 1). Regional spatial analysis for areas of interest now plays a crucial role in risk-sensitive land use and vulnerability assessment in the face of environmental sustainability threats, climate change urgency, and the occurrence of disasters such as fires [1, 2, 3], floods [4, 5, 6], and droughts [7, 8]; in biodiversity conservation prioritisation and action planning [9, 10, 11]; natural resources inventory [12, 13, 14]; land cover inventory and change detection [15, 16]; ecosystem functioning assessment [17, 18]; and other environment-related tasks. Spatial modelling results can be not only the final expected outcome but also an intermediate step and a required basis for subsequent system analysis. For instance, forest maps can be used to estimate how vulnerable vegetation is to events contributing to climate change, such as cycles of forest damage and forest succession after fires [23, 24], and to assess the long-term sustainability of forest carbon sinks [25]. Another example is the prediction of the quality of resources such as soil [26] based on environmental predictor maps.

One of the most common use cases is applying land use and land cover (LULC) map products to a wide range of research and practical issues. Land cover maps can be used to estimate environment-related phenomena, such as ecosystem services [17, 18], assess spatiotemporal resource changes, and distinguish influencing factors [26, 27]. Apart from that, LULC products serve to enhance prediction -- for example, to stratify modelling solutions (ensembling) in order to raise forecast precision [28]. The products can also serve as label data for developing new prediction approaches -- for example, to classify single-date images in order to obtain large-area cover maps [29].

The expectations about the usefulness of mapping for developing decision-making tools have been quite high since at least the beginning of the century [30]. Being not only a tool for purely increasing our knowledge about the environment, geospatial predictions have already been included as an essential base for policy and coordinated action support. For instance, fire mapping supports the Monitoring Trends in Burn Severity (MTBS) program [31], capturing the burn severity and extent of large fires for monitoring the effectiveness of the National Fire Plan. Invasive species habitat suitability mapping informs decision-making by identifying high-risk species and pathways, increasing information exchange, action efficiency, and cost savings within the U.S. Department of the Interior Invasive Species Strategic Plan [32].
Another example is the geospatial assessment and management of flood risks as an information tool to plan and prioritise technical, financial, and political decisions regarding flood risk management within Directive 2007/60/EC (2007) [33]. It is highlighted that Earth observation global maps play a crucial role in supporting key aspects of the Paris Agreement, such as making nationally determined contributions, enhancing the transparency of national GHG (greenhouse gas) reporting, managing GHG sinks and reservoirs, and developing market-based solutions [34]. On a global scale, spatial mapping results can serve as both inputs for integrated assessment models (IAMs) and target output data to forecast and understand the postponed consequences of changing socioeconomic development and climate change scenarios, which helps to plan climate change actions considering other sustainable development goals [35]. Additionally, information from global mapping products can fill the blind spots where domestic land cover inventories are poorly organised and impede coordinated responses to global challenges [34].

At the same time, the question of the quality of spatial predictions and of the possible struggles to achieve trustworthy results has been drawing much attention recently. One of the most important concerns lies in the very nature of data-based modelling -- that is, the belief that knowledge can be obtained through observation [36]. Thus, proper techniques for managing data from geospatial observations are a key question. Another issue related to efficient and fair data handling is the existing gap between domain specialists and applied data scientists, both underrepresented in each other's fields. In recent work [37], it was emphasised that ignoring the spatial nature of the data led to a misleadingly high predictive power of the model, while appropriate spatial model validation methods revealed poor relationships between the target characteristic -- aboveground forest biomass -- and the selected predictors. On the contrary, in [38] the idea of spatial validation is critically discussed, and other approaches to overcoming biases in the data are proposed instead. The importance of spatial dependence between training and test sets and its influence on the model generalisation capabilities in Earth observation data classification is addressed in [39]. Other examples of issues in global environmental spatial mapping are distribution shift, data concentration, and the assessment of predictions' accuracy, which are discussed in a recent comment article [40].

Figure 1: Examples of geospatial mapping performed for different tasks of environmental monitoring and assessment: a) maps of forest disturbance regimes of Europe [19]; b) land cover and mapping of losses for different types of forest in Indonesia [20]; c) maps of the contribution of soil organic carbon (SOC) fractions to SOC for the selected depths of 0-5, 5-15, and 15-30 cm, obtained for Australia [21]; d) maps of chlorophyll-a estimation derived from Sentinel-2 data in the Barents Sea [22].

Thus, given the confusion about the modelling process and the quality estimation of results, and in light of the rising demand for spatial predictions, an overview of common struggles in geospatial modelling and of relevant approaches and tools to address the issues is of both scientific and practical use. In addition to the existing literature background [40, 41, 42, 43, 44, 45, 46, 47], this review aims to comprehensively address the limitations of data-driven geospatial mapping at each step of predicting the spatial distribution of target features.
Here we provide a practical guide, discussing the challenges associated with using nonuniformly distributed real-world data from various domains in environmental research, including those from open sources. These challenges include dealing with limited observations and with imbalanced and autocorrelated data, maintaining the model training process, and assessing prediction quality and uncertainty (Figure 2). Throughout the review, we provide examples from recent environmental geospatial modelling research to illustrate the identified problems, highlight the underlying theoretical concepts, and present approaches to evaluating and overcoming each specific limitation.

## 1 Data-driven approaches to forecasting spatial distribution of environmental features

In this review, we analyse geospatial modelling based on data-driven approaches, meaning that models are built with parameters learned from observational data, thus simulating new data minimally different from the "ground truth" under the same set of descriptive features. Among the standards guiding the implementation of data-based model applications, CRISP-DM is the best known. There are, however, other workflows with more nuanced guidelines tailored to specific problems or more mature fields of data-based modelling [48, 49]. Recently, guidelines and checklists have been proposed for environmental modelling tasks to help address common problems and improve the reliability of outputs [45, 50]. For instance, a checklist [45] for ecological niche modelling suggests using a standardised format for reporting the modelling procedure and results to ensure research reproducibility. It emphasises the importance of disclosing the details of each prediction-obtaining step, from data collection to model application and result evaluation. In general, the main steps to solve applied problems using data-driven algorithms can be the following [51] (a minimal end-to-end sketch is given after the list):

1. Understanding the problem and the data. This step depends on the specific domain, such as conservation biology and ecology, epidemiology, spatial planning, natural resource management, climate monitoring, and predicting hazardous events.
2. Data collection and feature engineering. Pre-processing data from different domains involves collecting ground-truth data from specific locations and combining it with relevant environmental features such as, for instance, Earth observation images and weather and climate patterns.
3. Model selection. The choice of model depends on the characteristics of the target feature, the specificity of the task, and the available resources.
4. Model training. Training the model involves optimizing hyperparameters to fit the data type and shape.
5. Accuracy evaluation. Appropriate accuracy scores are selected based on the task, with a focus on controlling overfitting. The model's performance is best evaluated using "gold standard" data with expert annotations.
6. Model deployment and inference. This involves building maps with spatial predictions for the region of interest and determining the level of certainty of the model's estimations.
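To make the listed steps concrete, the following is a minimal, hypothetical sketch of steps 2-6 for a regression task in scikit-learn; the file names, feature columns, and target are placeholders rather than a real dataset, and a plain random split stands in for the spatially aware validation discussed later in the review.

```python
# A hypothetical end-to-end sketch of steps 2-6; file and column names are
# placeholders, not a real dataset.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Step 2: ground-truth samples joined with environmental covariates.
df = pd.read_csv("samples_with_covariates.csv")
features = ["elevation", "ndvi", "precipitation", "temperature"]
X, y = df[features], df["soil_organic_carbon"]

# Steps 3-4: model selection and training (a random split is used only for
# brevity; spatial cross-validation is covered in Section 3.3.4).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)

# Step 5: accuracy evaluation on held-out samples.
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))

# Step 6: inference over the region of interest (a covariate grid).
grid = pd.read_csv("covariate_grid.csv")            # one row per map cell
grid["prediction"] = model.predict(grid[features])
```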
For data-based modelling tasks, including mapping, various classic machine learning (ML) algorithms [52] and deep learning (DL) algorithms [15, 16, 53, 54] are used. The choice of algorithm depends on the type of target variable. Classification algorithms are employed for predicting categorical target variables, as in land cover and land change mapping [15, 16], cropland and crop type mapping [52, 55], the identification of pollution sources [56], mapping pollutant impact to distinguish free and affected lands [57], landslide [53] and wildfire [28] susceptibility mapping, and habitat suitability mapping [58]. Regression algorithms are used to forecast the distribution of continuous target variables -- for instance, the prediction of the geospatial distribution of important soil features such as soil carbon characteristics [27], groundwater potential and quality assessment [59, 60], and vegetation characteristics such as forest height [61] and biomass [62]. Handling data and interpreting results at each step of obtaining spatial predictions can be complex, leading to low-quality predictions and misleading interpretations. Therefore, careful control using adopted approaches and metrics is necessary.

## 2 Imbalanced data

### Problem statement

The problem of imbalanced data is one of the most relevant issues in environment-related research focused on the spatial capture of target events or features. Imbalance occurs when the number of samples belonging to one class or classes (the majority class[es]) significantly surpasses the number of objects in another class or classes (the minority class[es]) [63, 64]. Although high imbalance is one of the basic properties of the real world, most models assume a uniform data distribution and complete input information. Thus, a nonuniform input data distribution poses difficulties when training models. Minority class occurrences are infrequent, and classification rules that predict the small classes are usually rare, overlooked, or ignored. As a result, test samples belonging to the minority classes are misclassified more frequently than test samples from the predominant classes.

In geospatial modelling, one of the most frequent challenges is dealing with sparse or nonexistent data in certain regions or classes [65, 66, 67, 68, 69]. This issue arises from the high cost of data collection and storage, methodological challenges, or the rarity of certain phenomena in specific regions. For instance, forecasting habitat suitability for species -- species distribution modelling (SDM) -- is a common task in conservation biology, and it relies on ML methods, often involving binary classification of species abundance. Although well-known sources such as the GBIF (Global Biodiversity Information Facility) database [70] provide numerous species occurrence records, absence records are few, and it is additionally difficult to establish such locations from a methodological point of view [71]. Likewise, anomaly detection and mapping, particularly relevant for ecosystem degradation monitoring, often involves the challenge of overcoming imbalanced data -- for example, in pollution cases, such as oil spills occurring on both land and water surfaces. Accurate detection and segmentation of oil spills with image analysis is vital for effective leak cleanup and environmental protection. But, despite the regular collection of Earth surface images by various satellite missions, there are significantly fewer scenes of oil spills than images of clean water [72, 73]. Similarly, the detection and mapping of hazardous events, such as wildfires, suffer from the same problem [66].

Figure 2: General workflow for tasks involving the geospatial modelling process and common issues relevant to each stage.
In classic research, Weiss and Provost [74] examined the relationship between decision trees' classification abilities and the class distribution of the training data and demonstrated that a relatively balanced distribution generally yields better results than an imbalanced one. The sample size plays a critical role in assessing the accuracy of a classification model in the presence of class imbalance. When the degree of imbalance remains constant, a limited sample size raises concerns about discovering the inherent patterns of the minority class. Experimental findings suggest that the significant error rate caused by an imbalanced class distribution decreases as the training set size increases [75]. This observation aligns logically, because having more data provides the classification model with a better understanding of the minority class, enabling differentiation between rare samples and the majority. According to Japkowicz [75], if a sufficiently large dataset is available and the training time for such a dataset is acceptable, an imbalanced class distribution may not hinder the construction of an accurate classification model [76].

### Approaches to measuring the problem of imbalanced data

Various approaches quantify class imbalance. One method is to examine the class distribution ratio directly, which can be as extreme as 1:100, 1:1000, or even more in real-world scenarios. The minority class percentage (MCP) calculates the percentage of instances in the minority class. The Gini index (GI) measures inequality among classes, indicating imbalance [77]. Shannon entropy (SE) is another way to measure the non-uniformity of the data and can be linked to imbalance through the entropy of the class distribution [77]. The Kullback-Leibler (KL) divergence measures the contrast between probability distributions; thus, it shows how close the observed class distribution is to a hypothetical balanced distribution [78]. Higher values of GI and KL indicate a higher imbalance, while for SE it is lower values that signal a more imbalanced distribution.

In dealing with class imbalance, it is crucial to use appropriate quality metrics to reflect model performance accurately. Standard accuracy may mislead, especially when there is a significant class imbalance -- for example, a model that always predicts the major class yields a high accuracy but performs poorly for the minority class [79]. The F1 score, combining precision and recall, is a better alternative and is commonly used for imbalanced data, particularly for the minority class. Another useful metric is the G-mean, which balances sensitivity and specificity and provides a more reliable performance assessment, especially on imbalanced datasets [75, 80].
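As a small illustration of the measures above, the sketch below computes the class ratio, minority class percentage, Gini index (here the Gini coefficient of the class distribution), Shannon entropy, and KL divergence for a synthetic 9:1 label vector.

```python
# Quantifying the imbalance of a synthetic label vector (9:1) with the
# measures discussed above.
import numpy as np

y = np.array([0] * 900 + [1] * 100)
_, counts = np.unique(y, return_counts=True)
p = counts / counts.sum()             # observed class distribution
u = np.full_like(p, 1.0 / p.size)     # hypothetical balanced distribution

print("class ratio      :", counts.min() / counts.max())           # ~0.11 (1:9)
print("minority class % :", 100.0 * counts.min() / counts.sum())   # 10.0
# Gini coefficient of the class shares: 0 for balanced classes,
# approaching (k-1)/k for extreme imbalance (higher -> more imbalanced).
gini = np.abs(p[:, None] - p[None, :]).sum() / (2 * p.size ** 2 * p.mean())
print("Gini coefficient :", gini)
print("Shannon entropy  :", -np.sum(p * np.log2(p)))   # lower -> more imbalanced
print("KL(p || uniform) :", np.sum(p * np.log2(p / u)))  # higher -> more imbalanced
```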
### Solutions to improve geospatial modelling for imbalanced data

Various reviews address imbalanced data in ML in general [64, 76, 77, 81]; approaches relevant to geospatial modelling are also worth discussing. Approaches to tackling imbalanced data problems in geospatial prediction tasks can be divided into data-level, model-level, and combined techniques.

#### 2.3.1 Data-level approaches

**Numerical data.** In terms of working with the data itself, the class imbalance problem can be addressed by modifying the training data through resampling techniques. There are two main ideas: oversampling the minority class and undersampling the majority class [76, 78, 82, 83]. These techniques can be applied randomly or in an informative way. For instance, in SDM, random oversampling is often chosen to create new minority class samples (e.g., species absences) [84], while random undersampling is used to balance the class distribution, particularly for species occurrences [85]. Informative oversampling may involve generating artificial minority samples based on geographic distance. For instance, in SDM, pseudoabsence generation can be performed using the biomod2 R package [84] with a 'disk' option based on geographic distance. Informative undersampling can involve thinning the majority class by deleting geographically close points, which can be done with the spThin R package [85]. In Figure 3, we illustrate the issue of imbalanced data and present solutions, including oversampling and undersampling techniques.

More complex methods for handling imbalanced data involve adding artificial objects to the minority class or modifying samples in a meaningful way. One popular approach is the synthetic minority oversampling technique (SMOTE) [78], which combines oversampling of the minority class with undersampling of the majority class. SMOTE creates new samples by linearly interpolating between minority class samples and their K-nearest minority class neighbours. SMOTE has recently seen various modifications [89, 90]. Since there are more than 100 SMOTE variants in total [91], here we focus on those relevant to geospatial modelling. One widely used method for oversampling the minority class is the Adaptive Synthetic sampling approach for imbalanced learning (ADASYN) [92, 93, 94, 95]. ADASYN uses a weighted distribution that considers the learning difficulties of distinct instances within the minority class, generating more synthetic data for challenging instances and fewer for less challenging ones [96]. To address the potential overgeneralization in SMOTE [77, 89], Borderline-SMOTE was proposed. It concentrates on minority samples that are close to the decision boundary between classes; these samples are considered more informative for improving the performance of the classification model on the minority class. Two techniques, Borderline-SMOTE1 and Borderline-SMOTE2, have been proposed, outperforming SMOTE in terms of suitable model performance metrics, such as the true-positive rate and the F-value [97]. Another approach is the Majority Weighted Minority Oversampling Technique (MWMOTE), which assigns weights to hard-to-learn minority class samples based on their Euclidean distance from the nearest majority class samples [98]. The algorithm involves three steps: selecting informative minority samples, assigning selection weights, and generating synthetic samples using clustering. MWMOTE consistently outperformed techniques such as SMOTE, ADASYN, and RAMO in various performance metrics, including accuracy, precision, F-score, G-mean, and the Area Under the Receiver Operating Characteristic (ROC) Curve (AUC) [98]. As for the limitations of the discussed data-level approaches, oversampling and undersampling, widely used as they are, may lead to overfitting and introduce bias into the data [77, 91]. Additionally, these techniques do not address the root cause of class imbalance and may not generalise well to unseen data [99, 76].

**Image data.** Computer vision techniques applied to Earth observation tasks have gained popularity and now play a pivotal role in the analysis of remote sensing data [100, 101, 102, 103, 104, 105]. However, class imbalance may remain in the data even after applying these oversampling techniques [112].
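For tabular data, several of the resampling techniques above are available through the imbalanced-learn Python package; the sketch below applies them to a synthetic dataset, and all samplers share the same fit_resample interface (MWMOTE has no implementation there and is omitted).

```python
# Random and informed resampling of a synthetic imbalanced dataset with
# imbalanced-learn.
from collections import Counter
from imblearn.over_sampling import ADASYN, SMOTE, BorderlineSMOTE, RandomOverSampler
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
print("original:", Counter(y))

samplers = [RandomOverSampler(random_state=0),
            RandomUnderSampler(random_state=0),
            SMOTE(k_neighbors=5, random_state=0),
            BorderlineSMOTE(random_state=0),    # focuses near the class boundary
            ADASYN(random_state=0)]             # adapts to learning difficulty
for sampler in samplers:
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))
```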
#### 2.3.2 Model-level approaches

**Cost-sensitive learning.** Cost-sensitive learning involves considering the different costs associated with classifying data points into various categories. Instead of treating all misclassifications equally, it takes into account the consequences of the different types of errors. For example, it recognises that misclassifying a rare positive instance as negative (the more prevalent class) is generally more costly than the reverse scenario. In cost-sensitive learning, the goal is to minimise both the total cost resulting from incorrect classifications and the number of expensive errors. This approach helps prioritise the accurate identification of important cases, such as rare positive instances, in situations where class imbalance is a concern [113]. Cost-sensitive learning finds application in spatial modelling, in scenarios involving imbalanced datasets, and in situations where the impact of misclassification varies among different classes or regions. Several studies have shown it to be effective in this context [114, 115, 116].

**Boosting.** Boosting algorithms are commonly used in geospatial modelling because they are superior in handling tabular spatial data and in addressing imbalanced data [117, 118, 119, 120, 121]. They effectively manage both bias and variance in ensemble models. Ensemble methods such as Bagging or Random Forest reduce variance by constructing independent decision trees, thus reducing the error that emerges from the uncertainty of a single model. In contrast, AdaBoost and gradient boosting train models consecutively and aim to reduce the errors of the existing ensemble. AdaBoost gives each sample a weight based on its significance and therefore assigns higher weights to samples that tend to be misclassified, effectively resembling resampling techniques. In cost-sensitive boosting, the AdaBoost approach is modified to account for the varying costs associated with different types of errors. Rather than solely aiming to minimise errors, the focus shifts to minimising a weighted combination of these costs. Each type of error is assigned a specific weight, reflecting its importance in the context of the problem. By assigning higher weights to errors that are more costly, the boosting algorithm is guided to prioritise reducing those particular errors, resulting in a model that is more sensitive to the associated costs [76]. This modification results in three cost-sensitive boosting algorithms: AdaC1, AdaC2, and AdaC3. After each round of boosting, the weight update parameter is recalculated, incorporating the cost items into the process [121, 122]. In cost-sensitive AdaBoost techniques, the weight of a False Negative is increased more than that of a False Positive. The AdaC2 and AdaCost methods can, however, decrease the weight of a True Positive more than that of a True Negative. Among these methods, AdaC2 was found to be superior for its sensitivity to cost settings and its better generalisation performance with respect to the minority class [76].

#### 2.3.3 Combining model-level and data-level approaches

Modifications of the discussed techniques can be combined as well. For instance, several techniques combine boosting and SMOTE approaches to address imbalanced data. One such method is SMOTEBoost, which synthesises samples from the underrepresented class using SMOTE and integrates this with boosting.
By increasing the representation of the minority class, SMOTEBoost helps the classifier learn better decision boundaries, and boosting emphasises the significance of minority class samples for correct classification [123, 120, 78]. As for limitations, SMOTE is a complex and time-consuming data sampling method, and SMOTEBoost exacerbates this issue, as boosting involves training an ensemble of models, resulting in extended training times for multiple models. Another approach is RUSBoost, which combines RUS (Random Under-Sampling) with boosting. It reduces the time needed to build a model, which is crucial when ensembling is involved, and mitigates the information-loss issue associated with RUS [124]: data that might be lost during one boosting iteration will probably be present when training models in the following iterations.

Despite being common practice for addressing class imbalance, creating ad hoc synthetic instances of the minority class has some drawbacks. For instance, in high-dimensional feature spaces with complex class boundaries, calculating distances to find nearest neighbours and performing interpolation can be challenging [77, 78]. To tackle data imbalance in classification, generative algorithms can be beneficial. For instance, a framework combining generative adversarial networks and domain-specific fine-tuning of CNN-based models has been proposed for categorising disasters using a series of synthesised, heterogeneous disaster images [125]. SA-CGAN (Synthetic Augmentation with Conditional Generative Adversarial Networks) employs conditional generative adversarial networks (CGANs) with self-attention techniques to create high-quality synthetic samples [126]. By training a CGAN with self-attention modules, SA-CGAN creates synthetic samples that closely resemble the distribution of the minority class, successfully capturing long-range interactions. Another variation of GANs, EID-GANs (Extremely Imbalanced Data Augmentation Generative Adversarial Nets), focuses on severely imbalanced data augmentation and employs conditional Wasserstein GANs with an auxiliary classifier loss [127].
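Of the model-level and combined approaches above, two are readily reproducible with common Python packages and are sketched below on synthetic data: cost-sensitive learning via class weights in scikit-learn, and RUSBoost as implemented in imbalanced-learn (AdaC1-AdaC3, SMOTEBoost, and the GAN-based methods have no standard implementation there).

```python
# Cost-sensitive learning via class weights and boosting combined with
# random under-sampling (RUSBoost), compared on synthetic imbalanced data.
from imblearn.ensemble import RUSBoostClassifier
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Misclassifying the rare positive class is treated as 19x more costly.
models = [RandomForestClassifier(class_weight={0: 1, 1: 19}, random_state=0),
          RUSBoostClassifier(random_state=0)]
for model in models:
    model.fit(X_tr, y_tr)
    print(type(model).__name__, "minority-class F1:",
          round(f1_score(y_te, model.predict(X_te)), 3))
```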
## 3 Autocorrelation

### Problem statement

Autocorrelation is a statistical phenomenon in which the value at a data point is influenced by the values at its neighbouring data points. In the context of environmental research, autocorrelation is frequently observed as a result of the spatial continuity of natural phenomena, such as temperature, precipitation, or species occurrence patterns. However, the data-driven approaches applied to spatial prediction tasks assume independence among observations. If spatial autocorrelation (SAC) is not properly addressed, the geospatial analysis may result in misleading conclusions and erroneous inferences. Consequently, the significance of research findings may be overestimated, potentially affecting the validity and reliability of predictions [128, 129]. On the contrary, there are environment-related tasks where autocorrelation is exploited as an interdependence pattern between spatially distributed data rather than mitigated. For instance, based on an assessment of SAC capturing regional spatial patterns in LULC changes, a decision-support framework was developed that considers land protection schemes, adapted financial investment, and greenway construction projects supporting habitats [130]. Other examples are the enhancement of a landslide early warning system by introducing susceptibility-related areas based on the autocorrelation of landslide locations with rainfall variables [131], and an approach to assessing the spatiotemporal variations of vegetation productivity based on SAC indices, valuable for integrated ecosystem management [132].

**Spatial autocorrelation.** While the definition of SAC varies, in general it integrates the principle that geographic elements are interlinked according to how close they are to one another, with the degree of connectivity fluctuating as a function of proximity, echoing the fundamental law of geography [133, 134]. Essentially, SAC outlines the extent of similarity among values of a characteristic at diverse spatial locations, providing a foundation for recognising and interpreting patterns and connections throughout different geographic areas (Figure 4). Spatial processes exhibit the characteristics of spatial dependence and spatial heterogeneity, each bearing significant implications for spatial analysis:

* Spatial dependence. This phenomenon denotes the autocorrelation amidst observations, which contradicts the conventional assumption of residual independence seen in methods such as linear regression. One approach to circumvent this is spatial regression.
* Spatial heterogeneity. Arising from non-stationarity in the processes generating the observed variable, spatial heterogeneity undermines the effectiveness of constant linear regression coefficients. Geographically weighted regression offers a solution to this issue [135, 136].

Numerous studies have ventured into exploring SAC and its mitigation strategies in spatial modelling. There exists a consensus that spatially explicit models supersede non-spatial counterparts in most scenarios by considering spatial dependence [137]. However, the mechanisms driving these disparities in model performance and the conditions that exacerbate them warrant further exploration [138, 139, 140, 141]. A segment of the academic community contests the incorporation of autocorrelation in mapping, attributing potential positive bias in estimates as a consequence and advocating its application only for significantly clustered data [38].

**Residual spatial autocorrelation** (rSAC) manifests itself not only in the original data but also in the residuals of a model. Residuals quantify the deviation between observed and predicted values within the modelling spectrum. Consequently, rSAC evaluates the spatial autocorrelation present in the variance that the explanatory variables fail to account for. Grasping the distribution of residuals is vital in regression modelling, given that it underpins assumptions such as linearity, normality, equal variance (homoscedasticity), and independence, all of which hinge on error behaviour [137].

### Approaches to measuring the problem of spatial autocorrelation

Logically, the first step is to determine whether SAC is likely to affect the planned analysis -- that is, whether the model residuals display SAC -- before considering modelling techniques that account for geographical autocorrelation. Checking for SAC has become commonplace in geography and ecology [43, 143]. Among the methods used are 1) Moran's correlogram, 2) Geary's correlogram, and 3) the variogram (semi-variogram) [144]. The main idea of checking for SAC is to test whether nearby locations tend to be more similar to each other than expected under randomness alone [145].
Moran's I and Geary's C are measures used to analyse spatial autocorrelation in data [146]. Moran's I, ranging from -1 to +1, identifies general patterns within the entire dataset: values near +1 indicate clusters of similar values, -1 suggests adjacent dissimilar values, and 0 represents a random pattern. In contrast, Geary's C, ranging from 0 to +2, is sensitive to local variations, with 0 indicating positive autocorrelation, 2 negative autocorrelation, and 1 a random pattern. While Moran's I is preferred for analysing global patterns, Geary's C is useful for detecting local patterns [147]. Correlograms based on Moran's I typically exhibit a decline from a certain level of SAC to a value of 0 or even lower, signifying an absence of SAC beyond specific distances between locations. Essentially, a value of 0 or below suggests no observable SAC, or a random spatial distribution of the variable under consideration. Similarly, for Geary's C, a value near 1 indicates an absence of SAC or spatial randomness, suggesting that the spatial distribution of the variable is akin to what might be expected if it were randomly distributed, whereas values well below 1 indicate positive SAC -- similarity or clustering of values at nearby locations -- and values above 1 indicate negative SAC [143].

One of the crucial mathematical tools to assess the spatial variability and dependence of a stochastic variable is the variogram. Its primary purpose is to measure how the values of a variable change as the spatial separation between sampled locations increases. The variogram is mathematically defined as one-half of the variance of the differences observed between pairs of random variables at distinct locations, expressed as a function of the spatial separation between those locations. In precise terms, for a spatial variable \\(Z\\) and a separation (lag) vector \\(\\mathbf{h}\\),

\\[\\gamma(\\mathbf{h})=\\frac{1}{2}\\,\\mathrm{Var}\\left[Z(\\mathbf{s}+\\mathbf{h})-Z(\\mathbf{s})\\right].\\]

In simpler terms, it quantifies the extent of dissimilarity or variation between pairs of observations at different spatial distances. The shape of the variogram cloud brings valuable insights into the spatial structure of the studied variable. Commonly employed variogram models, such as the spherical, exponential, or Gaussian models, can be fitted to the scatter plot to estimate the parameters that characterise the spatial dependence. The variogram holds significant importance in geostatistics and finds diverse applications, including spatial interpolation, prediction, and mapping of environmental variables such as soil properties, pollutant concentrations, and geological features. By comprehending the spatial structure through variogram analysis, researchers and practitioners can make more informed decisions and accurate predictions in fields such as geology, hydrology, environmental science, and related disciplines.
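A self-contained sketch of the two global statistics is given below, using a binary k-nearest-neighbour spatial weight matrix on synthetic data with a west-east trend; dedicated packages such as libpysal/esda provide the same statistics together with significance tests.

```python
# Global Moran's I and Geary's C with a binary k-nearest-neighbour weight
# matrix; coordinates and values are synthetic.
import numpy as np

def knn_weights(coords, k=8):
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    w = np.zeros_like(d)
    rows = np.arange(len(coords))[:, None]
    w[rows, np.argsort(d, axis=1)[:, :k]] = 1.0   # 1 for the k nearest points
    return w

def morans_i(x, w):
    z = x - x.mean()
    return len(x) / w.sum() * (z @ w @ z) / (z @ z)

def gearys_c(x, w):
    z = x - x.mean()
    diff2 = (x[:, None] - x[None, :]) ** 2
    return (len(x) - 1) / (2.0 * w.sum()) * (w * diff2).sum() / (z @ z)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(300, 2))
x = coords[:, 0] / 100 + rng.normal(0, 0.2, 300)  # west-east trend -> positive SAC
w = knn_weights(coords)
print(f"Moran's I = {morans_i(x, w):.3f}")        # > 0: clustering of similar values
print(f"Geary's C = {gearys_c(x, w):.3f}")        # < 1: positive SAC
```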
### Solutions to overcome SAC and rSAC

The most common ways to eliminate the influence of SAC in the data on prediction quality are the following:

1. proper sampling design
2. careful feature selection
3. model selection
4. spatial cross-validation

#### 3.3.1 Sampling design

SAC's influence manifests in its capacity to delineate significance levels, demarcate discernible disparities in attribute measures across diverse populations, and elucidate attribute variability [148]. An amplified presence of SAC in georeferenced datasets invariably leads to an augmentation of redundant or duplicate information [149]. This redundancy stems from two primary sources: geographic patterns informed by shared variables, or the consequences of spatial interactions, typically characterised as geographic diffusion. Exploring the details of sampling in relation to SAC reveals many layers of understanding:

* The employment of diverse stratification criteria elicits heterogeneous impacts upon the amplitude of SAC [150].
* The soil sampling density and SAC critically influence the veracity of interpolation methodologies [151].
* Empirical findings suggest that sampling paradigms characterised by heterogeneous sampling intervals -- notably random and systematic-cluster designs -- demonstrate enhanced efficacy in discerning spatial structures compared with purely systematic approaches [152].

Figure 4: The difference in spatial autocorrelation in geochemical maps from a USGS Open-File Report [142]. A) There appears to be a strong positive spatial autocorrelation, with high concentrations (in red) and low concentrations (in blue) clustered together. B) The bismuth map shows more scattered and less distinct clustering, indicating weaker spatial autocorrelation; the central and eastern regions show interspersed high and low values, suggesting negative or weaker spatial autocorrelation.

The size of the sample also plays a key role in spatial modelling. In quantitative studies, it affects how broadly the results can be applied and how the data can be handled. In qualitative studies, it is crucial for establishing that results can be applied in other contexts and for discovering new insights [153]. The relationship between SAC and the optimal sample size in quantitative research has been a popular topic, leading to many studies and discussions [154, 149, 155]. In remote sensing, the main goal is often to use spectral data to predict attributes of places that have not been sampled. Regular sampling methods are usually best for this, and using close pairs of points in a regular design may make predictions more accurate. But these designs do not work as well in all situations: spatially detailed models are good for places with clear spatial patterns but do not adapt well to places with different patterns. Importantly, if the sampling design creates distances that match the natural spacing in the area, the predictions might be less certain [156].

#### 3.3.2 Variable selection

Spatial autocorrelation can be influenced significantly by the selection and treatment of variables within a dataset. Several traditional methodologies, encompassing feature engineering, mitigation of multicollinearity, and spatial data preprocessing, present viable avenues for addressing SAC-related challenges. One notable complication arises from multicollinearity amongst the selected variables, which can potentiate SAC [157]. Indications of multicollinearity are discernible through diagnostic tools such as correlation matrices and variance inflation factors. To counteract multicollinearity, strategies encompassing the elimination of highly correlated variables and the application of dimensionality-reduction techniques such as principal component analysis (PCA) can be employed; a brief sketch is given below. A judicious selection of pertinent variables, complemented by the development of novel variables hinged on domain expertise and exploratory data analysis, may further attenuate the manifestation of SAC.
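The sketch below illustrates the two diagnostics just mentioned on synthetic, deliberately collinear covariates: variance inflation factors computed with statsmodels, followed by PCA as one mitigation option.

```python
# Diagnosing multicollinearity with variance inflation factors (VIF) and
# mitigating it with PCA; the covariates are synthetic and deliberately
# collinear (temperature is driven by elevation).
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.decomposition import PCA
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
elevation = rng.normal(500, 100, 400)
X = pd.DataFrame({
    "elevation": elevation,
    "temperature": 30 - 0.006 * elevation + rng.normal(0, 0.2, 400),
    "precipitation": rng.gamma(2.0, 50.0, 400),
})

Xc = sm.add_constant(X)
for i, name in enumerate(Xc.columns[1:], start=1):  # VIF > 10 is a common warning sign
    print(name, round(variance_inflation_factor(Xc.values, i), 1))

# One mitigation option: project onto uncorrelated principal components.
scores = PCA(n_components=2).fit_transform((X - X.mean()) / X.std())
print(np.corrcoef(scores.T).round(3))               # off-diagonal ~ 0
```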
Another approach to addressing this challenge is to consider rSAC across diverse variable subsets and then apply classical model selection criteria such as the Akaike information criterion [158]. It is, however, important to recognise that the Akaike information criterion retains its efficacy in the context of rSAC only when the independent variables do not themselves exhibit spatial autocorrelation [159]. In ML and DL, emerging methodologies have embraced spatial autocorrelation as an integral component. For instance, when curating datasets for training Long Short-Term Memory (LSTM) networks, an optimal SAC variable was identified and integrated into the dataset [160]. Furthermore, spatial features, namely spatial lag and eigenvector spatial filtering (ESF), have been introduced into models to account for spatial autocorrelation [161]. A novel set of features, termed the Euclidean distance field (EDF), has been designed based on the spatial distance between query points and observed boreholes; it aims to weave spatial autocorrelation into the fabric of ML models, further underscoring the significance of variable selection in spatial studies [162].

#### 3.3.3 Model selection

Selecting or enhancing models to mitigate the impact of SAC is crucial. Spatial autoregressive models (SAR), especially simultaneous autoregressive models, are effective in this regard [163]. SAR may stand for either spatial autoregressive or simultaneous autoregressive models; regardless of terminology, SAR models allow spatial lags of the dependent variable, spatial lags of the independent variables, and spatially autoregressive errors. Spatial error models (SEMs) incorporate spatial dependence either directly or through error terms, handling SAC with geographically correlated errors. Other approaches include auto-Gaussian models for fine-scale SAC [164]. Spatial Durbin models further improve upon these by considering both direct and indirect spatial effects on dependent variables [165]. Additionally, geographically weighted regression (GWR) offers localised regression, estimating coefficients at each location based on nearby data [166].

In the context of SDM, six statistical methodologies have been described to account for SAC in model residuals for both presence/absence (binary response) and species abundance data (Poisson or normally distributed response) [143]: autocovariate regression, spatial eigenvector mapping, generalised least squares (GLS), (conditional and simultaneous) autoregressive models, and generalised estimating equations. Spatial eigenvector mapping creates spatially correlated eigenvectors to capture and adjust for spatial autocorrelation effects [167]. GLS extends ordinary least squares by considering a variance-covariance matrix to address spatial dependence [168]. Spatial Bayesian methods have also gained popularity for overcoming SAC: Bayesian spatial autoregressive (BSAR) models and Bayesian spatial error (BSEM) models explicitly account for SAC by incorporating a spatial dependency term and a spatially structured error term, respectively, to capture indirect spatial effects and unexplained spatial variation [169]. In recent years, the popularity of autoregressive models as a core method for spatial modelling has slightly decreased, while classical ML and DL methods have been extensively employed for spatial modelling tasks.
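As a small illustration of the autoregressive family discussed above, the sketch below fits a spatial-lag model with PySAL's spreg module; the weights, covariates, and variable names are synthetic placeholders rather than a reproduction of any cited study.

```python
# A minimal sketch of fitting a spatial-lag (SAR) model with PySAL's spreg;
# all data and variable names are synthetic placeholders.
import numpy as np
from libpysal.weights import lat2W
from spreg import ML_Lag

rng = np.random.default_rng(2)
n_side = 20
w = lat2W(n_side, n_side)
w.transform = "r"                 # row-standardise the spatial weights

x = rng.normal(size=(n_side * n_side, 2))                # two synthetic covariates
y = (x @ np.array([1.5, -0.7])
     + rng.normal(size=n_side * n_side)).reshape(-1, 1)  # synthetic response

model = ML_Lag(y, x, w, name_y="response", name_x=["cov1", "cov2"])
print(model.rho)       # strength of the estimated spatial dependence term
print(model.summary)   # full regression table, including spatial diagnostics
```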
Consequently, various techniques have been developed to leverage SAC effectively. A common approach is to incorporate SAC through autoregressive models during dataset preparation and variable selection, as presented in greater detail in subsection 3.3.2. Combining geostatistical methods with ML is also gaining popularity: one example is the use of an artificial neural network (ANN) to simulate a nonlinear large-scale trend, with the residuals subsequently modelled by geostatistical methods [170].

#### 3.3.4 Spatial cross-validation

Spatial cross-validation is a widely used technique to account for SAC in various research studies [171, 172, 173, 128]. Neglecting SAC in spatial data can introduce an optimistic bias in the results; this issue has been highlighted repeatedly, emphasising the importance of accounting for spatial dependence to obtain more accurate and unbiased assessments of model performance [174, 175, 176, 145]. For instance, it was shown [171] that random cross-validation can yield estimates up to 40 percent more optimistic than spatial cross-validation.

The main idea of spatial cross-validation is to split the data into blocks around central points of the spatial dependence structure [175], ensuring that the validation folds are statistically independent of the training data used to build the model. By geographically separating validation locations from calibration points, spatial cross-validation techniques effectively achieve this independence [177]. Commonly employed methods include buffering, spatial partitioning, environmental blocking, or combinations thereof [175, 37]; these techniques aim to strike a balance between minimising SAC and avoiding excessive extrapolation, which can significantly affect model performance [175]. Buffering defines a distance-based radius around each validation point and excludes observations within this radius from model calibration. Environmental blocking groups data into sets with similar environmental conditions or clusters spatial coordinates based on input covariates [178]. Spatial partitioning, also known as spatial K-fold cross-validation, divides the geographic space into K spatially distinct subsets through spatial clustering or a coarse grid with K cells [175]; a minimal sketch of this strategy is given below.

However, it is worth mentioning an alternative view [38] arguing that neither standard nor spatial cross-validation procedures can be considered unbiased for estimating the accuracy of mapping results, with the very concept of spatial cross-validation heavily criticised. According to those results, map accuracy was overestimated for clustered data under standard cross-validation and severely underestimated under the chosen spatial cross-validation strategies. Instead, probability sampling and design-based inference were suggested to obtain unbiased estimates of map accuracy in large-scale studies. Another concern raised is the need to articulate more clearly what validating a mapping model means, contrasting validation of the model with validation of the resulting map.
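The sketch below illustrates the spatial K-fold idea referred to above: sample coordinates are clustered into spatially contiguous blocks, which are then used as grouped cross-validation folds. The data, the model, and the number of blocks are illustrative assumptions.

```python
# A minimal sketch of spatial K-fold cross-validation: sample locations are
# clustered into spatially contiguous blocks used as CV folds.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(3)
coords = rng.uniform(0, 100, size=(500, 2))   # x/y locations of the samples
X = rng.normal(size=(500, 5))                 # synthetic environmental covariates
y = X[:, 0] + rng.normal(size=500)            # synthetic response

# Spatial blocks via clustering of the coordinates; each block becomes a fold.
blocks = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(coords)
scores = cross_val_score(
    RandomForestRegressor(n_estimators=100, random_state=0),
    X, y, groups=blocks, cv=GroupKFold(n_splits=5),
)
# With real spatially structured data, these scores are typically lower,
# and more honest, than those from random K-fold cross-validation.
print(scores.mean())
```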
In summary, spatial cross-validation techniques can be suitable for addressing SAC in data-based spatial modelling tasks, while a transparent, step-by-step description of how model accuracy is assessed and inference is obtained remains of high importance. The most suitable technique and its parameters should be chosen after careful consideration of the specific research problem and the corresponding dataset.

## 4 Uncertainty quantification

### Problem statement

Geospatial predictions using machine learning have become a convenient part of routine decision-making workflows. To ensure these predictions are reliable, it is crucial to assess the uncertainty associated with the model's forecasts: uncertainty quantifies the level of confidence the model has in its predictions (Figure 5). Two primary types of uncertainty exist: aleatoric uncertainty, which arises from noise and variability in the data, and epistemic uncertainty, which originates from limitations of knowledge [180]. Sources of uncertainty include incomplete or inaccurate data, misspecified models, inherent stochasticity in the simulated system, and gaps in our understanding of the underlying processes. Assessing aleatoric uncertainty, caused by noise, low spatial or temporal resolution, or other factors that cannot be accounted for, can be challenging; for that reason, most research focuses on epistemic uncertainty. Reducing uncertainty in ML models is essential for improving their reliability and accuracy.

### Solutions for uncertainty quantification

#### 4.2.1 Classical ML approaches

One of the common approaches to uncertainty quantification (UQ) in geospatial modelling is quantile regression [181]. It reveals not only the average relationship between variables but also how different quantiles (percentiles) of the dependent variable change with the independent variables; in other words, it characterises the whole conditional distribution rather than just the central tendency. Quantile regression is particularly useful when the data do not follow a normal distribution or contain outliers that could heavily influence the results. For instance, to quantify the uncertainty of models for nitrate pollution of groundwater, quantile regression and uncertainty estimation based on local errors and clustering (UNEEC) [182] were used [183]. Quantile regression was also used for UQ of four conventional ML models for digital soil mapping, where the authors analysed mean prediction intervals (MPI) and the prediction interval coverage probability (PICP) [184]. Another widely used technique for UQ is the bootstrap, a statistical resampling technique that creates multiple samples from the original data to estimate the uncertainty of a statistical measure [185, 186]. A further technique is mean-variance estimation (MVE), which simultaneously estimates both the mean (average) and the variance (spread) of the predictive distribution, describing the central tendency and variability of the data [187].

#### 4.2.2 Gaussian process regression

Gaussian process regression, also known as kriging, is commonly used for UQ in geospatial applications, providing a natural way to estimate the uncertainty associated with spatial predictions.
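A minimal sketch of this approach with scikit-learn is shown below; the coordinates, observations, and kernel choice are illustrative assumptions rather than a reproduction of any cited study.

```python
# A minimal sketch of Gaussian process regression for spatial prediction
# with uncertainty; all locations and observations are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(80, 2))             # sampled locations
obs = np.sin(coords[:, 0]) + 0.1 * rng.normal(size=80)

kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, obs)

# Predict on a regular grid; the returned std maps the predictive uncertainty.
grid = np.stack(
    np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50)), -1
).reshape(-1, 2)
mean, std = gpr.predict(grid, return_std=True)
print(mean.shape, std.max())
```

The predictive standard deviation grows away from the sampled locations, which is precisely the behaviour exploited when mapping uncertainty alongside spatial predictions.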
In a study focused on spatiotemporal modelling of soil moisture content with neural networks, the authors utilised sequential Gaussian simulations to estimate uncertainty and reduced the RMSE by 18% compared with the classical approach [188]. Another approach, known as lower upper bound estimation, was applied to estimate prediction intervals for sediment loads generated by neural networks [189]. For soil organic carbon mapping, researchers compared several methods, including sequential Gaussian simulation (SGS), quantile regression forest (QRF), universal kriging, and kriging coupled with random forest, concluding that SGS and QRF provide better uncertainty models based on accuracy plots and G-statistics [190]. In another soil mapping study, random forest demonstrated better prediction-uncertainty performance than kriging, although the predictions of regression kriging were more accurate, which may be related to the architecture of these models [191].

#### 4.2.3 Bayesian techniques

Another approach to estimating uncertainty in ML models is Bayesian inference [192]. In Bayesian methods, model parameters are treated as random variables with prior distributions, allowing uncertainty to be modelled explicitly; the posterior distribution of the parameters given the data and priors is used to estimate the uncertainty of model predictions. Geospatial modelling, with its complex relationships and spatial dependencies, poses particular challenges for uncertainty quantification. Bayesian techniques have been applied to various models, including neural networks, Gaussian processes, and spatial autoregressive models, to estimate uncertainty in predictions of variables such as temperature, air quality, and land use. The main methods for uncertainty quantification with Bayesian neural networks include Monte Carlo (MC) dropout [193], sampling via Markov chain Monte Carlo (MCMC) [194], and variational inference [195]. It should be noted, however, that most of these methods are specific to uncertainty quantification in DL and are not yet widely implemented in geospatial modelling [180].

Figure 5: Example of uncertainty quantification (UQ) for spatial mapping provided within the SoilGrids project [179]: a) map of one of the target variables, soil pH (water), in the topsoil layer; b) map of the associated uncertainty estimated using the prediction interval coverage probability (PICP) index for the same territory.

For instance, Bayesian techniques have been used in weather modelling, particularly wind speed prediction, and in hydrogeological calculations to analyse the risk of reservoir flooding [196]. Probabilistic modelling was employed to assess the uncertainty of spatiotemporal wind speed forecasting, with models based on spatiotemporal neural networks using convolutional GRUs and 3D CNNs; variational Bayesian inference was also utilised [197]. Similarly, Bayesian inference has been applied to estimate uncertainty in soil moisture modelling [198], and another study used it to model the spread of invasive species [199].

#### 4.2.4 Ensemble techniques

Model ensembling is a powerful technique used in geospatial modelling to address uncertainty. Geospatial models often deal with complex systems in which uncertainty arises from various sources, including input data, parametrisation, and modelling assumptions. Ensembles can help both to reduce and to estimate uncertainty.
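Before discussing applications, the core idea can be illustrated in a few lines: the sketch below, assuming scikit-learn and entirely synthetic data, treats the spread of predictions across the trees of a random forest as a simple uncertainty proxy.

```python
# A minimal sketch of ensemble-based uncertainty: the disagreement across
# the members of a random forest serves as a simple uncertainty proxy.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 4))                        # synthetic covariates
y = X[:, 0] ** 2 + rng.normal(scale=0.3, size=300)   # synthetic response

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
X_new = rng.normal(size=(10, 4))

# Collect per-member predictions; their mean is the ensemble prediction and
# their standard deviation quantifies the ensemble's disagreement.
member_preds = np.stack([tree.predict(X_new) for tree in forest.estimators_])
mean = member_preds.mean(axis=0)
std = member_preds.std(axis=0)
print(np.round(mean, 2), np.round(std, 2))
```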
The diversity of predictions across the members of an ensemble serves as a natural way to estimate uncertainty, as illustrated above. More robust and reliable estimates can also be obtained by combining predictions from multiple models through methods such as weighted averaging, stacking, or Bayesian model averaging. Ensembling helps mitigate uncertainties associated with individual models and provides a way to estimate uncertainty by computing the variance of predictions across the ensemble [200]. To address problems of spatial mapping such as equifinality, uncertainty, and conditional bias, an ensemble modelling and bias-correction framework was proposed; developed for soil mapping using the XGBoost model with environmental covariates as predictors, it was shown to resolve the equifinality problem in the dataset while demonstrating better performance [201]. Another example is a comparison of regional and global ensemble models for soil mapping [202]: ensembles of regional models performed as well as global models but exhibited less uncertainty. Ensembling approaches to UQ have also been applied in DL modelling tasks [203]. In another study, the authors proposed a system that combines ML models within a spatial ensemble framework to reduce uncertainty and enhance the accuracy of site index predictions [204]. For soil clay content mapping, the uncertainty of seven ML models and their ensembles was estimated [205]. Ensembling thus proves to be a valuable technique in geospatial modelling, as it leverages the collective knowledge of multiple models to improve predictions and provide more comprehensive uncertainty estimates.

### Solutions to address the uncertainty in spatial predictions

Several approaches can be used to reduce uncertainty, falling into two groups. The first group relates to the input data and involves increasing data quality, using more data from different domains, and applying feature engineering to select predictors highly relevant to the problem; this helps the model focus on the most informative aspects of the data, reducing uncertainty caused by irrelevant or redundant features. The second group concerns the modelling step and includes spatial and temporal cross-validation, model regularisation techniques to prevent overfitting, combining multiple models through techniques such as bagging or boosting, and more complex approaches such as Bayesian methods, Gaussian processes, and transfer learning, which are described above.

Visualisation methods for UQ in geospatial modelling hold a distinct place compared to other areas of ML. Researchers emphasise the significance of visually analysing maps with uncertainty estimates, especially for biodiversity and conservation policy tasks [206]. Visualisation techniques such as bivariate choropleth maps, map pixelation, and glyph rotation can be used to represent spatial predictions with uncertainty [207].

## 5 Practical tools

In summary, it is crucial to consider the specificity of environmental data and the intended outcomes when selecting appropriate approaches for analysis. Table 1 presents tools that support the methods discussed above, including packages and libraries for geospatial analysis. Given the dominance of Python and R as programming environments for data-based geospatial modelling, most of the tools listed are implemented within these languages.
It is worth noting that, although the libraries and packages discussed are widely used in both academia and industry, common ML tools available in Python and R cover much of their functionality and can replace these more specialised instruments with little loss in quality and utility when used by experienced data scientists.

Table 1: Geospatial data science tools in selected programming environments. (The first package name and the imbalanced-learn entry are inferred from their descriptions, as the original table rows were garbled.)

| Solution | Environment | Package/library | Description |
| --- | --- | --- | --- |
| Geospatial analysis: general tools | R | sp [208] | Reading and writing spatial data represented by points, lines, polygons and grids; producing spatial objects; performing spatial operations, e.g. plotting data as maps, spatial selection, retrieving coordinates, subsetting, print, summary. |
| | R | Simple Features (sf) [209] | Reading, writing and converting Simple Features; a set of tools for working with geospatial geometries represented by points, lines and polygons. |
| | R | raster [210] | Creating, reading, manipulating and writing raster data; can process very large datasets; raster algebra functions and high-level methods such as cropping and resampling. |
| | Python | geopandas [211] | Extends the datatypes used by pandas to allow spatial operations on geospatial vector data; reading and writing files, making maps and plots, data analysis and manipulations such as buffering, intersection and spatial joins. |
| | Python | rasterio [212] | Reading and writing gridded raster datasets; raster values can be extracted at precise points, and raster data can be warped and reprojected. |
| | Python | gdal [213] | Geospatial Data Abstraction Library for raster and vector data reading, creation, writing, transformation and analysis. |
| | Python | pysal [214] | A family of packages for spatial data science: tools to explore, visualise and estimate relationships in spatial data, with a focus on vector data; core functions include detection of spatial clusters, hot spots and outliers, exploratory spatiotemporal data analysis, and spatial regression and statistical modelling with inference. |
| General oversampling methods for the minority class | R | imbalance [215] | Oversampling techniques for the minority class. |
| | R | smotefamily [88] | A collection of oversampling techniques for the minority class developed from SMOTE, including ADASYN, Borderline-SMOTE and DBSMOTE. |
| General balancing of minority and majority classes | R | themis [216] | Collection of techniques for balancing the data, including variations of SMOTE, ADASYN, ROSE, and balancing of the majority class in a distance-based manner. |
| | Python | imbalanced-learn [217] | Over-sampling methods include SMOTE, SMOTENC, SMOTEN, ADASYN, BorderlineSMOTE, KMeansSMOTE and SVMSMOTE; under-sampling techniques include random under-sampling, algorithms based on the cluster centroids of a KMeans algorithm, the instance hardness threshold, and variations of the nearest-neighbour method. |
| Spatial oversampling methods for the minority class | R | biomod2 [218] | Generation of pseudo-absence data for presence-only records in several manners (a prevalence that differs from a defined proportion of the prevalent class could lead to over-optimistic results). |
| Spatial thinning | R | spThin [85] | Spatial thinning of species occurrence records, applicable to any other point-based spatial data; helpful for addressing problems associated with spatial sampling biases. |
| Measuring SAC | R | spdep [219] | Conducting spatial autocorrelation analysis, constructing spatial weight matrices, and visualising spatial dependence patterns; includes global and local Moran's I and Geary's C, the Hubert/Mantel general cross-product statistic, and empirical Bayes estimates. |
| | R | ncf [220] | Spatial analysis tool addressing spatial autocorrelation, modelling semi-variograms, computing spatial covariance functions, and performing geostatistical interpolation. |
| | Python | esda [214] | Exploratory spatial data analysis: Geary, Moran and silhouette statistics are available. |
| Spatial cross-validation | R | blockCV [178] | Toolbox for cross-validation in spatial modelling; includes tools for generating train and test folds for k-fold and leave-one-out cross-validation, for measuring spatial autocorrelation ranges in candidate covariates, and interactive graphical capabilities for creating spatial blocks and exploring data folds. |
| Uncertainty estimation | R | inlabru [221] | Spatial and general latent Gaussian modelling using integrated nested Laplace approximation; a prediction method based on fast Monte Carlo sampling allows posterior prediction of general expressions of the latent variables; provides Bayesian inference from spatial point process, spatial count, gridded, and georeferenced data. |
| | R | Vizumap [222] | Visualising uncertainty in spatial data by creating bivariate maps, pixel maps, glyph maps, and exceedance-probability maps. |
| | R | spup [223] | Examining uncertainty propagation from input data and model parameters through an environmental model onto the model outputs; functions include uncertainty model specification, stochastic simulation, and uncertainty propagation using Monte Carlo techniques; probability distributions describe the uncertain variables. |
| | Python | Uncertainty Toolbox [224] | A Python toolbox for predictive uncertainty quantification, calibration, metrics, and visualisations. |
| Spatial modelling | R | biomod2 [218] | Functions for modelling, calibration and evaluation, ensembles of models, ensemble forecasting and visualisation; models include random forest, boosted regression trees, support vector machines, artificial neural networks, and others. |
| | JavaScript/Python | Google Earth Engine [225] | Catalog of satellite imagery and geospatial datasets, with a collection of tools for data retrieval, geospatial analysis and modelling. |
| | R | sdmTMB [226] | Implements spatial and spatiotemporal generalized linear mixed-effects models. |

## 6 Key areas for focus and growth

Geospatial modelling has grown rapidly, driven by data-based models and the integration of ML and DL alongside traditional geospatial statistics. The previous sections highlighted common implementation gaps and approaches to address them. It is also worth exploring future developments and key opportunities concerning the challenges of data-driven geospatial modelling; below, we highlight the major points of growth that can lead to new seminal works in this area.

**New generation of datasets.** It is crucial to enhance data quality, quantity, and diversity to ensure reliable models. Establishing well-curated databases in environmental research is of utmost importance, as it drives scientific progress and industrial innovation; combined with modern tools, such databases can underpin powerful models. A particular area of interest is the collection of cost-effective and efficient semi-supervised data, which typically has limited labels; although currently underdeveloped, this data type holds significant potential for expansion and improvement. In computer vision and natural language processing, the superior quality of recently introduced models often comes from using larger and better datasets. The internal Google dataset JFT-3B, with nearly three billion labelled images, led to major improvements [29, 230]. Another major computer vision example is LVD-142M, with about 142 million images [231]; notably, that paper provides a pipeline that can extend the size of existing datasets by two orders of magnitude. In natural language processing, a recent important example is the training of large language models [232] on a preprocessed dataset of 2 trillion tokens. Closer to geospatial modelling is the adoption of climate data, where the increasing number of available measurements now also permits the application of DL models. For example, the SEVIR dataset [233] enabled better prediction via a variant of the Transformer architecture [234]. In [235], the authors developed a model for precipitation nowcasting trained on radar measurements on a grid with \(1 \times 1\) km cells, taken every 5 minutes over 3 years, amounting to around 1 TB of data. Furthermore, integrating diverse data sources offers a promising path forward.
Combining datasets from various domains can be beneficial, e.g., satellite imagery, meteorological and climatic data, and social data such as social media posts that provide real-time environmental information for specific locations. By developing multimodal models capable of processing these diverse data sources, the community can enhance model robustness and effectively address the challenges discussed in this study and the existing literature. Most current research combines the image and natural language modalities [236], but other combinations are possible.

**New generation of models.** The continuous advancement of technology has led to the emergence of more sophisticated data sources, including higher-resolution remote sensing and more accurate geolocation data, complemented by high-quality curated data produced through human effort. While beneficial, this presents challenges in adapting existing geospatial models to such data: traditional models may no longer be suitable or efficient, necessitating the development and validation of new models and computational methods. Incorporating DL methods is a potential solution, although they come with challenges related to interpretability and computational efficiency, especially when dealing with large volumes of data. We anticipate the emergence of self-supervised models trained on large semi-curated datasets for geospatial mapping in environmental research, similar to what has occurred in language modelling and computer vision. Such modelling approaches have already been applied to satellite images [237], including, for example, the estimation of the state of plants [102] and the assessment of damaged buildings in disaster-affected areas [238].

**Producing industry-quality solutions: deployment and maintenance.** After a model is constructed, it needs to be deployed in a production environment, where access to the necessary data and supporting services is crucial for safe and continuous operation. Another challenge is the ageing of data-based models caused by environmental factors such as a changing climate [239], shifts in data sources, or transformations of the output variables, e.g., alterations of land use and land cover [240]. Monitoring such changes is essential in order to either retire an outdated model or retrain it with new data [241]; the monitoring schedule can vary, guided by planned validation checks or triggered by data corruption as well as new business process implementations. Deployment and maintenance are often underestimated despite requiring significant resources and additional steps for long-term success [242]. Another area of possible growth is the development of new methods, including advanced DL methods; incorporating concept drift into the maintenance process is also an option [243].

## References

* [1] Giglio, L., Loboda, T., Roy, D. P., Quayle, B. & Justice, C. O. An active-fire based burned area mapping algorithm for the modis sensor. _Remote. sensing environment_**113**, 408-420 (2009).
* [2] Chuvieco, E. _et al._ Historical background and current developments for mapping burned area from satellite earth observation. _Remote. Sens. Environ._**225**, 45-64 (2019).
* [3] Mohajane, M. _et al._ Application of remote sensing and machine learning algorithms for forest fire mapping in a mediterranean area. _Ecol. Indic._**129**, 107869 (2021).
* [4] Uddin, K., Matin, M. A. & Meyer, F. J. Operational flood mapping using multi-temporal sentinel-1 sar images: A case study from bangladesh. _Remote. Sens._**11**, 1581 (2019).
* [5] Tarpanelli, A., Mondini, A. C. & Camici, S. Effectiveness of sentinel-1 and sentinel-2 for flood detection assessment in europe. _Nat. Hazards Earth Syst. Sci._**22**, 2473-2489 (2022).
* [6] Tavus, B., Kocaman, S. & Gokceoglu, C. Flood damage assessment with sentinel-1 and sentinel-2 data after sardoba dam break with glcm features and random forest method. _Sci. The Total. Environ._**816**, 151585 (2022).
* [7] Hoque, M. A.-A., Pradhan, B. & Ahmed, N. Assessing drought vulnerability using geospatial techniques in northwestern part of bangladesh. _Sci. The Total. Environ._**705**, 135957 (2020).
* [8] Lu, J., Carbone, G. J., Huang, X., Lackstrom, K. & Gao, P. Mapping the sensitivity of agriculture to drought and estimating the effect of irrigation in the united states, 1950-2016. _Agric. For. Meteorol._**292**, 108124 (2020).
* [9] Verstegen, J. A., van der Laan, C., Dekker, S. C., Faaij, A. P. & Santos, M. J. Recent and projected impacts of land use and land cover changes on carbon stocks and biodiversity in east kalimantan, indonesia. _Ecol. Indic._**103**, 563-575 (2019).
* [10] Jetz, W. _et al._ Essential biodiversity variables for mapping and monitoring species populations. _Nat. ecology & evolution_**3**, 539-551 (2019).
* [11] Moilanen, A., Kujala, H. & Mikkonen, N. A practical method for evaluating spatial biodiversity offset scenarios based on spatial conservation prioritization outputs. _Methods Ecol. Evol._**11**, 794-803 (2020).
* [12] Zuo, R., Xiong, Y., Wang, J. & Carranza, E. J. M. Deep learning and its application in geochemical mapping. _Earth-science reviews_**192**, 1-14 (2019).
* [13] Tapia, J. F. D., Doliente, S. S. & Samsatli, S. How much land is available for sustainable palm oil? _Land Use Policy_**102**, 105187 (2021).
* [14] Heinrich, V. H. _et al._ The carbon sink of secondary and degraded humid tropical forests. _Nature_**615**, 436-442 (2023).
* [15] Karra, K. _et al._ Global land use/land cover with sentinel 2 and deep learning. In _2021 IEEE international geoscience and remote sensing symposium IGARSS_, 4704-4707 (IEEE, 2021).
* [16] Brown, C. F. _et al._ Dynamic world, near real-time global 10 m land use land cover mapping. _Sci. Data_**9**, 251 (2022).
* [17] Yang, Y. _et al._ Mapping ecosystem services bundles to detect high-and low-value ecosystem services areas for land use management. _J. Clean. Prod._**225**, 11-17 (2019).
* [18] Orsi, F., Ciolli, M., Primmer, E., Varumo, L. & Geneletti, D. Mapping hotspots and bundles of forest ecosystem services across the european union. _Land use policy_**99**, 104840 (2020).
* [19] Senf, C. & Seidl, R. Mapping the forest disturbance regimes of europe. _Nat. Sustain._**4**, 63-70 (2021).
* [20] Margono, B. A., Potapov, P. V., Turubanova, S., Stolle, F. & Hansen, M. C. Primary forest cover loss in indonesia over 2000-2012. _Nat. climate change_**4**, 730-735 (2014).
* [21] Roman Dobarco, M. _et al._ Mapping soil organic carbon fractions for australia, their stocks, and uncertainty. _Biogeosciences_**20**, 1559-1586 (2023).
* [22] Asim, M., Brekke, C., Mahmood, A., Eltoft, T. & Reigstad, M. Improving chlorophyll-a estimation from sentinel-2 (msi) in the barents sea using machine learning. _IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens._**14**, 5529-5549 (2021).
* [23] Bouchard, M., Pothier, D. & Gauthier, S. Fire return intervals and tree species succession in the north shore region of eastern quebec. _Can. J. For. Res._**38**, 1621-1633 (2008).
* [24] Syphard, A. D., Sheehan, T., Rustigian-Romsos, H. & Ferschweiler, K. Mapping future fire probability under climate change: does vegetation matter? _PLoS One_**13**, e0201680 (2018).
* [25] Fan, L. _et al._ Siberian carbon sink reduced by forest disturbances. _Nat. Geosci._**16**, 56-62 (2023).
* [26] Schillaci, C. _et al._ Spatio-temporal topsoil organic carbon mapping of a semi-arid mediterranean region: The role of land use, soil texture, topographic indices and the influence of remote sensing data to modelling. _Sci. total environment_**601**, 821-832 (2017).
* [27] Keskin, H., Grunwald, S. & Harris, W. G. Digital mapping of soil carbon fractions with machine learning. _Geoderma_**339**, 40-58 (2019).
* [28] Bjanes, A., De La Fuente, R. & Mena, P. A deep learning ensemble model for wildfire susceptibility mapping. _Ecol. Informatics_**65**, 101397 (2021).
* [29] Zhang, H. K., Roy, D. P. & Luo, D. Demonstration of large area land cover classification with a one dimensional convolutional neural network applied to single pixel temporal metric percentiles. _Remote. Sens. Environ._**295**, 113653 (2023).
* [30] Gewin, V. Mapping opportunities. _Nature_**427**, 376-377 (2004).
* [31] Eidenshink, J. _et al._ A project for monitoring trends in burn severity. _Fire ecology_**3**, 3-21 (2007).
* [32] U.S. Department of the Interior. _Interior Invasive Species Strategic Plan, Fiscal Years 2021-2025_ (U.S. Department of the Interior, Washington, D.C., 2021).
* [33] European Parliament. Directive 2007/60/ec of the european parliament and of the council of 23 october 2007 on the assessment and management of flood risks. Tech. Rep. (2007).
* [34] Melo, J., Baker, T., Nemitz, D., Quegan, S. & Ziv, G. Satellite-based global maps are rarely used in forest reference levels submitted to the unfccc. _Environ. Res. Lett._**18**, 034021 (2023).
* [35] Rogelj, J. _et al._ Mitigation pathways compatible with 1.5 c in the context of sustainable development. In _Global warming of 1.5 C_, 93-174 (Intergovernmental Panel on Climate Change, 2018).
* [36] Janowicz, K. Philosophical foundations of geoai: Exploring sustainability, diversity, and bias in geoai and spatial data science. _arXiv preprint arXiv:2304.06508_ (2023).
* [37] Ploton, P. _et al._ Spatial validation reveals poor predictive performance of large-scale ecological mapping models. _Nat. communications_**11**, 4540 (2020).
* [38] Wadoux, A. M.-C., Heuvelink, G. B., De Bruin, S. & Brus, D. J. Spatial cross-validation is not the right way to evaluate map accuracy. _Ecol. Model._**457**, 109692 (2021).
* [39] Karasiak, N., Dejoux, J.-F., Monteil, C. & Sheeren, D. Spatial dependence between training and test sets: another pitfall of classification accuracy assessment in remote sensing. _Mach. Learn._**111**, 2715-2740 (2022).
* [40] Meyer, H. & Pebesma, E. Machine learning-based global maps of ecological variables and the challenge of assessing them. _Nat. Commun._**13**, 2208 (2022).
* [41] Kanevski, M., Pozdnoukhov, A. & Timonin, V. _Machine learning for spatial environmental data: theory, applications, and software_ (EPFL press, 2009).
* [42] Li, J., Heap, A. D., Potter, A. & Daniell, J. J. Application of machine learning methods to spatial interpolation of environmental variables. _Environ. Model. & Softw._**26**, 1647-1659 (2011).
* [43] Dale, M. R. & Fortin, M.-J. _Spatial analysis: a guide for ecologists_ (Cambridge University Press, 2014).
* [44] Thessen, A. Adoption of machine learning techniques in ecology and earth science. _One Ecosyst._**1**, e8621 (2016).
* [45] Feng, X. _et al._ A checklist for maximizing reproducibility of ecological niche models. _Nat. Ecol. & Evol._**3**, 1382-1395 (2019).
* [46] Meyer, H., Reudenbach, C., Wollauer, S. & Nauss, T. Importance of spatial predictor variable selection in machine learning applications-moving from data reproduction to spatial prediction. _Ecol. Model._**411**, 108815 (2019).
* [47] Tahmasebi, P., Kamrava, S., Bai, T. & Sahimi, M. Machine learning in geo-and environmental sciences: From small to large scale. _Adv. Water Resour._**142**, 103619 (2020).
* [48] Azevedo, A. & Santos, M. F. Kdd, semma and crisp-dm: a parallel overview. _IADS-DM_ (2008).
* [49] Schroer, C., Kruse, F. & Gomez, J. M. A systematic literature review on applying crisp-dm process model. _Procedia Comput. Sci._**181**, 526-534 (2021).
* [50] Sillero, N. _et al._ Want to model a species niche? a step-by-step guideline on correlative ecological niche modelling. _Ecol. Model._**456**, 109671 (2021).
* [51] Wirth, R. & Hipp, J. CRISP-DM: Towards a standard process model for data mining. In _Proceedings of the 4th international conference on the practical applications of knowledge discovery and data mining_, vol. 1, 29-39 (Manchester, 2000).
* [52] Wang, S., Azzari, G. & Lobell, D. B. Crop type mapping without field-level labels: Random forest transfer and unsupervised clustering techniques. _Remote. sensing environment_**222**, 303-317 (2019).
* [53] Wang, Y., Fang, Z. & Hong, H. Comparison of convolutional neural networks for landslide susceptibility mapping in yanshan county, china. _Sci. total environment_**666**, 975-993 (2019).
* [54] Yuan, Q. _et al._ Deep learning in environmental remote sensing: Achievements and challenges. _Remote. Sens. Environ._**241**, 111716 (2020).
* [55] You, N. _et al._ The 10-m crop type maps in northeast china during 2017-2019. _Sci. data_**8**, 41 (2021).
* [56] Jia, X. _et al._ A methodological framework for identifying potential sources of soil heavy metal pollution based on machine learning: A case study in the yangtze delta, china. _Environ. Pollut._**250**, 601-609 (2019).
* [57] Ozigis, M. S., Kaduk, J. D. & Jarvis, C. H. Mapping terrestrial oil spill impact using machine learning random forest and landsat 8 oli imagery: A case site within the niger delta region of nigeria. _Environ. Sci. Pollut. Res._**26**, 3621-3635 (2019).
* [58] Hamilton, H. _et al._ Increasing taxonomic diversity and spatial resolution clarifies opportunities for protecting us imperiled species. _Ecol. Appl._**32**, e2534 (2022).
* [59] Panahi, M., Sadhasivam, N., Pourghasemi, H. R., Rezaie, F. & Lee, S. Spatial prediction of groundwater potential mapping based on convolutional neural network (cnn) and support vector regression (svr). _J. Hydrol._**588**, 125033 (2020).
* [60] Nikitin, A. _et al._ Regulation-based probabilistic substance quality index and automated geo-spatial modeling for water quality assessment. _Sci. Reports_**11**, 1-14 (2021).
* [61] Potapov, P. _et al._ Mapping global forest canopy height through integration of gedi and landsat data. _Remote. Sens. Environ._**253**, 112165 (2021).
* [62] Harris, N. L. _et al._ Global maps of twenty-first century forest carbon fluxes. _Nat. Clim. Chang._**11**, 234-240 (2021).
* [63] Kubat, M., Matwin, S. _et al._ Addressing the curse of imbalanced training sets: one-sided selection. In _Icml_, vol. 97, 179 (Citeseer, 1997).
* [64] Kaur, H., Pannu, H. S. & Malhi, A. K. A systematic review on imbalanced data challenges in machine learning: Applications and solutions. _ACM Comput. Surv. (CSUR)_**52**, 1-36 (2019).
* [65] Jasiewicz, J. & Sobkowiak-Tabaka, I. Geo-spatial modelling with unbalanced data: modelling the spatial pattern of human activity during the stone age. _Open Geosci._**7** (2015).
* [66] Langford, Z., Kumar, J. & Hoffman, F. Wildfire mapping in interior alaska using deep neural networks on imbalanced datasets. In _2018 IEEE International Conference on Data Mining Workshops (ICDMW)_, 770-778 (IEEE, 2018).
* [67] Shaeri Karimi, S., Saintilan, N., Wen, L. & Valavi, R. Application of machine learning to model wetland inundation patterns across a large semiarid floodplain. _Water Resour. Res._**55**, 8765-8778 (2019).
* [68] Benkendorf, D. J. & Hawkins, C. P. Effects of sample size and network depth on a deep learning approach to species distribution modeling. _Ecol. Informatics_**60**, 101137 (2020).
* [69] Sharma, A., Ahuja, A., Devi, S. & Pasari, S. Use of spatio-temporal features for earthquake forecasting of imbalanced data. In _2022 International Conference on Intelligent Innovations in Engineering and Technology (ICIIET)_, 178-182 (IEEE, 2022).
* [70] GBIF.org. Accessed: 2021-08-17.
* [71] Anderson, R. P. _et al._ Final report of the task group on gbif data fitness for use in distribution modelling. _Glob. Biodivers. Inf. Facil._ 1-27 (2016).
* [72] Kubat, M., Holte, R. C. & Matwin, S. Machine learning for the detection of oil spills in satellite radar images. _Mach. learning_**30**, 195-215 (1998).
* [73] Shaban, M. _et al._ A deep-learning framework for the detection of oil spills from sar data. _Sensors_**21**, 2351 (2021).
* [74] Weiss, G. M. & Provost, F. Learning when training data are costly: The effect of class distribution on tree induction. _J. artificial intelligence research_**19**, 315-354 (2003).
* [75] Japkowicz, N. & Stephen, S. The class imbalance problem: A systematic study. _Intell. data analysis_**6**, 429-449 (2002).
* [76] Sun, Y., Wong, A. K. & Kamel, M. S. Classification of imbalanced data: A review. _Int. journal pattern recognition artificial intelligence_**23**, 687-719 (2009).
* [77] He, H. & Garcia, E. A. Learning from imbalanced data. _IEEE Transactions on knowledge data engineering_**21**, 1263-1284 (2009).
* [78] Chawla, N. V., Bowyer, K. W., Hall, L. O. & Kegelmeyer, W. P. Smote: synthetic minority over-sampling technique. _J. artificial intelligence research_**16**, 321-357 (2002).
* [79] Van Rijsbergen, C. Information retrieval: theory and practice. In _Proceedings of the joint IBM/University of Newcastle upon Tyne seminar on data base systems_, vol. 79 (1979).
* [80] Japkowicz, N. & Shah, M. _Evaluating learning algorithms: a classification perspective_ (Cambridge University Press, 2011).
* [81] Krawczyk, B. Learning from imbalanced data: open challenges and future directions. _Prog. Artif. Intell._**5**, 221-232 (2016).
* [82] Chawla, N. V., Japkowicz, N. & Kotcz, A. Special issue on learning from imbalanced data sets. _ACM SIGKDD explorations newsletter_**6**, 1-6 (2004).
* [83] Estabrooks, A. _A combination scheme for inductive learning from imbalanced data sets_. Ph.D. thesis, DalTech (2000).
* [84] Thuiller, W., Lafourcade, B., Engler, R. & Araujo, M. B. Biomod-a platform for ensemble forecasting of species distributions. _Ecography_**32**, 369-373 (2009).
* [85] Aiello-Lammens, M. E., Boria, R. A., Radosavljevic, A., Vilela, B. & Anderson, R. P.
spthin: an R package for spatial thinning of species occurrence records for use in ecological niche models. _Ecography_**38**, 541-545 (2015). * [86] Leroy, B., Meynard, C. N., Bellard, C. & Courchamp, F. virtualspecies, an r package to generate virtual species distributions. _Ecography_**39**, 599-607 (2016). * [87] Fick, S. E. & Hijmans, R. J. Worldclim 2: new 1-km spatial resolution climate surfaces for global land areas. _Int. journal climatology_**37**, 4302-4315 (2017). * [88] Siriseriwan, W. _A Collection of Oversampling Techniques for Class Imbalance Problem Based on SMOTE_ (2022). Version 1.3.1. * [89] Shelke, M. S., Deshmukh, P. R. & Shandilya, V. K. A review on imbalanced data handling using undersampling and oversampling technique. _Int. J. Recent Trends Eng. Res_**3**, 444-449 (2017). * [90] Kovacs, G. An empirical comparison and evaluation of minority oversampling techniques on a large number of imbalanced datasets. _Appl. Soft Comput._**83**, 105662 (2019). * [91] Fernandez, A., Garcia, S., Herrera, F. & Chawla, N. V. Smote for learning from imbalanced data: progress and challenges, marking the 15-year anniversary. _J. artificial intelligence research_**61**, 863-905 (2018). * [92] Zhang, S. & Yu, P. Seismic landslide susceptibility assessment based on adasyn-lda model. In _IOP Conference Series: Earth and Environmental Science_, vol. 525, 012087 (IOP Publishing, 2020). * [93] Perez-Porras, F.-J. _et al._ Machine learning methods and synthetic data generation to predict large wildfires. _Sensors_**21**, 3694 (2021). * [94] Cao, H., Xie, X., Shi, J. & Wang, Y. Evaluating the validity of class balancing algorithms-based machine learning models for geogenic contaminated groundwaters prediction. _J. Hydrol._**610**, 127933 (2022). * [95] Gomez-Escalonilla, V. _et al._ Multiclass spatial predictions of borehole yield in southern mali by means of machine learning classifiers. _J. Hydrol. Reg. Stud._**44**, 101245 (2022). * [96] He, H., Bai, Y., Garcia, E. A. & Li, S. Adasyn: Adaptive synthetic sampling approach for imbalanced learning. In _2008 IEEE international joint conference on neural networks (IEEE world congress on computational intelligence)_, 1322-1328 (IEEE, 2008). * [97] Han, H., Wang, W.-Y. & Mao, B.-H. Borderline-smote: a new over-sampling method in imbalanced data sets learning. In _Advances in Intelligent Computing: International Conference on Intelligent Computing, ICIC 2005, Hefei, China, August 23-26, 2005, Proceedings, Part I_ 1, 878-887 (Springer, 2005). * [98] Barua, S., Islam, M. M., Yao, X. & Murase, K. Mwmote-majority weighted minority oversampling technique for imbalanced data set learning. _IEEE Transactions on knowledge data engineering_**26**, 405-425 (2012). * [99] Batista, G. E., Prati, R. C. & Monard, M. C. A study of the behavior of several methods for balancing machine learning training data. _ACM SIGKDD explorations newsletter_**6**, 20-29 (2004). * [100] Shamsolmoali, P., Zareapoor, M., Wang, R., Zhou, H. & Yang, J. A novel deep structure u-net for sea-land segmentation in remote sensing images. _IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens._**12**, 3219-3232 (2019). * [101] Nowakowski, A. _et al._ Crop type mapping by using transfer learning. _Int. J. Appl. Earth Obs. Geoinformation_**98**, 102313 (2021). * [102] Illarionova, S. _et al._ Estimation of the canopy height model from multispectral satellite imagery with convolutional neural networks. _IEEE Access_**10**, 34116-34132 (2022). * [103] Simard, P. Y., Steinkraus, D., Platt, J. C. 
_et al._ Best practices for convolutional neural networks applied to visual document analysis. In _Icdar_, vol. 3 (Edinburgh, 2003). * [104] Yang, N., Zhang, Z., Yang, J. & Hong, Z. Applications of data augmentation in mineral prospectivity prediction based on convolutional neural networks. _Comput. & geosciences_**161**, 105075 (2022). * [105] Khosla, C. & Saini, B. S. Enhancing performance of deep learning models with different data augmentation techniques: A survey. In _2020 International Conference on Intelligent Engineering and Management (ICIEM)_, 79-85 (IEEE, 2020). * [106] Gatys, L. A., Ecker, A. S. & Bethge, M. A neural algorithm of artistic style. _arXiv preprint arXiv:1508.06576_ (2015). * [107] Xiao, Q. _et al._ Progressive data augmentation method for remote sensing ship image classification based on imaging simulation system and neural style transfer. _IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens._**14**, 9176-9186 (2021). * [108] Asami, K., Shono Fujita, K. & Hatayama, M. Data augmentation with synthesized damaged roof images generated by gan. In _Proceedings of the ISCRAM 2022 Conference Proceedings, 19th International Conference on Information Systems for Crisis Response and Management, Tarbes, France_, 7-9 (2022). * [109] Wang, Y. _et al._ Gan and cnn for imbalanced partial discharge pattern recognition in gis. _High Volt._**7**, 452-460 (2022). * [110] Al-Najjar, H. A., Pradhan, B., Sarkar, R., Beydoun, G. & Alamri, A. A new integrated approach for landslide data balancing and spatial prediction based on generative adversarial networks (gan). _Remote. Sens._**13**, 4011 (2021). * [111] Lv, N. _et al._ Remote sensing data augmentation through adversarial training. _IEEE J. Sel. Top. Appl. Earth Obs. Remote. Sens._**14**, 9318-9333 (2021). * [112] Sampath, V., Mauruta, I., Aguilar Martin, J. J. & Gutierrez, A. A survey on generative adversarial networks for imbalance problems in computer vision tasks. _J. big Data_**8**, 1-59 (2021). * [113] Elkan, C. The foundations of cost-sensitive learning. In _International joint conference on artificial intelligence_, vol. 17, 973-978 (Lawrence Erlbaum Associates Ltd, 2001). * [114] Tsai, C.-h., Chang, L.-c. & Chiang, H.-c. Forecasting of ozone episode days by cost-sensitive neural network methods. _Sci. Total. Environ._**407**, 2124-2135 (2009). * [115] Kang, M., Liu, Y., Wang, M., Li, L. & Weng, M. A random forest classifier with cost-sensitive learning to extract urban landmarks from an imbalanced dataset. _Int. J. Geogr. Inf. Sci._**36**, 496-513 (2022). * [116] Wu, M. _et al._ A multi-attention dynamic graph convolution network with cost-sensitive learning approach to road-level and minute-level traffic accident prediction. _IET Intell. Transp. Syst._**17**, 270-284 (2023). * [117] Tien Bui, D. _et al._ Gis-based modeling of rainfall-induced landslides using data mining-based functional trees classifier with adaboost, bagging, and multiboost ensemble frameworks. _Environ. Earth Sci._**75**, 1-22 (2016). * [118] Song, Y. _et al._ Landslide susceptibility mapping based on weighted gradient boosting decision tree in wanzhou section of the three gorges reservoir area (china). _ISPRS Int. J. Geo-Information_**8**, 4 (2018). * [119] Yu, H., Cooper, A. R. & Infante, D. M. Improving species distribution model predictive accuracy using species abundance: Application with boosted regression trees. _Ecol. Model._**432**, 109202 (2020). * [120] Kozlovskaia, N. & Zaytsev, A. Deep ensembles for imbalanced classification. 
In _2017 16th IEEE International Conference on Machine Learning and Applications (ICMLA)_, 908-913 (IEEE, 2017).
* [121] Sun, Y., Kamel, M. S., Wong, A. K. & Wang, Y. Cost-sensitive boosting for classification of imbalanced data. _Pattern recognition_**40**, 3358-3378 (2007).
* [122] Sun, Y., Wong, A. K. & Wang, Y. Parameter inference of cost-sensitive boosting algorithms. In _Machine Learning and Data Mining in Pattern Recognition: 4th International Conference, MLDM 2005, Leipzig, Germany, July 9-11, 2005. Proceedings 4_, 21-30 (Springer, 2005).
* [123] Cui, Y., Ma, H. & Saha, T. Improvement of power transformer insulation diagnosis using oil characteristics data preprocessed by smoteboost technique. _IEEE Transactions on Dielectr. Electr. Insulation_**21**, 2363-2373 (2014).
* [124] Seiffert, C., Khoshgoftaar, T. M., Van Hulse, J. & Napolitano, A. Rusboost: A hybrid approach to alleviating class imbalance. _IEEE Transactions on Syst. Man, Cybern. A: Syst. Humans_**40**, 185-197 (2009).
* [125] Eltehewy, R., Abouelfarag, A. & Saleh, S. N. Efficient classification of imbalanced natural disasters data using generative adversarial networks for data augmentation. _ISPRS Int. J. Geo-Information_**12**, 245 (2023).
* [126] Dong, Y., Xiao, H. & Dong, Y. Sa-cgan: An oversampling method based on single attribute guided conditional gan for multi-class imbalanced learning. _Neurocomputing_**472**, 326-337 (2022).
* [127] Li, W. _et al._ Eid-gan: Generative adversarial nets for extremely imbalanced data augmentation. _IEEE Transactions on Ind. Informatics_ (2022).
* [128] Schratz, P., Muenchow, J., Iturritxa, E., Richter, J. & Brenning, A. Hyperparameter tuning and performance assessment of statistical and machine-learning algorithms using spatial data. _Ecol. Model._**406**, 109-120 (2019).
* [129] Salazar, J. J., Garland, L., Ochoa, J. & Pyrcz, M. J. Fair train-test split in machine learning: Mitigating spatial autocorrelation for improved prediction accuracy. _J. Petroleum Sci. Eng._**209**, 109885 (2022).
* [130] Li, L., Tang, H., Lei, J. & Song, X. Spatial autocorrelation in land use type and ecosystem service value in hainan tropical rain forest national park. _Ecol. Indic._**137**, 108727 (2022).
* [131] Tiranti, D., Nicolo, G. & Gaeta, A. R. Shallow landslides predisposing and triggering factors in developing a regional early warning system. _Landslides_**16**, 235-251 (2019).
* [132] Ren, H., Shang, Y. & Zhang, S. Measuring the spatiotemporal variations of vegetation net primary productivity in inner mongolia using spatial autocorrelation. _Ecol. Indic._**112**, 106108 (2020).
* [133] Box, G. E., Jenkins, G. M., Reinsel, G. C. & Ljung, G. M. _Time series analysis: forecasting and control_ (San Francisco: Holden-Day, 1976).
* [134] Hubert, L. J., Golledge, R. G. & Costanzo, C. M. Generalized procedures for evaluating spatial autocorrelation. _Geogr. analysis_**13**, 224-233 (1981).
* [135] Leung, Y., Mei, C.-L. & Zhang, W.-X. Testing for spatial autocorrelation among the residuals of the geographically weighted regression. _Environ. Plan. A_**32**, 871-890 (2000).
* [136] Cho, S.-H., Lambert, D. M. & Chen, Z. Geographically weighted regression bandwidth selection and spatial autocorrelation: an empirical example using chinese agriculture data. _Appl. Econ. Lett._**17**, 767-772 (2010).
* [137] Gaspard, G., Kim, D. & Chun, Y. Residual spatial autocorrelation in macroecological and biogeographical modeling: a review. _J. Ecol. Environ._**43**, 1-11 (2019).
* [138] Crase, B., Liedloff, A., Vesk, P. A., Fukuda, Y. & Wintle, B. A. Incorporating spatial autocorrelation into species distribution models alters forecasts of climate-mediated range shifts. _Glob. Chang. Biol._**20**, 2566-2579 (2014).
* [139] Kim, D. _et al._ Predicting the influence of multi-scale spatial autocorrelation on soil-landform modeling. _Soil Sci. Soc. Am. J._**80**, 409-419 (2016).
* [140] Ching, J. & Phoon, K.-K. Impact of autocorrelation function model on the probability of failure. _J. Eng. Mech._**145**, 04018123 (2019).
* [141] Ceci, M., Corizzo, R., Malerba, D. & Rashkovska, A. Spatial autocorrelation and entropy for renewable energy forecasting. _Data Min. Knowl. Discov._**33**, 698-729 (2019).
* [142] Smith, D. B. _et al._ Geochemical and mineralogical data for soils of the conterminous united states. Tech. Rep., US Geological Survey (2013).
* [143] Dormann, C. F. _et al._ Methods to account for spatial autocorrelation in the analysis of species distributional data: a review. _Ecography_ 609-628 (2007).
* [144] Bachmaier, M. & Backes, M. Variogram or semivariogram? understanding the variances in a variogram. _Precis. Agric._**9**, 173-175 (2008).
* [145] Fortin, M.-J. & Dale, M. R. Spatial autocorrelation. _The SAGE handbook spatial analysis_ 89-103 (2009).
* [146] Isaaks, E. H. & Srivastava, R. M. _Applied geostatistics_ (Oxford University Press, 1989).
* [147] Getis, A. Reflections on spatial autocorrelation. _Reg. Sci. Urban Econ._**37**, 491-496 (2007).
* [148] Arbia, G., Griffith, D. & Haining, R. Error propagation modelling in raster gis: overlay operations. _Int. J. Geogr. Inf. Sci._**12**, 145-167 (1998).
* [149] Griffith, D. A. Effective geographic sample size in the presence of spatial autocorrelation. _Annals Assoc. Am. Geogr._**95**, 740-760 (2005).
* [150] Di, W., Zhou, Q.-B., Peng, Y. & Chen, Z.-X. Design of a spatial sampling scheme considering the spatial autocorrelation of crop acreage included in the sampling units. _J. Integr. Agric._**17**, 2096-2106 (2018).
* [151] Radocaj, D., Jug, I., Vukadinovic, V., Jurisic, M. & Gasparovic, M. The effect of soil sampling density and spatial autocorrelation on interpolation accuracy of chemical soil properties in arable cropland. _Agronomy_**11**, 2430 (2021).
* [152] Fortin, M.-J., Drapeau, P. & Legendre, P. Spatial autocorrelation and sampling design in plant ecology. _Prog. theoretical vegetation science_ 209-222 (1990).
* [153] Griffith, D. A. Establishing qualitative geographic sample size in the presence of spatial autocorrelation. _Annals Assoc. Am. Geogr._**103**, 1107-1122 (2013).
* [154] Scott Overton, W. & Stehman, S. V. Properties of designs for sampling continuous spatial resources from a triangular grid. _Commun. Stat. Methods_**22**, 251-264 (1993).
* [155] Dutilleul, P. & Pelletier, B. Tests of significance for structural correlations in the linear model of coregionalization. _Math. Geosci._**43**, 819-846 (2011).
* [156] Rocha, A. D., Groen, T. A., Skidmore, A. K. & Willemen, L. Role of sampling design when predicting spatially dependent ecological data with remote sensing. _IEEE transactions on geoscience remote sensing_**59**, 663-674 (2020).
* [157] O'brien, R. M. A caution regarding rules of thumb for variance inflation factors. _Qual. & quantity_**41**, 673-690 (2007).
* [158] Cavanaugh, J. E. & Neath, A. A. The akaike information criterion: Background, derivation, properties, application, interpretation, and refinements. _Wiley Interdiscip. Rev. Comput. Stat._**11**, e1460 (2019).
* [159] Le Rest, K., Pinaud, D., Monestiez, P., Chadoeuf, J. & Bretagnolle, V. Spatial leave-one-out cross-validation for variable selection in the presence of spatial autocorrelation. _Glob. ecology biogeography_**23**, 811-820 (2014). * [160] Zhao, Z., Wu, J., Cai, F., Zhang, S. & Wang, Y.-G. A hybrid deep learning framework for air quality prediction with spatial autocorrelation during the covid-19 pandemic. _Sci. Reports_**13**, 1015 (2023). * [161] Liu, X., Kounadi, O. & Zurita-Milla, R. Incorporating spatial autocorrelation in machine learning models using spatial lag and eigenvector spatial filtering features. _ISPRS Int. J. Geo-Information_**11**, 242 (2022). * [162] Kim, H.-J. _et al._ Spatial autocorrelation incorporated machine learning model for geotechnical subsurface modeling. _Appl. Sci._**13**, 4497 (2023). * [163] Anselin, L. _Spatial econometrics: methods and models_, vol. 4 (Springer Science & Business Media, 1988). * [164] Lichstein, J. W., Simons, T. R., Shriner, S. A. & Franzreb, K. E. Spatial autocorrelation and autoregressive models in ecology. _Ecol. monographs_**72**, 445-463 (2002). * [165] LeSage, J. P. An introduction to spatial econometrics. _Revue d'economie industrielle_ 19-44 (2008). * [166] Brunsdon, C., Fotheringham, S. & Charlton, M. Geographically weighted regression. _J. Royal Stat. Soc. Ser. D (The Stat._**47**, 431-443 (1998). * [167] Legendre, P. & FORTIN, M.-J. Comparison of the mantel test and alternative approaches for detecting complex multivariate relationships in the spatial analysis of genetic data. _Mol. ecology resources_**10**, 831-844 (2010). * [168] Diniz-Filho, J. A. F., Bini, L. M. & Hawkins, B. A. Spatial autocorrelation and red herrings in geographical ecology. _Glob. ecology Biogeogr._**12**, 53-64 (2003). * [169] Banerjee, S., Carlin, B. P. & Gelfand, A. E. _Hierarchical modeling and analysis for spatial data_ (CRC press, 2014). * [170] Sergeev, A., Buevich, A., Baglaeva, E. & Shichkin, A. Combining spatial autocorrelation with machine learning increases prediction accuracy of soil heavy metals. _Catena_**174**, 425-435 (2019). * [171] Pohjankukka, J., Pahikkala, T., Nevalainen, P. & Heikkonen, J. Estimating the prediction performance of spatial models via spatial k-fold cross validation. _Int. J. Geogr. Inf. Sci._**31**, 2001-2019 (2017). * [172] Mila, C., Mateu, J., Pebesma, E. & Meyer, H. Nearest neighbour distance matching leave-one-out cross-validation for map validation. _Methods Ecol. Evol._**13**, 1304-1316 (2022). * [173] Koldasbayeva, D., Tregubova, P., Shadrin, D., Gasanov, M. & Pukalchik, M. Large-scale forecasting of heracleum sosnowskyi habitat suitability under the climate change on publicly available data. _Sci. reports_**12**, 6128 (2022). * [* [174] Fotheringham, A. S. & Brunsdon, C. Local forms of spatial analysis. _Geogr. analysis_**31**, 340-358 (1999). * [175] Roberts, D. R. _et al._ Cross-validation strategies for data with temporal, spatial, hierarchical, or phylogenetic structure. _Ecography_**40**, 913-929 (2017). * [176] Negret, P. J. _et al._ Effects of spatial autocorrelation and sampling design on estimates of protected area effectiveness. _Conserv. Biol._**34**, 1452-1462 (2020). * [177] Zurell, D. _et al._ A standard protocol for reporting species distribution models. _Ecography_**43**, 1261-1277 (2020). * [178] Valavi, R., Elith, J., Lahoz-Monfort, J. J. & Guillera-Arroita, G. 
blockcv: An r package for generating spatially or environmentally separated folds for k-fold cross-validation of species distribution models. _Biorxiv_ 357798 (2018). * [179] Poggio, L. _et al._ Soilgrids 2.0: producing soil information for the globe with quantified spatial uncertainty. _Soil_**7**, 217-240 (2021). * [180] Abdar, M. _et al._ A review of uncertainty quantification in deep learning: Techniques, applications and challenges. _Inf. Fusion_**76**, 243-297 (2021). * [181] Bassett Jr, G. & Koenker, R. Asymptotic theory of least absolute error regression. _J. Am. Stat. Assoc._**73**, 618-622 (1978). * [182] Shrestha, D. L. & Solomatine, D. P. Machine learning approaches for estimation of prediction interval for the model output. _Neural networks_**19**, 225-235 (2006). * [183] Rahmati, O. _et al._ Predicting uncertainty of machine learning models for modelling nitrate pollution of groundwater using quantile regression and uneec methods. _Sci. Total. Environ._**688**, 855-866 (2019). * [184] Kasraei, B. _et al._ Quantile regression as a generic approach for estimating uncertainty of digital soil maps produced from machine-learning. _Environ. Model. & Softw._**144**, 105139 (2021). * [185] Efron, B. Bootstrap methods: another look at the jackknife. In _Breakthroughs in statistics: Methodology and distribution_, 569-593 (Springer, 1992). * [186] Heskes, T. Practical confidence and prediction intervals. _Adv. neural information processing systems_**9** (1996). * [187] Nix, D. A. & Weigend, A. S. Estimating the mean and variance of the target probability distribution. In _Proceedings of 1994 ieee international conference on neural networks (ICNN'94)_, vol. 1, 55-60 (IEEE, 1994). * [188] Song, X. _et al._ Modeling spatio-temporal distribution of soil moisture by deep learning-based cellular automata model. _J. Arid Land_**8**, 734-748 (2016). * [189] Chen, X.-Y. & Chau, K.-W. Uncertainty analysis on hybrid double feedforward neural network model for sediment load estimation with lube method. _Water Resour. Manag._**33**, 3563-3577 (2019). * [190] Szatmari, G. & Pasztor, L. Comparison of various uncertainty modelling approaches based on geostatistics and machine learning algorithms. _Geoderma_**337**, 1329-1340 (2019). * [191] Takoutsing, B. & Heuvelink, G. B. Comparing the prediction performance, uncertainty quantification and extrapolation potential of regression kriging and random forest while accounting for soil measurement errors. _Geoderma_**428**, 116192 (2022). * [192] Ellison, A. M. Bayesian inference in ecology. _Ecol. letters_**7**, 509-520 (2004). * [193] Gal, Y. & Ghahramani, Z. Dropout as a bayesian approximation: Representing model uncertainty in deep learning. In _international conference on machine learning_, 1050-1059 (PMLR, 2016). * [194] Kupinski, M. A., Hoppin, J. W., Clarkson, E. & Barrett, H. H. Ideal-observer computation in medical imaging with use of markov-chain monte carlo techniques. _JOSA A_**20**, 430-438 (2003). * [195] Swiatkowski, J. _et al._ The k-tied normal distribution: A compact parameterization of gaussian mean field posteriors in bayesian neural networks. In _International Conference on Machine Learning_, 9289-9299 (PMLR, 2020). * [196] Lu, Q. _et al._ Risk analysis for reservoir flood control operation considering two-dimensional uncertainties based on bayesian network. _J. Hydrol._**589**, 125353 (2020). * [197] Liu, Y. _et al._ Probabilistic spatiotemporal wind speed forecasting based on a variational bayesian deep learning model. _Appl. 
Energy_**260**, 114259 (2020). * [198] Harrison, K. W., Kumar, S. V., Peters-Lidard, C. D. & Santanello, J. A. Quantifying the change in soil moisture modeling uncertainty from remote sensing observations using bayesian inference techniques. _Water Resour. Res._**48** (2012). * [199] Cook, A., Marion, G., Butler, A. & Gibson, G. Bayesian inference for the spatio-temporal invasion of alien species. _Bull. mathematical biology_**69**, 2005-2025 (2007). * [* [200] Meinshausen, N. & Ridgeway, G. Quantile regression forests. _J. machine learning research_**7** (2006). * [201] Sylvain, J.-D., Anctil, F. & Thiffault, E. Using bias correction and ensemble modelling for predictive mapping and related uncertainty: a case study in digital soil mapping. _Geoderma_**403**, 115153 (2021). * [202] Brungard, C. _et al._ Regional ensemble modeling reduces uncertainty for digital soil mapping. _Geoderma_**397**, 114998 (2021). * [203] Pearce, T., Brintrup, A., Zaki, M. & Neely, A. High-quality prediction intervals for deep learning: A distribution-free, ensembled approach. In _International conference on machine learning_, 4075-4084 (PMLR, 2018). * [204] Gavilan-Acuna, G. _et al._ Reducing the uncertainty of radiata pine site index maps using an spatial ensemble of machine learning models. _Forests_**12**, 77 (2021). * [205] Zhao, D., Wang, J., Zhao, X. & Triantafilis, J. Clay content mapping and uncertainty estimation using weighted model averaging. _Catena_**209**, 105791 (2022). * [206] Jansen, J. _et al._ Stop ignoring map uncertainty in biodiversity science and conservation policy. _Nat. Ecol. & Evol._**6**, 828-829 (2022). * [207] Lucchesi, L. R. & Wikle, C. K. Visualizing uncertainty in areal data with bivariate choropleth maps, map pixelation and glyph rotation. _Stat_**6**, 292-302 (2017). * [208] Bivand, R. S., Pebesma, E. J., Gomez-Rubio, V. & Pebesma, E. J. _Applied spatial data analysis with R_, vol. 747248717 (Springer, 2008). * [209] Pebesma, E. & Bivand, R. _Spatial Data Science: With applications in R_ (Chapman and Hall/CRC, 2023). * [210] Hijmans, R. J. _et al._ Package 'raster'. _R package_**734**, 473 (2015). * [211] Jordahl, K. _et al._ geopandas/geopandas: v0.8.1, DOI: 10.5281/zenodo.3946761 (2020). * [212] Gillies, S. _rasterio Documentation, Release 1.4dev_ (2023). Software documentation. * [213] GDAL/OGR contributors. _GDAL/OGR Geospatial Data Abstraction software Library_. Open Source Geospatial Foundation, DOI: 10.5281/zenodo.5884351 (2023). * [214] Rey, S. J. & Anselin, L. PySAL: A Python Library of Spatial Analytical Methods. _The Rev. Reg. Stud._**37**, 5-27 (2007). * [215] Cordon, I., Garcia, S., Fernandez, A. & Herrera, F. Imbalance: Oversampling algorithms for imbalanced classification in r. _Knowledge-Based Syst._**161**, 329-341 (2018). * [216] Huifeldt, E. _themis: Extra Recipes Steps for Dealing with Unbalanced Data_ (2023). Https://github.com/tidymodels/themis, [https://themis.tidymodels.org](https://themis.tidymodels.org). * [217] Lemaitre, G., Nogueira, F. & Aridas, C. K. Imbalanced-learn: A python toolbox to tackle the curse of imbalanced datasets in machine learning. _J. Mach. Learn. Res._**18**, 1-5 (2017). * [218] Thuiller, W. _et al._ Package 'biomod2'. _Species distribution modeling within an ensemble forecasting framework_ (2016). * [219] Roger Bivand. R packages for analyzing spatial data: A comparative case study with areal data. _Geogr. Analysis_**54**, 488-518, DOI: 10.1111/gean.12319 (2022). * [220] Bjornstad, O. N. & Bjornstad, M. O. N. Package 'ncf'. 
_Spatial nonparametric covariance functions_ (2016). * [221] Bachl, F. E., Lindgren, F., Borchers, D. L. & Illian, J. B. inlabru: an r package for bayesian spatial modelling from ecological survey data. _Methods Ecol. Evol._**10**, 760-766 (2019). * [222] Lucchesi, L. & Kuhnert, P. _Vizumap: Visualizing uncertainty in spatial data_ (2023). Https://lydialucchesi.github.io/Vizumap/, [https://github.com/lydialucchesi/Vizumap](https://github.com/lydialucchesi/Vizumap). * [223] Heuvelink, G. B., Brown, J. D. & van Loon, E. E. A probabilistic framework for representing and simulating uncertain environmental variables. _Int. J. Geogr. Inf. Sci._**21**, 497-513 (2007). * [224] Chung, Y., Char, I., Guo, H., Schneider, J. & Neiswanger, W. Uncertainty toolbox: an open-source library for assessing, visualizing, and improving uncertainty quantification. _arXiv preprint arXiv:2109.10254_ (2021). * [225] Gorelick, N. _et al._ Google earth engine: Planetary-scale geospatial analysis for everyone. _Remote. Sens. Environ._ DOI: 10.1016/j.rse.2017.06.031 (2017). * [226] Anderson, S. C., Ward, E. J., English, P. A. & Barnett, L. A. K. sdmtmb: an r package for fast, flexible, and user-friendly generalized linear mixed effects models with spatial and spatiotemporal random fields. _bioRxiv_**2022.03.24.485545**, DOI: 10.1101/2022.03.24.485545 (2022). * [227] Uieda, L. Verde: Processing and gridding spatial data using Green's functions. _J. Open Source Softw._**3**, 957, DOI: 10.21105/joss.00957 (2018). * [228] Muller, S., Schuler, L., Zech, A. & Hesse, F. Gstools v1. 3: a toolbox for geostatistical modelling in python. _Geosci. Model. Dev._**15**, 3161-3182 (2022). * [229] Zhai, X., Kolesnikov, A., Houlsby, N. & Beyer, L. Scaling vision transformers. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, 12104-12113 (2022). * [230] Sun, C., Shrivastava, A., Singh, S. & Gupta, A. Revisiting unreasonable effectiveness of data in deep learning era. In _Proceedings of the IEEE international conference on computer vision_, 843-852 (2017). * [231] Oquab, M. _et al._ DINOV2: Learning robust visual features without supervision. _arXiv preprint arXiv:2304.07193_ (2023). * [232] Touvron, H. _et al._ Llama 2: Open foundation and fine-tuned chat models. _arXiv preprint arXiv:2307.09288_ (2023). * [233] Veillette, M., Samsi, S. & Mattioli, C. SEVIR: A storm event imagery dataset for deep learning applications in radar and satellite meteorology. _Adv. Neural Inf. Process. Syst._**33**, 22009-22019 (2020). * [234] Gao, Z. _et al._ Earthformer: Exploring space-time transformers for earth system forecasting. _Adv. Neural Inf. Process. Syst._**35**, 25390-25403 (2022). * [235] Ravuri, S. _et al._ Skilful precipitation nowcasting using deep generative models of radar. _Nature_**597**, 672-677 (2021). * [236] Zeng, A. _et al._ Socratic models: Composing zero-shot multimodal reasoning with language. In _The Eleventh International Conference on Learning Representations_ (2022). * [237] Mohanty, S. P. _et al._ Deep learning for understanding satellite imagery: An experimental survey. _Front. Artif. Intell._**3**, 534696 (2020). * [238] Novikov, G., Trekin, A., Potapov, G., Ignatiev, V. & Burnaev, E. Satellite imagery analysis for operational damage assessment in emergency situations. In _Business Information Systems: 21st International Conference, BIS 2018, Berlin, Germany, July 18-20, 2018, Proceedings 21_, 347-358 (Springer, 2018). * [239] Burnaev, E. V. 
_et al._ Fundamental research and developments in the field of applied artificial intelligence. In _Doklady Mathematics_, vol. 106, S14-S22 (Springer, 2022). * [240] Kenthapadi, K., Lakkaraju, H., Natarajan, P. & Sameki, M. Model monitoring in practice: lessons learned and open challenges. In _Proceedings of the 28th ACM SIGKDD conference on knowledge discovery and data mining_, 4800-4801 (2022). * [241] Gama, J., Ziobaite, I., Bifet, A., Pechenizkiy, M. & Bouchachia, A. A survey on concept drift adaptation. _ACM computing surveys (CSUR)_**46**, 1-37 (2014). * [242] Vela, D. _et al._ Temporal quality degradation in ai models. _Sci. reports_**12**, 11654 (2022). * [243] van de Ven, G. M., Tuytelaars, T. & Tolias, A. S. Three types of incremental learning. _Nat. Mach. Intell._**4**, 1185-1197 (2022).
With the rise of electronic data, particularly Earth observation data, geospatial modelling based on machine learning (ML) has gained popularity in environmental research. Accurate geospatial predictions are vital for research built on ecosystem monitoring and quality assessment, as well as for policy-making and action planning aimed at the effective management of natural resources. ML methods have generally proved accurate and computationally efficient. However, many questions have yet to be addressed to obtain precise and reproducible results suitable for further use in both research and practice. A better understanding of the ML concepts applicable to geospatial problems supports the development of data science tools that provide transparent information, crucial for making decisions on global challenges such as biosphere degradation and climate change. This survey reviews common nuances in geospatial modelling, such as imbalanced data, spatial autocorrelation, prediction errors, model generalisation, domain specificity, and uncertainty estimation. We provide an overview of techniques and popular programming tools to overcome or account for these challenges. We also discuss prospects for geospatial Artificial Intelligence in environmental applications.
On the impact of key design aspects in simulated Hybrid Quantum Neural Networks for Earth Observation

Lorenzo Papa, Alessandro Sebastianelli, Gabriele Meoni, and Irene Amerini

L. Papa and I. Amerini are with the Department of Computer, Control and Management Engineering, Sapienza University of Rome, Italy, IT, 00185. E-mail: {papa, amerini}@diag.uniroma1.it. A. Sebastianelli is with the \(\phi\)-lab at European Space Agency (ESA), Frascati, Italy, IT, 00078. E-mail: alessandro.sebastianelli@esa.int. G. Meoni is with the \(\phi\)-lab, ESA, Frascati, Italy, IT, 00078 and with the Advanced Concepts and Studies Office, ESA, Keplerlaan 1, 2201 AZ Noordwijk, Netherlands, NL. The work was developed during the visiting research period of L. Papa at the \(\phi\)-lab, European Space Agency (ESA), Frascati, Italy. Manuscript received April 19, 2021; revised August 16, 2021.

## I Introduction

The advent of quantum computing has introduced revolutionary opportunities for tackling machine learning (ML) tasks from a new and powerful perspective. Quantum computing leverages the principles of superposition, entanglement, and quantum interference, which enable the processing of complex computations that are far beyond the capabilities of classical systems. In recent years, traditional ML approaches, especially deep learning (DL), have shown outstanding performance across several domains, including image recognition, natural language processing, and Earth Observation (EO). However, as researchers' pursuit of higher accuracy and efficiency continues to grow, the limitations of traditional computing become more evident. As a result, the integration of quantum computing with traditional DL paradigms has emerged as a promising research frontier across various domains. This development is primarily driven by the ability of quantum algorithms to process and encode high-dimensional data in ways that classical systems find challenging, thereby offering the potential for improved accuracy and efficiency. Furthermore, quantum-enhanced models are capable of exploring larger solution spaces more effectively, which may lead to faster training convergence, improved generalization, and superior performance, especially in tasks involving large-scale and complex datasets. Following this research trend, several works on remote sensing data have investigated the use of such innovative technologies for EO tasks. However, despite the growing interest in the combination of quantum computing (QC) and DL for EO (DL4EO), the majority of the existing research has focused primarily on advancing hybrid models from an architectural perspective, i.e., on (convolutional) encoding and/or quantum circuit components. Despite such significant advances, these works address just a portion of the broader challenges and potential associated with the application of quantum-enhanced models in EO tasks. Consequently, building on previous related works and motivated by Zaidenberg et al. [1], this study explores several key aspects that are central to advancing the field of hybrid quantum DL models and their applications in EO tasks. More in detail, the rationale of this work is threefold:

1. Starting from Zaidenberg et al. [1], this study aims to evaluate the behavior of quantum computing libraries used to train quantum neural network (QNN) architectures.
2. Investigate and compare the sensitivity of both quantized and non-quantized neural networks to different initializations (i.e., seed values). Specifically, we examine the convergence behavior of the chosen architectures when subject to different starting conditions.
3. Explore the potential of hybrid quantum architectures by incorporating simple single-qubit quantum circuits into Vision Transformer (ViT) structures. The objective is to push the boundaries of the work proposed by Zaidenberg et al. [1], assessing the performance of these novel HQViT models on Earth Observation (EO) tasks and comparing their behavior to their non-quantized counterparts.

However, while quantum computing has the potential to open new prospects compared to traditional DL frameworks, it still faces several significant challenges and limitations. Concerns include hardware stability, error rates, scalability, and the difficulty of developing effective quantum algorithms. These challenges may limit the practical deployment of quantum-enhanced neural networks and have to be considered while assessing their potential in real-world applications. Summarizing: (1) by evaluating different quantum libraries, this research seeks to uncover potential performance discrepancies and challenges that may arise when implementing quantum-enhanced neural networks; (2) understanding how initialization impacts the stability and training efficacy of quantum and classical networks, as well as their sensitivity to initial parameter choices, addresses a common concern in neural network training; (3) the third study is motivated by the rising interest in hybrid quantum-classical techniques, which exploit quantum components to augment the capabilities of classical neural architectures in specific domains such as image classification and remote sensing. As a result, while considering quantum limitations, this work contributes to the growing body of knowledge on QNNs by systematically investigating the interactions between quantum libraries, initialization values, and hybrid model structures, with a particular focus on their application to EO tasks.

The rest of this paper is organized as follows: Section II reviews the relevant literature on quantum and non-quantum deep learning approaches for Earth Observation (EO). Section III outlines the three case studies, highlighting their respective challenges. Section IV provides a detailed description of the dataset and the implementation specifics required to replicate the reported experiments. Section V presents and analyzes the experimental results, while Section VI provides final thoughts and discusses future research directions.

## II Related Works

The rapid advancements in both DL and quantum computing have generated significant interest in the EO domain in recent years. Consequently, we review recent related studies applied to EO tasks. This section provides a comprehensive overview of hybrid approaches, key challenges, and the potential of quantum computing in EO. Zeng et al. [2] (2020) laid the groundwork by introducing the Quantum Mechanism Effect Spectral Clustering (QMESC) model. Their model leverages quantum mechanics to tackle pixel mixture challenges in hyperspectral images, using Green's function to accurately decompose mixed pixels and identify cluster centers with quantum potential energy. The following year, Zaidenberg et al.
[1] (2021) further advanced this field by developing a QNN model for remote sensing image classification using the EuroSAT dataset [3]. Their study emphasizes the speed and feasibility of quantum machine learning (QML) for EO, showcasing performance on par with classical models. The authors focus on qubit decoherence and data processing on Noisy Intermediate-Scale Quantum (NISQ) devices, underscoring the need for improvements in data handling and model scalability for future applications. In the same year, Otgonbaatar and Datcu [4] (2021) explored quantum annealing with a D-Wave quantum computer for feature selection in hyperspectral images. Their Mutual Information-based method identifies the most informative spectral bands, demonstrated on the Indian Pine dataset. By employing quantum classifiers like Qboost, their approach achieved comparable or improved accuracy over classical methods, illustrating quantum annealing's potential for remote sensing data processing. Sebastianelli et al. [5] (2022) built upon [1], introducing a hybrid quantum convolutional neural network (HQCNN) that incorporates quantum layers within a classical CNN for enhanced land-use classification. Tested on the EuroSAT dataset, the authors show that the HQCNN can improve on traditional DL models by leveraging entanglement for improved classification accuracy. This work highlights the potential of quantum circuits for EO, paving the way for future applications with hybrid architectures. Furthermore, Mate et al. [6] (2022) proposed an ansatz-free optimization technique for quantum circuits, parameterizing circuits in the Lie algebra to simplify optimization and enhance training speed. This approach enables flexible exploration of quantum circuits, avoiding the constraints of fixed architectures. Tested on both toy and image classification tasks, their method demonstrates the computational advantages of unitary optimization, adding robustness to quantum machine learning models. Expanding on hybrid quantum-classical approaches, Otgonbaatar et al. [7] (2022) investigated networks for large-scale EO data processing. They identified real-world problems suitable for quantum computing and proposed encoding strategies on NISQ devices. Their comparisons between hybrid models and conventional techniques underscore the potential for quantum computing to handle big-data challenges, even amid hardware limitations. Moreover, Gupta et al. [8] (2022) examined the integration of classical neural networks with Projected Quantum Kernel (PQK) features for Land Use and Land Cover tasks using Sentinel-2 data. They found that PQK significantly improved training accuracy, highlighting the advantages of QML in handling multispectral EO data. This study suggests promising avenues for future applications of quantum-enhanced features in remote sensing. Further developments in 2023 saw Gupta et al. [9] investigating PQK features for multispectral classification. They achieved substantial accuracy gains, underscoring the utility of quantum kernels for complex EO datasets. Chang et al. [10] introduced Equivariant Quantum Convolutional Neural Networks (EquivQCNN), which leverage planar symmetries to enhance generalization and performance, particularly in data-limited scenarios, highlighting the potential of symmetry-based quantum models in EO. Furthermore, Nammouchi et al. [11] (2023) provided a comprehensive review of QML applications in climate change and sustainability, emphasizing quantum methods' potential in areas like energy systems and disaster prediction.
They also discuss challenges with current quantum hardware, suggesting that QML could improve model accuracy and data processing efficiency in climate research, with potential expansions into modeling extreme events. Moreover, Otgonbaatar et al. [12] (2023) first explored hybrid quantum transfer learning, combining a classical VGG16 with QML for high-dimensional EO datasets. They compared real-amplitude and strongly entangling quantum networks, finding that the latter often yielded better accuracy due to their local effective dimension, despite challenges related to limited quantum resources. Subsequently, in another study, Otgonbaatar et al. [13] (2023) employed quantum-inspired tensor networks to enhance deep learning models for Earth science tasks. They focused on compressing physics-informed neural networks (PINNs) and improving the spectral resolution of hyperspectral images, achieving computational efficiency without compromising accuracy. Recently, Fan et al. [14] (2024) presented two HQCNNs for land cover classification using Sentinel-2 multispectral images. Their models combine quantum computing for feature extraction and classical methods for classification, achieving a performance boost over traditional CNNs. Similar to previous studies, this research underlines the advantages of hybrid convolutional models in handling large EO datasets with improved accuracy and transferability. Moreover, Meyer et al. [15] (2024) investigated a different approach by applying quantum reinforcement learning to cognitive synthetic aperture radar (SAR) data for ship detection in maritime monitoring. Their two-stage approach integrates variational quantum circuits for scene adaptation and resource optimization, demonstrating how quantum methods could enhance SAR systems' adaptability and efficiency in EO. This timeline of advancements demonstrates the growing potential of quantum computing in EO, from quantum-enhanced clustering and feature selection to hybrid architectures and reinforcement learning. These studies collectively underscore the transformative impact quantum computing could have on EO, offering promising directions for future research and applications. Consequently, this research builds on previous knowledge by exploring, through three case studies, less-investigated quantum aspects, i.e., quantum libraries, initialization sensitivity, and attention-based quantum structures.

## III Cases of Study

This section presents the three key areas of investigation in this study: quantum libraries in Section III-A, model robustness in Section III-B, and architectural design in Section III-C. More in detail, we first describe the quantum computing libraries utilized for training quantum neural networks, examining their strengths and limitations. Next, we look into the sensitivity to initialization of both quantized and non-quantized models by analyzing the impact of different random seed initializations. Finally, we define the architectures employed in this study and introduce the novel hybrid quantum ViTs.

### _Quantum Libraries_

Quantum computing has emerged as a revolutionary field capable of solving complex problems that are intractable for conventional computers, i.e., challenges that are too difficult or highly time-consuming.
This capability is mainly due to the fact that, unlike classical computers, which process information using bits (0s and 1s), quantum computers use quantum bits (qubits), allowing them to perform several calculations simultaneously. As researchers and developers explore this new frontier, various quantum computing libraries have been developed to facilitate the design, simulation, and execution of quantum algorithms. In this domain, two well-known frameworks are Qiskit and PennyLane, each offering specific features and capabilities that cover various elements of quantum computing and its integration with traditional machine learning techniques.

**Qiskit** was developed by IBM; it is a comprehensive framework that offers a wide range of tools for designing, simulating, and executing quantum circuits. One of its key features is the ability to access real quantum hardware through the IBM Quantum platform, which allows for practical experimentation. Qiskit's modular architecture enables users to work with specific components, such as Qiskit Terra for circuit creation, Qiskit Aer for simulation, and Qiskit Ignis for error mitigation; this structure covers a wide range of applications and requirements. Additionally, Qiskit benefits from extensive documentation and a large community, which facilitates learning and troubleshooting. However, compared with PennyLane, the required interaction between multiple components can make Qiskit trickier to use, demanding a substantial investment of time and effort. Furthermore, while Qiskit provides access to quantum devices, performance and availability can be influenced by hardware limitations, such as qubit count and coherence time. As quantum circuits scale in size, managing complexity and ensuring effective execution on available hardware becomes increasingly challenging.

**PennyLane** was developed by Xanadu; it is specifically designed for hybrid quantum-classical computations, and it integrates with popular machine learning libraries like PyTorch and TensorFlow. This integration allows for the efficient development of hybrid quantum models, which is one of PennyLane's key advantages. The framework supports differentiable quantum programming, enabling users to optimize quantum circuits alongside traditional neural networks using backpropagation techniques. Moreover, PennyLane's design is flexible, supporting multiple quantum hardware platforms and simulators and providing researchers with a wide range of experimental choices. However, PennyLane also has its drawbacks. For instance, even though the framework supports several backends, users may find limited access to real quantum devices, depending on the platform they choose. Additionally, the learning curve associated with understanding hybrid models and differentiable programming can pose challenges for the model's convergence behavior.

In summary, each quantum library has advantages and disadvantages that pose challenges in its implementation and usage. Moreover, the development of hybrid models that effectively leverage both classical and quantum components does not come without limitations, even with the support of such powerful libraries. Furthermore, the research field of quantum computing is evolving rapidly, necessitating continuous learning and adaptation to new features and best practices within these libraries.
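To make the comparison concrete, the sketch below expresses the single-qubit circuit used throughout this paper (a Hadamard gate followed by a parameterized Y-rotation and a measurement, see Section III-C) in both libraries. This is a minimal illustration, not the authors' code; the parameter value 0.3 is arbitrary.

```python
import torch
import pennylane as qml
from qiskit import QuantumCircuit
from qiskit.circuit import Parameter
from qiskit.quantum_info import Statevector

# --- PennyLane: the circuit is a differentiable node in a PyTorch graph ---
dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def pl_circuit(theta):
    qml.Hadamard(wires=0)        # create superposition
    qml.RY(theta, wires=0)       # parameterized rotation
    return qml.expval(qml.PauliZ(0))

theta = torch.tensor(0.3, requires_grad=True)
pl_circuit(theta).backward()     # gradients flow through the quantum node

# --- Qiskit: the same circuit, evaluated here via statevector simulation ---
phi = Parameter("phi")
qc = QuantumCircuit(1)
qc.h(0)
qc.ry(phi, 0)
state = Statevector.from_instruction(qc.assign_parameters({phi: 0.3}))
print(state.probabilities())     # [P(0), P(1)]
```

The PennyLane version plugs directly into a torch training loop, which is the property exploited by the hybrid models of Section III-C.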
Motivated by the previous claims, in this first case study we investigate the practical usage and the convergence behavior of models in hybrid quantum settings, in order to understand each framework's strengths and weaknesses.

### _Sensitivity to initialization_

In DL, sensitivity to initialization refers to how the initial weights and biases affect the training dynamics, convergence rate, and final performance of a model. Stability, on the other hand, measures the consistency of a model's performance across different runs under varying initial conditions, such as different random seeds. Sensitivity to initialization is therefore crucial in DL research, as it influences how the optimization process navigates the high-dimensional loss landscape. More in detail, the weights of neural network architectures are typically initialized randomly or according to a given distribution guided by a random value (seed), such as the Normal, the Uniform, and many others. From a mathematical point of view, given a loss function \(\mathcal{L}(\theta)\), where \(\theta\) represents the parameters of the model, a standard training procedure minimizes (or maximizes) this function through an optimization algorithm. Equation 1 reports how the parameters are updated at each time step (\(t+1\)):

\[\theta_{t+1}=\theta_{t}-\eta\nabla\mathcal{L}(\theta_{t}) \tag{1}\]

We indicate with \(\eta\) the learning rate, and with \(\nabla\mathcal{L}(\theta_{t})\) the gradient of the loss function with respect to the parameters at time step \(t\). Building on this formulation, the second case study of this work explores \(\theta_{0}\), i.e., the initialization of \(\theta\) at time \(t_{0}\). This focus is motivated by the fact that DL models may converge to suboptimal (local) minima or exhibit divergent behavior due to inadequate initialization, particularly on complex loss surfaces characterized by local minima and saddle points. Generally speaking, we investigate and compare classical DL models with their quantum-enhanced counterparts, detailed in the next section, under various initialization conditions. The objective is to examine the stability and convergence behavior of novel techniques in comparison to traditional approaches within convolutional and transformer structures. This concern is particularly relevant in the context of hybrid quantum models, where the interplay between classical and quantum layers may present specific challenges in ensuring stable and reliable convergence. More in detail, in our scenario, the quantum layer/circuit is added to a conventional convolutional or transformer architecture in order to enlarge the feature space by leveraging quantum properties, potentially enhancing the model's performance. However, such a layer may also introduce additional sensitivity and variability. These factors, together with quantum noise and gate fidelity, may significantly impact the stability of such hybrid models. Mathematically speaking, the output state of the quantum layer, reported in Equation 2, can be expressed as a unitary transformation applied to the input state vector \(|\psi_{in}\rangle\):

\[|\psi_{\text{out}}\rangle=U(\theta)|\psi_{\text{in}}\rangle \tag{2}\]

where \(U(\theta)\) is a unitary operator parameterized by \(\theta\), representing the sequence of quantum gates applied to the input state.
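To make the dependence on \(\theta_{0}\) tangible, the following sketch applies one update of Equation 1 to a toy linear model under two of the seeds later used in Section IV-B; the toy model and data are placeholders, not the networks of this paper.

```python
import torch
import torch.nn as nn

def first_update(seed, x, y, lr=1e-4):
    torch.manual_seed(seed)                 # the seed fixes theta_0
    model = nn.Linear(4, 1)                 # toy stand-in for a real network
    loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
    loss.backward()
    with torch.no_grad():
        for p in model.parameters():
            p -= lr * p.grad                # Equation 1: theta_1 = theta_0 - eta * grad
    return torch.cat([p.detach().flatten() for p in model.parameters()])

x, y = torch.randn(8, 4), torch.rand(8, 1).round()
gap = (first_update(0, x, y) - first_update(12, x, y)).abs().max()
print(gap)  # non-zero: different seeds yield different trajectories from step one
```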
Thus, the comparative analysis of classical and quantum-enhanced models in this study aims to provide insights into the benefits and trade-offs associated with integrating quantum components into traditional architectures. Moreover, by examining convergence behavior across multiple seed values, the study explores robustness and stability while offering guidance for the development of future quantum-classical hybrid neural networks.

### _Architectures_

In this last section, we formally describe the quantized and traditional architectures while introducing innovative hybrid quantum Vision Transformers (ViTs), which, to the best of our knowledge, are employed here for the first time on EO tasks. Specifically, this study examines four pairs of architectures: three convolutional architectures, namely NN4EOv1, NN4EOv2, and NN4EOv3 in their traditional forms, and their quantized counterparts, HQNN4EOv1, HQNN4EOv2, and HQNN4EOv3, which were originally derived from Zaidenberg et al. [1] and reduced in the number of convolutional operations in order to understand their behavior. The fourth, ViT-based pair is referred to as ViT and HQViT. These architectures are graphically represented in Figure 1 and described below. Before going into the details of each architecture, we formally introduce their fundamental elements, i.e., the convolution operation employed in the NN4EO convolutional models, the self-attention mechanism used in the ViT, and the elementary quantum layer utilized in their hybrid quantum configurations.

The **2D convolution** operation is the foundational operation in image processing and a key component of well-established CNN architectures. This operation involves a filter (kernel) that slides over the input image to produce an output feature map. The primary objective of convolution is to extract an image's features, such as edges, textures, or patterns. More in detail, given an input image \(I\) and a kernel \(K\), the convolution produces an output feature map \(O\). The value of the output \(O\) at pixel position \((i,j)\) is computed as follows:

\[O(i,j)=\sum_{m=-a}^{a}\sum_{n=-b}^{b}I(i+m,j+n)\cdot K(m+a,n+b) \tag{3}\]

Here \(I(i,j)\) represents the pixel value at coordinates \((i,j)\) in the input image, while \(K(m,n)\) denotes the value at position \((m,n)\) within the kernel, which has dimensions \((2a+1)\times(2b+1)\). The parameters \(a\) and \(b\) represent the half-widths of the kernel in the vertical and horizontal directions, respectively. As the kernel slides along the image, it computes a weighted sum of the pixel values it covers. This process effectively captures local patterns and translates the original image into a more abstract representation, enabling subsequent convolutional layers to learn and extract increasingly complex features. To summarize, the kernel and its parameters significantly influence the performance of the convolution operation, as well as the types of features extracted from the image.

The **self-attention mechanism**, introduced by Vaswani et al. [16], is the key component of the attention block employed in ViT architectures. This mechanism is specifically designed to capture long-range relationships in image data by operating on embedded image or feature patches.
In particular, the self-attention operation allows each patch to relate to all others within the sequence, thereby increasing the DL model's receptive field with respect to conventional local convolutional operations. Mathematically, given an input sequence of embedded patches, self-attention computes three matrices: the query (\(Q\)), key (\(K\)), and value (\(V\)). Subsequently, as detailed in Equation 4, self-attention is computed from the dot-product interactions between queries and keys, scaled by the dimensionality \(\sqrt{d_{k}}\), followed by a softmax operation that generates attention scores, which are then applied to the values (\(V\)).

\[A(Q,K,V)=\text{Softmax}\left(\frac{Q\cdot K^{T}}{\sqrt{d_{k}}}\right)\cdot V \tag{4}\]

However, as detailed in Papa et al. [17], the time and memory complexity of this operation is \(\mathcal{O}(n^{2})\) due to the quadratic cost of computing \(A(Q,K,V)\), making the operation particularly powerful but computationally expensive for large input sizes. Furthermore, this elementary operation can be parallelized into a multi-head self-attention (MSA) mechanism, in which multiple self-attention layers are executed simultaneously. This solution allows the model to focus on different areas/characteristics of the input features at the same time. More in detail, given the input features \(X\), the output features \(X_{out}\) resulting from the execution of an attention block can be mathematically formulated as follows:

\[\begin{split}& X_{MSA}=\text{Norm}(\text{MSA}(X,X))+X\\ & X_{out}=\text{Norm}(\text{FNN}(X_{MSA}))+X_{MSA}\end{split} \tag{5}\]

Here, Norm denotes a normalization process, whereas FNN indicates a feed-forward network.

The **quantum layer** is a key component of quantum neural networks. It consists of a sequence of quantum gates that perform unitary transformations on qubits, allowing the manipulation and entanglement of quantum states. Quantum layers can also be designed to operate as the quantum equivalent of classical neural network layers; such a qubit-based layer enables the encoding, processing, and transformation of input data within quantum circuits. Here, we describe the fundamental elements of the elementary quantum circuit used in this work, whose structure is derived from Zaidenberg et al. [1]. Generally speaking, the core concept of quantum computing is the qubit, a two-level quantum system that can be represented on the Bloch sphere. The Bloch sphere provides a geometric representation of the qubit's state, where any point on the sphere corresponds to a valid qubit state. The north pole represents the state \(|0\rangle\) and the south pole represents \(|1\rangle\); mathematically, a qubit can be expressed as a linear combination of its basis states, as reported in Equation 6.

\[|\psi\rangle=\alpha|0\rangle+\beta|1\rangle \tag{6}\]

Here, \(\alpha\) and \(\beta\) are complex coefficients satisfying the normalization condition \(|\alpha|^{2}+|\beta|^{2}=1\).

Fig. 1: Graphical representation of the four reference models employed in this research study. Each traditional architecture, i.e., NN4EOv1, NN4EOv2, NN4EOv3, and ViT, is composed of a sequence of convolutional/self-attention layers (in orange/yellow) in addition to fully connected layers for classification. Differently, the quantum models, i.e., HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT, are developed by stacking a quantum circuit onto the fully connected layers of the traditional models. Within the same architectural design, double lines are used to distinguish between traditional and hybrid designs, while a Bloch sphere represents the single-qubit circuit.

Moreover, in the single-qubit circuit of Figure 1, several key operations are performed: (1) the qubit is initialized to a specific state, typically \(|0\rangle\). Then, (2) the Hadamard (\(H\)) gate is used in order to create a superposition state. The action of the Hadamard gate on the basis states is defined as:

\[H|0\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle),\quad H|1\rangle=\frac{1}{\sqrt{2}}(|0\rangle-|1\rangle)\]

where the matrix representation of the Hadamard gate is:

\[H=\frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}\]

Subsequently, (3) a rotation gate \(R_{y}(\theta)\) allows manipulation of the qubit's state. In our scenario, the rotation around the Y-axis of the Bloch sphere is given by the following formula:

\[R_{y}(\theta)=e^{-i\frac{\theta}{2}Y}=\cos\left(\frac{\theta}{2}\right)I-i\sin\left(\frac{\theta}{2}\right)Y\]

where \(Y\) is the Pauli-Y matrix:

\[Y=\begin{pmatrix}0&-i\\ i&0\end{pmatrix}\]

Finally, (4) the qubit's state is measured, collapsing its superposition into one of the basis states, i.e., the probability of measuring state \(|0\rangle\) or \(|1\rangle\) is given by:

\[P(0)=|\alpha|^{2},\quad P(1)=|\beta|^{2}\]

Once the fundamental elements of each compared architecture have been introduced, we present the four architectural structures along with their respective quantum configurations. More in detail, we leverage three convolutional models and a ViT model. A block diagram representation of these networks and their quantum counterparts is reported in Figure 1. As can be noticed, all the architectures leverage fully connected layers in order to perform the final classification. More in detail, the three convolutional architectures are composed of concatenations of convolutional blocks. Each block is composed of a two-dimensional convolution with a \(5\times 5\) kernel, followed by a \(2\times 2\) max pooling layer and a ReLU activation function. Additionally, following Zaidenberg et al. [1], NN4EOv2 and NN4EOv3 use two fully connected layers, in which the first matches the flattened output features from the preceding encoding stage and compacts the information into \(64\) output neurons, while the second outputs the binary classification probability through a single neuron. Differently, in NN4EOv1, and similarly to the ViT-based model, a single fully connected layer is used. The CNN-based models differ in the number of subsequent convolutional blocks, as illustrated in Figure 1 (orange blocks), i.e., NN4EOv1, NN4EOv2, and NN4EOv3 are composed of one, two, and three convolutional blocks, counting respectively \(6.6K\), \(18K\), and \(68K\) trainable parameters.
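As an illustration of how such a single-qubit circuit can terminate a classical network, the sketch below wires a fully connected layer to the H–\(R_y(\theta)\)–measure circuit through PennyLane's torch interface. It is a minimal reconstruction under our own naming (`HybridHead`, `qcircuit`); the authors' implementation may differ in detail.

```python
import torch
import torch.nn as nn
import pennylane as qml

dev = qml.device("default.qubit", wires=1)

@qml.qnode(dev, interface="torch")
def qcircuit(theta):
    qml.Hadamard(wires=0)       # step (2): superposition
    qml.RY(theta, wires=0)      # step (3): rotation driven by the classical head
    return qml.probs(wires=0)   # step (4): [P(0), P(1)]

class HybridHead(nn.Module):
    """Classical features -> one neuron -> single-qubit circuit -> class score."""
    def __init__(self, in_features):
        super().__init__()
        self.fc = nn.Linear(in_features, 1)

    def forward(self, feats):
        thetas = self.fc(feats).squeeze(-1)              # one angle per sample
        probs = torch.stack([qcircuit(t) for t in thetas])
        return probs[:, 1]                               # P(|1>) as the binary output

head = HybridHead(64)
print(head(torch.randn(2, 64)))  # two scores in [0, 1]
```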
Furthermore, the ViT model implemented for this study has been intentionally kept simple, because the main objective of the third study proposed in this work is not to develop a highly complex model but rather to demonstrate the potential effectiveness of integrating quantum circuits with the ViT structure for EO tasks. More in detail, the input image is divided into \(8\times 8\) patches, which are processed by a Multi-Head Self-Attention (MSA) layer with two attention heads and finally fed into a single fully connected layer that takes the encoded features as input and returns a single prediction with a single neuron. This minimalistic design ensures a lightweight model architecture, resulting in fewer than \(34K\) trainable parameters. On the other hand, the quantum models, i.e., HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT, leverage the same architectural structure as their traditional counterparts, i.e., NN4EOv1, NN4EOv2, NN4EOv3, and ViT respectively, while adding the previously introduced single-qubit circuit in the final classification stage. The objective of this integration is to introduce quantum processing capabilities into traditional models, aiming to exploit quantum effects such as superposition and entanglement to potentially enhance classification performance on EO tasks.

## IV Experimental Setup

In this section, we detail the experimental setup used to evaluate the studies described above in the EO domain. The experimental setup is divided into two parts. First, Section IV-A outlines the characteristics of the training dataset and the preprocessing steps applied to it. Then, Section IV-B describes the implementation details, including the software libraries, training protocols, and hyperparameters used to train and evaluate the models.

### _Training Dataset_

The study was conducted in the EO application scenario. More in detail, quantum architectures have been investigated on the image classification task, specifically the identification of scenes in the EuroSAT dataset [3]. This dataset is composed of Sentinel-2 data covering 13 spectral bands and is divided into \(10\) classes, with a total of \(27000\) labeled and georeferenced images. Moreover, following the training protocol proposed by Zaidenberg et al. [1], and in order to simplify the task given the innovative use of hybrid quantum vision transformers in EO, the number of classes has been reduced to two at a time, resulting in multiple binary classification tasks. At training time, the dataset has been split into training and validation sets with a validation fraction of \(20\%\). Out of the 13 available bands, only the RGB bands have been selected.

### _Implementation Details_

The study has been implemented using the PyTorch (v12.4.1) deep learning API. All models have been trained from scratch, following the training protocol outlined in [1], and the Binary Cross-Entropy loss function has been used to perform binary classification across all possible pairwise combinations of the dataset's 10 classes. We identify these classes with numbers ranging from \(0\) to \(9\), corresponding to highway (0), forest (1), sea lake (2), herbaceous vegetation (3), river (4), industrial (5), residential (6), pasture (7), permanent crop (8), and annual crop (9).
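A possible reconstruction of this pairwise setup is sketched below using torchvision's EuroSAT loader (the RGB version of the dataset); note that class indices follow torchvision's alphabetical folder ordering, which does not necessarily match the 0-9 numbering above, so the mapping here is illustrative.

```python
import torch
from torch.utils.data import Subset, random_split
from torchvision import datasets, transforms

ds = datasets.EuroSAT(root="data", transform=transforms.ToTensor(), download=True)

def binary_pair(dataset, class_a, class_b):
    # Keep only the samples belonging to the two classes of the current task;
    # labels would still need remapping to {0, 1} before the BCE loss.
    idx = [i for i, (_, y) in enumerate(dataset.samples) if y in (class_a, class_b)]
    return Subset(dataset, idx)

pair = binary_pair(ds, 0, 1)
n_val = int(0.2 * len(pair))                       # 20% validation split
train_set, val_set = random_split(pair, [len(pair) - n_val, n_val])
```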
Specifically, the Adam optimizer [18] has been employed with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and an initial learning rate of \(0.0001\), for a total of \(20\) epochs with a batch size of \(1\) and no data augmentation applied to the training dataset. Additionally, preliminary studies involving quantum circuits have been conducted using Qiskit (v1.2.0), while PennyLane (v0.28.0) has been employed to facilitate GPU support for quantum computations. For the robustness investigations, multi-start experiments have been performed using \(k=10\) distinct seed values, specifically: \(0\), \(12\), \(123\), \(1000\), \(1234\), \(10000\), \(12345\), \(100000\), \(123456\), and \(1234567\). Once the training phase has concluded, we quantitatively evaluate the trained models using the accuracy metric (\(Acc\)), which is widely adopted in the literature. Moreover, we evaluate the stability of the reference models through the accuracy variance (\(\sigma^{2}(Acc)\)) across the \(k\) trainings/seeds, as reported below.

\[\sigma^{2}(Acc)=\frac{1}{k}\sum_{i=1}^{k}(Acc_{i}-\overline{Acc})^{2}\]

where \(Acc_{i}\) is the accuracy of the \(i\)-th run, and \(\overline{Acc}\) is the mean accuracy across all runs. For completeness, we note that the lower the variance, the higher the stability, i.e., a highly stable model will exhibit minimal accuracy variability and low variance, suggesting that the model's training dynamics are robust to random factors.

## V Results and Discussion

This section quantitatively analyzes and compares the performance of eight models, namely four quantum models and their respective non-quantum configurations. More in detail, Section V-A compares well-known quantum libraries to investigate their impact on QNN training. Subsequently, Section V-B analyzes the impact of varying initialization values on model performance and stability. Lastly, Section V-C presents a comparative analysis between HQViT and ViT for EO classification tasks.

### _Comparison of Quantum Libraries_

In this first set of experiments, we compare the performance of well-known quantum libraries. As introduced in Section IV-B, we train the four reference hybrid quantum models using Qiskit (v1.2.0) and PennyLane (v0.28.0) under the same training configuration and a fixed seed value equal to 1699806. Due to the high number of experiments, we report the obtained results in the attached Appendix. More in detail, we report Tables III, V, VII, and IX for HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT in the Qiskit configuration, and Tables IV, VI, VIII, and X for the same architectures in the PennyLane configuration. Moreover, in order to give a broader overview, Table I summarizes the performed tests, presenting a comparison between quantum models trained using the two quantum computing libraries, i.e., Qiskit and PennyLane. More in detail, we report the average accuracy (\(\overline{Acc}\)) and the average epoch at which the best model was saved during training (\(k\)*). The latter gives an overview of the number of epochs a model needs to reach convergence (a local minimum). Based on the reported results, both the Qiskit and PennyLane libraries exhibit strong performance across all models, with only minor variations in accuracy and \(k\)*.
For instance, on the HQNN4EOv2 model, PennyLane achieves slightly better results in both accuracy (\(92.51\%\)) and computational efficiency (\(k\)* = \(16.11\)) compared to Qiskit, which achieves an accuracy of \(92.35\%\) and a slightly higher \(k\)* value of \(16.36\). The HQNN4EOv1, HQNN4EOv3, and HQViT models present a different scenario, where Qiskit slightly surpasses PennyLane in terms of accuracy, achieving the highest scores. However, PennyLane remains competitive, with accuracies of \(91.80\%\), \(93.15\%\), and \(87.77\%\) against \(91.93\%\), \(93.45\%\), and \(87.77\%\) for HQNN4EOv1, HQNN4EOv3, and HQViT respectively, while achieving better convergence performance, i.e., a lower \(k\)* than Qiskit. These results suggest that while Qiskit performs slightly better in terms of accuracy in certain cases, PennyLane consistently demonstrates faster convergence behavior. Furthermore, a more detailed analysis of the tables presented in the Appendix, i.e., comparing each pair of trained classes among the four compared quantum models, reveals that across the 184 training sessions (46 possible binary class configurations for each quantum-enhanced model) Qiskit and PennyLane obtain similar performances. More in detail, Qiskit outperforms PennyLane in \(45.6\%\) (\(84/184\)) of the sessions and PennyLane outperforms Qiskit in \(44.6\%\), while in the remaining sessions both frameworks yield identical accuracy results. In conclusion, both Qiskit and PennyLane perform well in terms of accuracy for EO classification tasks. However, PennyLane shows a potential advantage in computational efficiency, making it a valuable tool for scaling quantum models in resource-constrained environments. The latter claim is motivated by the fact that PennyLane achieves a consistent advantage in terms of \(k\)*, highlighting its potential for more efficient execution, especially when dealing with larger quantum circuits or more complex tasks. Additionally, PennyLane's integration with PyTorch and its support for GPU acceleration further enhance its suitability for hybrid quantum-classical learning. These features suggest that PennyLane may be more advantageous in contexts where computational resources are limited or efficiency is a key priority.

### _Study on the Stability Towards Initialization Values_

In this section, which relates to the second case study of this work, we investigate and compare the stability and estimation performance of the reference models. As in the previous section, due to the extensive number of conducted experiments, we report the average class-wise results in the Appendix. Specifically, the results for NN4EOv1, NN4EOv2, NN4EOv3, and ViT are detailed in Tables XIX, XXI, XXIII, and XXV, respectively. Similarly, the outcomes for HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT are provided in Tables XX, XXII, XXIV, and XXVI, respectively. However, in order to provide a more general overview, Table II summarizes all the experiments, reporting the mean accuracy (\(\overline{Acc}\)) and mean variance (\(\overline{\sigma}^{2}\)) across all classes over the \(k=10\) seeds.
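For clarity, the statistics of Table II can be reproduced from a per-seed, per-task accuracy matrix as in the sketch below; the random values stand in for actual results, and the shape follows the paper's 10 seeds and 46 pairwise tasks.

```python
import numpy as np

# acc[s, c]: accuracy (in %) of seed s on class pair c -- placeholder values
rng = np.random.default_rng(0)
acc = 90.0 + 5.0 * rng.random((10, 46))     # k = 10 seeds, 46 binary tasks

var_per_pair = acc.var(axis=0)              # sigma^2(Acc) per class pair, over seeds
mean_acc = acc.mean()                       # \overline{Acc} reported in Table II
mean_var = var_per_pair.mean()              # \overline{sigma}^2 reported in Table II
```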
### _Study on the Stability Towards Initialization Values_

In this section, related to the second case study of this work, we investigate and compare the stability and estimation performance of the reference models. Similar to the previous section, due to the extensive number of conducted experiments, we report the average class-wise results in the Appendix. Specifically, the results for NN4EOv1, NN4EOv2, NN4EOv3, and ViT are detailed in Tables XIX, XXI, XXIII, and XXV, respectively. Similarly, the outcomes for HQNN4EOv1, HQNN4EOv2, HQNN4EOv3, and HQViT are provided in Tables XX, XXII, XXIV, and XXVI, respectively. However, in order to provide a more general overview, Table II summarizes all the experiments, reporting the mean accuracy (\\(\\overline{Acc}\\)) and mean variance (\\(\\overline{\\sigma}^{2}\\)) across all classes over the \\(k=10\\) seeds.

Based on the obtained results, it can be noted that quantum-based models, specifically HQNN4EOv3 and HQViT, demonstrate advantages in terms of accuracy, achieving mean accuracies of \\(93.47\\%\\) and \\(88.78\\%\\), respectively, i.e., a \\(0.5\\%\\) boost when compared to their traditional counterparts. Moreover, HQNN4EOv1 also achieves small improvements with respect to its traditional variant. This improvement may indicate that hybrid quantum models, even if with a small boost, can enhance model performance. Furthermore, even if the improvement is limited, the quantum model obtains higher estimates with a lower variance compared to its traditional configuration. Similarly, the HQNN4EOv3 model not only shows superior accuracy but also exhibits reduced variance compared to its traditional version. Such results may suggest that quantum-enhanced models can provide more consistent and stable performance across multiple initialization values. However, it is important to acknowledge that the benefits of quantum models are not uniform across all compared architectures. For instance, HQViT, while showing improved prediction performance, exhibits a higher variance when compared with its traditional counterpart. This observation may underscore the need for careful parameter tuning when incorporating quantum elements. Despite these challenges, the results reported in Table II show that even the simple integration of a single qubit can yield performance gains, suggesting that quantum layers, even in their simplest forms, can enhance traditional models. In summary, we can assess that a careful design of the initialization and optimization strategies is essential to mitigate instability and achieve reliable performance.

### _Towards Hybrid Quantum Vision Transformers for Earth Observation_

In this section, we report the quantitative evaluation of the experiments performed for the third case study of this work. More in detail, we investigate the potential of HQViT architectures for EO tasks by comparing the performance of a traditional ViT model with its quantum-enhanced counterpart; both models are detailed in Section III. The objective of this study, inspired by Zaidenberg et al. [1] on CNN models, is to determine whether the integration of quantum circuits, even in their simplest structure, can positively contribute over traditional ViT approaches. However, due to the large number of performed experiments, we report the class-wise results in the Appendix in Tables XVII and XVIII. Moreover, for a quicker overview of the models' performance, we refer to Table II. Based on the obtained results, it can be noticed that the average accuracy (\\(\\overline{Acc}\\)) of the HQViT model is marginally higher (\\(88.78\\%\\)) when compared to the traditional ViT model (\\(88.37\\%\\)). This finding may indicate a real, albeit modest, improvement in the performance of the quantum-enhanced ViT structure. Consequently, the results suggest that even a minimal quantum integration can introduce qualitative improvements, potentially paving the way for more sophisticated and efficient quantum-augmented models. In conclusion, this third research study, and the respective set of experiments computed over minimal ViT-based architectural setups, is intended as a proof of concept to highlight the potential of quantum computing in machine learning models.
Moreover, the HQViT model shows that quantum-enhanced vision transformers can positively influence ViT-based models, encouraging future research into advanced quantum architectures and their integration into deep learning frameworks.

## VI Conclusions and Future Works

This study investigates less-explored aspects of quantum DL applications for EO tasks. More in detail, building upon Zaidenberg et al. [1], three case studies are investigated. Firstly, we compare the convergence behavior of well-known quantum libraries, i.e., Qiskit and PennyLane, in order to understand their potential in training hybrid quantum models. This first case study reveals that both libraries provide benefits for QNN models, achieving comparable classification performance and convergence behavior; however, PennyLane integrates easily with PyTorch GPU libraries, which is advantageous for researchers. Secondly, we investigate the sensitivity/stability of quantum models and their traditional counterparts with respect to the initialization values (seeds). This second case study reveals that both types of architecture need a careful design of the initialization hyperparameters in order to mitigate possible instabilities; however, over \\(k=10\\) different trials, quantum models show higher (averaged) accuracy values with comparable (averaged) variance. These results underline the effective contribution of quantum structures to hybrid architectures, even with elementary circuits, i.e., in our case, a single-qubit module. Finally, the third case study investigates the use of such a single-qubit circuit embedded into a transformer-based architecture. More in detail, by combining a simple (2-head) multi-head attention layer with the previously introduced circuit, we show that, even with a higher variance due to the initialization values, the HQViT model achieves an average boost of almost \\(0.5\\%\\) when compared with its traditional counterpart. This finding pushes the boundaries of prior research on classical convolution-based models by demonstrating the advantages of hybrid quantum architectures in complex real-world applications like EO. In summary, this study provides evidence that quantum computing libraries and quantum circuits may offer significant advantages even with simple DL architectural structures. Additionally, the successful integration of quantum circuits into ViT models for EO tasks may open new research trends for further exploration. Consequently, building on these findings, future research may focus on investigating whether more extensive quantum circuits can reduce the variance with respect to the initialization values, leading to more stable models, and on optimizing hybrid quantum ViT architectures with more EO-oriented structures for more complex EO applications, in order to exploit this kind of global processing with respect to convolution-based models.

## References

* [1] D. A. Zaidenberg, A. Sebastianelli, D. Spiller, B. Le Saux, and S. L. Ullo (2021) Advantages and bottlenecks of quantum machine learning for remote sensing. In 2021 IEEE International Geoscience and Remote Sensing Symposium IGARSS, pp. 5680-5683.
* [2] S. Otgonbaatar, M. Datcu, X. X. Zhu, and D. Kranzlmuller (2022) Quantum machine learning for real-world, large scale datasets with applications in earth observation.
* [3] S. Otgonbaatar, G. Schwarz, M. Datcu, and D. Kranzlmuller (2023) Quantum transfer learning for real-world, small, and high-dimensional remotely sensed datasets. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing.
* [4] S. Otgonbaatar and M. Datcu (2021) A quantum annealer for subset feature selection and the classification of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 14, pp. 7057-7065.
* [5] S. Otgonbaatar and D. Kranzlmuller (2023) Quantum-inspired tensor network for earth science. In IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, pp. 788-791.
TABLE VII: HQNN4EOv3 - PennyLane - per class-pair test accuracy, with the epoch of the best saved model in parentheses; average test accuracy 93.15, best model saved on average at epoch 15.46.

TABLE IX: HQViT - Qiskit - per class-pair test accuracy, with the epoch of the best saved model in parentheses; average test accuracy 87.95, best model saved on average at epoch 16.25.

TABLE XII: NN4EOv1 - mean (max, min) per class-pair test accuracy over 10 seeds.

TABLE XVI: HQNN4EOv3 - mean (max, min) per class-pair test accuracy over 10 seeds.

TABLE XIV: ViT - variance of per class-pair test accuracy across 10 seeds.

TABLE XXI and companion tables: variance of per class-pair test accuracy across 10 seeds for the remaining model configurations.
Quantum computing has introduced novel perspectives for tackling and improving machine learning tasks. Moreover, the integration of quantum technologies with well-known deep learning (DL) architectures has emerged as a research trend gaining traction across various domains, such as Earth Observation (EO) and many other research fields. However, prior related works in the EO literature have mainly focused on convolutional architectural advancements, leaving several essential topics unexplored. Consequently, this research investigates, through three case studies, fundamental aspects of hybrid quantum machine models for EO tasks, aiming to provide a solid groundwork for future research towards more adequate simulations, looking at the post-NISQ era. More in detail, we firstly (1) investigate how different quantum libraries behave when training hybrid quantum models, assessing their computational efficiency and effectiveness. Secondly, (2) we analyze the stability/sensitivity to initialization values (i.e., seed values) of both traditional models and their quantum-enhanced counterparts. Finally, (3) we explore the benefits of hybrid quantum attention-based models in EO applications, examining how integrating quantum circuits into ViTs can improve model performance.

Quantum Computing, Quantum Machine Learning, Earth Observation, Remote Sensing
# Computationally-Efficient Climate Predictions using Multi-Fidelity Surrogate Modelling

Ben Hudson, Frederik Nijweide, Isaac Sebenius

Computer Lab, University of Cambridge

{bh511, fpijn2, iss31}@cam.ac.uk

November 3, 2021

## I Introduction

From predicting hourly weather forecasts to tracking global temperature changes over time, accurately modelling the Earth's climate is a pressing task with wide-ranging impact. The climate science community has developed many models to simulate and predict weather and climatological dynamics. However, each model must balance geographical scale, spatial/temporal resolution, and computational cost. Global Climate Models (GCMs) model climate dynamics (e.g. temperature and wind) for the entire planet at once [1]. However, the computational cost of modelling the global climate is immense [2]; thus, GCMs are restricted to a coarse spatial and temporal resolution. Regional Climate Models (RCMs) complement GCMs as they simulate the climate system over a particular region of the globe, but in much finer detail.

Multi-fidelity surrogate models based on Gaussian processes (GPs) offer a unique opportunity to break the trade-off between simulation scale, resolution, and cost. These models can infer the high-fidelity predictions over a domain by learning the relationship between the low- and high-fidelity models based on a handful of samples from both models. Existing work suggests such techniques are promising for climate modelling [3].

In this paper, we evaluate the efficacy of multi-fidelity surrogate modelling to infer high-resolution temperature predictions in a mountainous, coastal region of Peru. We analyse a dataset of pre-computed GCM (low-fidelity) and RCM (high-fidelity) predictions over this region. Beginning with low-fidelity temperature predictions for the entire region, we simulate running the high-fidelity model over square sub-regions of the region of interest, therefore acquiring the high-fidelity data on a batch-wise basis. As there is a cost associated with obtaining this high-fidelity data (the computational cost of running the RCM), we aim to produce accurate high-fidelity temperature predictions while remaining within a certain budget. We investigate if multi-fidelity models confer advantages over single-fidelity ones. We also explore the impact of the choice of acquisition function on model performance, and whether there are certain geographical regions over which it is especially important or unimportant to have high-fidelity data.

## II Preliminaries

### _Gaussian processes_

Gaussian processes are a popular statistical tool for emulating black-box functions, such as complex simulations. Intuitively, they can be understood as a distribution over functions, or an infinite collection of stochastic variables where any \\(N\\) samples form an \\(N\\)-dimensional multivariate Gaussian distribution. A Gaussian process is parameterised by a mean function \\(m(\\mathbf{x})\\), evaluated at each of \\(N\\) input locations \\(\\mathbf{x}\\), and a kernel function \\(\\kappa(\\mathbf{x},\\mathbf{x}^{\\prime})\\) evaluated for each combination of input locations. Many choices can be made for the kernel function, but the RBF kernel is a popular choice [4, 5]. The equation for this kernel is given by \\[\\kappa(\\mathbf{x},\\mathbf{x}^{\\prime})=\\sigma^{2}\\exp\\left(-\\frac{1}{2\\ell^{2}}\\left(\\mathbf{x}-\\mathbf{x}^{\\prime}\\right)^{T}\\left(\\mathbf{x}-\\mathbf{x}^{\\prime}\\right)\\right), \\tag{1}\\] where \\(\\sigma\\) scales the output of the kernel, and \\(\\ell\\) is the length scale, which determines how quickly the correlation decreases with distance.
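For concreteness, the kernel of equation (1) can be implemented in a few lines of plain NumPy; the hyperparameter values are illustrative defaults, not the values fitted in our experiments.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0, lengthscale=1.0):
    """RBF kernel of equation (1), evaluated for row-stacked inputs."""
    # Squared Euclidean distances between all pairs of rows.
    sq_dists = (np.sum(X1**2, axis=1)[:, None]
                + np.sum(X2**2, axis=1)[None, :]
                - 2.0 * X1 @ X2.T)
    return sigma**2 * np.exp(-0.5 * sq_dists / lengthscale**2)

X = np.random.rand(5, 3)   # e.g. (latitude, longitude, altitude) rows
K = rbf_kernel(X, X)       # 5x5 covariance matrix
```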
### _Simulators and Emulators_

Simulators are useful for modelling climate and weather patterns (see [6]). However, many (climate) simulations are very computationally intensive [7]. Frequently, this makes them impractical for applications that require data to be updated often. However, it is possible to approximate the simulator's output with reasonable accuracy using statistical emulation, thus reducing the number of times the simulator must be evaluated. This is achieved by fitting a statistical model to the relation between inputs and outputs [8]. Unlike simulators, which usually output single-valued functions over the input space, these models output probability distributions over the input space. For example, a Gaussian process-based emulator outputs a normal distribution (characterised by a mean and standard deviation) for a point in the input space.

### _Multi-fidelity Modelling_

When using simulators, there is often a trade-off between simulation cost and accuracy [9]. "Low-fidelity" data can be produced easily using inexpensive and approximate simulation methods, yet often deviates significantly from reality. In contrast, "high-fidelity" data closely resembles the real-world system. This can be gathered from real-world measurements or computationally expensive simulations. Multi-fidelity modelling provides a useful framework for combining the accuracy of high-fidelity data with the low cost of low-fidelity data. In the simplest form, one can emulate high-fidelity data by scaling the low-fidelity data and adding an error term [10]. Mathematically, \\[f_{\\text{high}}\\left(x\\right)=f_{\\text{err}}\\left(x\\right)+\\rho f_{\\text{low}}\\left(x\\right). \\tag{2}\\] When all the terms are independent Gaussian processes, we can perform mathematical operations, like addition, because the terms are multivariate normal distributions. When a nonlinear relationship exists between high-fidelity and low-fidelity data, it can be modelled using nonlinear information fusion [10], given by \\[f_{\\text{high}}\\left(x\\right)=\\rho\\left(f_{\\text{low}}\\left(x\\right)\\right)+\\delta(x). \\tag{3}\\] These concepts behind multi-fidelity modelling can be extended to use more than two data sources, each having distinct costs and accuracies, including real-world data and other emulators [11].
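Below is a minimal sketch of how the linear multi-fidelity model of equation (2) can be instantiated with Emukit and GPy, the libraries used for our experiments (Section V); the arrays and kernel choices here are illustrative placeholders rather than the exact experimental configuration.

```python
import numpy as np
import GPy
from emukit.multi_fidelity.convert_lists_to_array import convert_xy_lists_to_arrays
from emukit.multi_fidelity.kernels import LinearMultiFidelityKernel
from emukit.multi_fidelity.models import GPyLinearMultiFidelityModel

# Placeholder inputs: (latitude, longitude, altitude) rows per fidelity.
x_low, y_low = np.random.rand(100, 3), np.random.rand(100, 1)
x_high, y_high = np.random.rand(20, 3), np.random.rand(20, 1)

# Emukit appends a fidelity index as an extra input column.
X, Y = convert_xy_lists_to_arrays([x_low, x_high], [y_low, y_high])

# f_high(x) = rho * f_low(x) + f_err(x), one GP per term (equation (2)).
kernel = LinearMultiFidelityKernel([GPy.kern.RBF(3), GPy.kern.RBF(3)])
model = GPyLinearMultiFidelityModel(X, Y, kernel, n_fidelities=2)
model.optimize()
```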
### _Acquisition Functions_

One fundamental problem in building a statistical emulator is deciding which new locations in the input space should be expensively evaluated. This is solved using an acquisition function: given a model with a known set of inputs \\(\\mathbf{x}\\) and an acquisition function \\(a(\\mathbf{x})\\), the next point to be evaluated is \\(\\mathbf{x}_{n+1}=\\operatorname*{argmax}_{\\mathbf{x}\\in\\mathbb{X}}a(\\mathbf{x})\\), where \\(\\mathbb{X}\\) denotes the total possible input space. Many acquisition functions exist, each with its own assumptions and optimization metrics [12, 13]. Two acquisition functions are relevant to our work.

#### II-D1 Model Variance

This acquisition function selects sequential points based on the model's uncertainty: each new selected point \\(x_{N+1}\\) corresponds to the one with the highest variance, given by \\[a_{MV,x}=\\sigma^{2}(x). \\tag{4}\\]

#### II-D2 Integrated Variance Reduction

Rather than choosing new points at which the model has the highest variance, the integrated variance reduction acquisition function aims to sample a new point which reduces the overall uncertainty of the model. More formally, one can approximate the integrated variance reduction as \\[a_{IVR,x}=\\frac{1}{\\left|X\\right|}\\sum_{x_{i}\\in X}\\left[\\sigma^{2}\\left(x_{i}\\right)-\\sigma^{2}\\left(x_{i};x\\right)\\right], \\tag{5}\\] where \\(X\\) is the set of test points used in the estimation.
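The two criteria can be sketched as follows for a model exposing a GPy-style interface (`predict` and `posterior_covariance_between_points`); the rank-one update identity used to evaluate \\(\\sigma^{2}(x_{i};x)\\) without refitting is an assumption of this sketch, not a detail prescribed in the text.

```python
import numpy as np

def model_variance(model, X_cand):
    """Equation (4): predictive variance at each candidate point."""
    _, var = model.predict(X_cand)
    return var.ravel()

def integrated_variance_reduction(model, X_test, x_new):
    """Estimate of equation (5) for a single candidate x_new.

    For a GP, observing x_new lowers the variance at x_i by
    cov(x_i, x_new)^2 / (sigma^2(x_new) + noise), so no refit is needed.
    """
    _, var_new = model.predict(x_new)  # includes likelihood noise
    cov = model.posterior_covariance_between_points(X_test, x_new)
    return float(np.mean(cov ** 2 / var_new))
```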
## III Dataset

We base our work on the multi-fidelity dataset provided by Hosking [14], which comprises low-fidelity and high-fidelity climate model outputs over a region of Peru (shown in Figure 4 in Appendix A). The high-fidelity data is available at \\(40\\times\\) higher spatial resolution than the low-fidelity data. Specifically, the following data is available for each month from 1980 through 2018:

* _High-fidelity temperature predictions_. The output from the RCM. This is the "target" that we are interested in modelling.
* _High-fidelity elevation data_. It is assumed that this elevation data remains constant over time.
* _Low-fidelity temperature predictions_. The output from the GCM. This is inexpensive to compute, but predictions are at a coarser scale than its high-fidelity counterpart.
* _Low-fidelity wind predictions_. The output from the GCM. It is available in North-South and East-West wind components.

## IV Methods

We imagine having access to two climate models: a GCM, which can produce low-fidelity predictions of the temperature and wind speed quickly and inexpensively over the entire region of interest, and an RCM, which can produce high-fidelity predictions of the near-ground temperature using boundary conditions set by the GCM. However, the cost of running the RCM scales proportionally with the area covered by the simulation, and it is therefore prohibitively expensive to run over the entire region of interest. Instead, we run the RCM over several _sub_-regions to remain within budget, and combine these high-fidelity predictions with complete elevation data (and optionally low-fidelity GCM predictions) to infer the RCM's high-fidelity predictions over the remainder of the region of interest. Figure 1 shows two examples of sub-regions. Note that the sub-region on the right covers twice the area of the sub-region on the left. Thus, it would incur twice the cost to run the RCM over this region compared to the other.

Fig. 1: The size of the sub-regions in which high-fidelity data is acquired. The smaller is about half the area of the larger region.

### _Models_

We compare three different models, summarised in Table I. The first, \\(HF\\), infers the high-fidelity temperature at a given location based only on the latitude, longitude and altitude of that location. The second, \\(LF\\to HF\\), infers the high-fidelity temperature based on the latitude, longitude, altitude, low-fidelity temperature and wind speed at that location. Finally, \\(MF\\) is a linear multi-fidelity model, as shown in equation (2). This model infers the low-fidelity temperature and high-fidelity temperature at a given location based on the latitude, longitude, and altitude at that location.

TABLE I: Summary of the models evaluated.

| Model | Input | Output |
| --- | --- | --- |
| \\(HF\\) | Latitude, Longitude, Altitude | HF Temp |
| \\(LF\\to HF\\) | Latitude, Longitude, Altitude, LF Wind, LF Temp | HF Temp |
| \\(MF\\) | Latitude, Longitude, Altitude | LF/HF Temp |

### _Batch Acquisition Function_

The crux of the problem is selecting where to run the costly RCM. This corresponds to selecting a batch of \\(n\\) promising points to evaluate expensively, a task known as _batch acquisition_. Batch acquisition is a well-studied topic in statistical modelling, especially in Bayesian Optimisation [15, 16, 17]. Some approaches jointly optimise these points' locations to maximise the acquisition function's value. In practice this is computationally intractable, so a heuristic is often used to select the points sequentially. In our problem, the points in a batch are constrained to a small grid (see Figure 1). For each iteration, we would like to select the grid of points to evaluate expensively such that the improvement of the model is maximised.

#### IV-B1 Batch-wise Total Model Variance

As a baseline method, we propose a batch-wise acquisition function extending the popular model variance function defined in equation (4), based on Uncertainty Sampling. To do so, we evaluate the total model variance across a batch of points. This is given by \\[a_{MV,B,\\Sigma}=\\sum_{b\\in B}\\sigma^{2}(b), \\tag{6}\\] where \\(B\\) is a set of promising points. When \\(B\\) is a grid of points describing a sub-region, this heuristic selects the sub-region where the model variance is highest. As we are concerned with acquiring points in the high-fidelity model, we only evaluate the acquisition function over the high-fidelity component in the multi-fidelity case.

#### IV-B2 Batch-wise Maximum Integrated Variance Reduction

To improve on \\(a_{MV,B,\\Sigma}\\), we propose an extension of the integrated variance reduction acquisition function defined in equation (5) for the batch acquisition scenario. This proposed heuristic evaluates the expected variance reduction when integrating each point in the batch at a set of test points and returns the maximum reduction across the batch for each test point. Mathematically, this is given by \\[a_{IVR,B,\\text{max}}=\\frac{1}{|X|}\\sum_{x_{i}\\in X}\\max_{b\\in B}\\left[\\sigma^{2}(x_{i})-\\sigma^{2}(x_{i};b)\\right], \\tag{7}\\] where \\(B\\) is a set of promising points and \\(X\\) is the set of points at which to evaluate the variance. This heuristic approximates the expected variance reduction of integrating an entire batch of points. Figure 2 shows a demonstration of the heuristic in a simple scenario. Figure 2b shows how the estimated variance reduction of integrating a particular batch \\(B=\\{b_{0},b_{1},b_{2},b_{3}\\}\\) compares to the realised variance reduction. Note that \\(IVR(b)=\\sigma^{2}(x)-\\sigma^{2}(x;b)\\). In the example presented, our heuristic underestimates the total variance reduction of integrating \\(B\\) by 19%. An analytic comparison of this heuristic to its exact counterpart is required to establish bounds on the estimation error. This is left to future work. When \\(B\\) is a grid of points describing a sub-region, this heuristic estimates the sub-region where the expected variance reduction of integrating the batch of points is maximised. Again, we only evaluate the acquisition function over the high-fidelity component in the multi-fidelity case.
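Both batch heuristics can be sketched under the same GPy-style model assumption as before; `B` denotes the grid of points describing one candidate sub-region, and the rank-one variance-reduction identity is again an assumption of the sketch.

```python
import numpy as np

def batch_model_variance(model, B):
    """Equation (6): total predictive variance over a sub-region B."""
    _, var = model.predict(B)
    return float(np.sum(var))

def batch_ivr_max(model, X_test, B):
    """Equation (7): mean over test points of the largest single-point
    variance reduction achievable within the batch B."""
    _, var_B = model.predict(B)                                  # (|B|, 1)
    cov = model.posterior_covariance_between_points(X_test, B)   # (|X|, |B|)
    reduction = cov ** 2 / var_B.T                               # pairwise reductions
    return float(np.mean(np.max(reduction, axis=1)))

# The next sub-region to run the RCM over is the best-scoring candidate:
# best = max(candidate_regions, key=lambda B: batch_ivr_max(model, X_test, B))
```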
## V Experiments

We evaluate the models in Table I. Each model acquires sub-regions of high-fidelity data according to the acquisition functions described in the previous sections (Section IV-B1 and Section IV-B2). We evaluate two differently-sized sub-regions, shown in Figure 1. The sub-regions are squares comprising 121 points and 225 points, respectively. We limit the "computational budget" to a total of 500 points (excluding low-fidelity points) - about 6% of the region of interest. Therefore, approximately four small sub-regions or two large sub-regions can be acquired. We initialise the \\(HF\\) and \\(LF\\to HF\\) models with one randomly placed high-fidelity sub-region. We initialise the \\(MF\\) model with 100 points from the low-fidelity model and one randomly placed high-fidelity sub-region. We run each model 80 times and record the MSE of its predictions compared to the output of the RCM computed over the entire region of interest (the target). We implemented the models using Emukit [18] and GPy [19], and ran all experiments on virtual machines with 8 CPUs (Intel Cascade Lake generation) and 32 GB of RAM hosted on Google Cloud Compute Engine.
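The experimental protocol can be summarised by the following self-contained toy loop (a 1-D domain, with a sine function standing in for the RCM and a plain GPy regression as the surrogate); it is a schematic illustration of the budgeted batch acquisition, not the actual experiment code.

```python
import numpy as np
import GPy

rng = np.random.default_rng(0)

def run_rcm(X):
    """Toy stand-in for the expensive high-fidelity simulator."""
    return np.sin(3 * X) + 0.1 * rng.standard_normal(X.shape)

X_test = np.linspace(0, 1, 200)[:, None]
regions = [X_test[i:i + 10] for i in range(0, 200, 10)]  # candidate sub-regions

# Initialise with one random sub-region, then spend the budget greedily.
X_train = regions.pop(int(rng.integers(len(regions))))
Y_train = run_rcm(X_train)
budget = 50

while len(X_train) < budget and regions:
    model = GPy.models.GPRegression(X_train, Y_train)
    model.optimize()

    def score(B):  # batch IVR heuristic of equation (7)
        _, v = model.predict(B)
        c = model.posterior_covariance_between_points(X_test, B)
        return np.mean(np.max(c ** 2 / v.T, axis=1))

    B = regions.pop(int(np.argmax([score(B) for B in regions])))
    X_train = np.vstack([X_train, B])
    Y_train = np.vstack([Y_train, run_rcm(B)])
```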
## VI Results & Discussion

Figure 3 shows an example of a prediction produced by the multi-fidelity model, including the low-fidelity points used, the high-fidelity sub-regions acquired, the statistical model's prediction, and the regional climate model's output over the entire region. The MSE for this example is \\(1.89^{\\circ}\\)C\\({}^{2}\\). Table II summarises the models' performance once the computational budget is reached. The performance of all models as a function of the number of points acquired is shown in Figure 6 in Appendix A. The multi-fidelity model using the batch-wise integrated variance reduction acquisition function, \\(a_{IVR,B,\\max}\\), and the small sub-region performs best, achieving an average MSE of \\(15.621^{\\circ}\\)C\\({}^{2}\\).

TABLE II: Experimental results showing all combinations of model, acquisition function, and sub-region size. The multi-fidelity model using the batch-wise integrated variance reduction acquisition function and the small sub-region performs best.

| Model | Acquisition | Region Size | MSE \\(\\mu\\) | MSE \\(\\sigma\\) |
| --- | --- | --- | --- | --- |
| HF | \\(a_{IVR,B,\\max}\\) | Small | 115.863 | 30.295 |
| HF | \\(a_{IVR,B,\\max}\\) | Large | 138.184 | 25.079 |
| HF | \\(a_{MV,B,\\Sigma}\\) | Small | 167.076 | 25.116 |
| HF | \\(a_{MV,B,\\Sigma}\\) | Large | 170.951 | 27.235 |
| LF\\(\\to\\)HF | \\(a_{IVR,B,\\max}\\) | Small | 94.302 | 53.988 |
| LF\\(\\to\\)HF | \\(a_{IVR,B,\\max}\\) | Large | 140.887 | 50.381 |
| LF\\(\\to\\)HF | \\(a_{MV,B,\\Sigma}\\) | Small | 139.977 | 38.321 |
| LF\\(\\to\\)HF | \\(a_{MV,B,\\Sigma}\\) | Large | 144.278 | 47.940 |
| **MF** | **\\(a_{IVR,B,\\max}\\)** | **Small** | **15.621** | **18.109** |
| MF | \\(a_{IVR,B,\\max}\\) | Large | 52.450 | 58.641 |
| MF | \\(a_{MV,B,\\Sigma}\\) | Small | 20.145 | 21.276 |
| MF | \\(a_{MV,B,\\Sigma}\\) | Large | 46.713 | 47.814 |

The multi-fidelity model (\\(MF\\)) outperforms the single-fidelity models by a significant margin in all configurations. It could be that the small number of adjacent high-fidelity training points is insufficient to learn the correlation between the input and the high-fidelity output, but as the low-fidelity training points are reasonably uniformly distributed over the domain, they are sufficient to learn the correlation between the input and the low-fidelity output. Thus, the multi-fidelity approach could effectively bridge the learned relationships between input and low-fidelity output and between low- and high-fidelity output.

Models using the smaller sub-region perform better than their counterparts. While both models acquire a similar number of points, the models using the small sub-region acquire more points according to the acquisition function (three small sub-regions, compared to only one large sub-region). As the acquisition function can direct where the points are acquired, it is unsurprising that the smaller sub-region models perform better. However, the sub-region cannot be infinitely small as the RCM cannot be run over a single point. In a practical scenario, the minimum size of the sub-region should be dictated by a climate scientist or domain expert.

All models using the \\(a_{IVR,B,\\max}\\) acquisition function outperform their counterparts using \\(a_{MV,B,\\Sigma}\\). The difference in performance is especially marked in the single-fidelity case, though improvements are observed in the multi-fidelity case as well.

Fig. 2: Demonstration of our proposed batch-wise acquisition function, \\(a_{IVR,B,\\max}\\), on the Forrester function [12] for a batch \\(B=\\{b_{0},b_{1},b_{2},b_{3}\\}\\). The test function is shown above and the integrated variance reduction is shown below.

## VII Conclusion

We demonstrated that multi-fidelity surrogate modelling based on Gaussian processes can significantly reduce the cost of producing high-fidelity climate predictions. Our multi-fidelity model combines low-fidelity predictions from a GCM and a handful of high-fidelity sub-regions from an RCM to infer a high-quality prediction over an entire region of interest. Our model produces high-fidelity predictions with an average error of \\(15.62^{\\circ}\\)C\\({}^{2}\\) while only evaluating the RCM for 6% of the region of interest. We demonstrated that surrogate modelling can be a useful tool for climate scientists, especially when used to emulate expensive simulators. Additionally, we proposed a novel acquisition function, \\(a_{IVR,B,\\max}\\), for the task of batch acquisition, demonstrating marked improvements over a batch model variance baseline. We used this to determine where to evaluate the RCM.

## VIII Future Work

The work in this paper raises several interesting questions to pursue in future work. There is a vast amount of low-fidelity data available, yet we initialised the multi-fidelity model using only about 1% of it in order to train the model in a reasonable time. Is there an acquisition function that could be used to determine which _low_-fidelity points to acquire to most improve the high-fidelity prediction? While we demonstrate that the \\(a_{IVR,B,\\max}\\) batch acquisition function improves performance over \\(a_{MV,B,\\Sigma}\\), future work is necessary to analytically compare this heuristic to the exact integrated variance reduction of the entire batch. Additionally, would an acquisition function that adapts the size of the sub-region it acquires make better use of the budget?
Finally, could our approach be extended to produce high-fidelity temperature predictions based on historical RCM predictions and both historical and current GCM predictions? We believe that multi-fidelity surrogate modelling will play a key role in effectively modelling the Earth's changing climate as heterogeneous observational data becomes more readily available.

Fig. 3: A sample of the multi-fidelity model using the proposed batch-wise acquisition function, \\(a_{IVR,B,\\max}\\), and acquiring small sub-regions of high-fidelity data, showing (a) the low-fidelity training samples from the GCM, (b) the high-fidelity training sub-regions from the RCM, (c) the inferred high-fidelity temperature predictions for the entire region of interest, and (d) the target high-fidelity temperature predictions from the RCM.

## References

* [1] NOAA Geophysical Fluid Dynamics Laboratory, Climate modeling, Geophysical Fluid Dynamics Laboratory, 2009. URL: https://www.grdl.noaa.gov/climate-modeling.
* [2] E. Armstrong, P. O. Hopcroft, P. J. Valdes, Reassessing the value of regional climate modeling using paleoclimate simulations, Geophysical Research Letters 46 (2019) 12464-12475.
* [3] K.-L. Chang, S. Guillas, Computer model calibration with large non-stationary spatial outputs: application to the calibration of a climate model, Journal of the Royal Statistical Society: Series C (Applied Statistics) 68 (2019) 51-78.
* [4] M. Krasser, DodoTheDeveloper, Bayesian machine learning notebooks, GitHub, 2020. URL: https://github.com/krasserm/bayesian-machine-learning.
* [5] M. Krasser, Gaussian processes, Martin Krasser's Blog, 2018. URL: http://krasserm.github.io/2018/03/19/gaussian-processes/.
* [7] P. Rasch, S. Xie, P.-L. Ma, W. Lin, H. Wang, Q. Tang, S. Burrows, P. Caldwell, K. Zhang, R. Easter, et al., An overview of the atmospheric component of the energy exascale earth system model, Journal of Advances in Modeling Earth Systems 11 (2019) 2377-2411.
* [8] A. Grow, J. Hilton, Statistical emulation, Wiley StatsRef: Statistics Reference Online (2014) 1-8.
* [9] M. C. Kennedy, A. O'Hagan, Predicting the output from a complex computer code when fast approximations are available, Biometrika 87 (2000) 1-13.
* [10] P. Perdikaris, M. Raissi, A. Damianou, N. D. Lawrence, G. E. Karniadakis, Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences 473 (2017).
* [11] A. Damianou, N. D. Lawrence, Deep gaussian processes, in: Artificial intelligence and statistics, PMLR, 2013, pp. 207-215.
* [12] A. Forrester, A. Sobester, A. Keane, Engineering design via surrogate modelling: a practical guide, John Wiley & Sons, 2008.
* [13] B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, N. De Freitas, Taking the human out of the loop: A review of bayesian optimization, Proceedings of the IEEE 104 (2015) 148-175.
* [14] S. Hosking, Multifidelity climate modelling, GitHub, 2020. URL: https://github.com/scotthosking/mf_modelling.
* [15] D. Ginsbourger, R. Le Riche, L. Carraro, A Multi-points Criterion for Deterministic Parallel Global Optimization based on Gaussian Processes, Technical Report, Archive ouverte HAL, 2008.
* [16] G. De Ath, R. M. Everson, J. E. Fieldsend, A. A. M. Rahat, \\(\\epsilon\\)-shotgun: \\(\\epsilon\\)-greedy batch bayesian optimisation, in: Proceedings of the 2020 Genetic and Evolutionary Computation Conference, Association for Computing Machinery, 2020, pp. 787-795.
* [17] M. Jarvenpaa, A. Vehtari, P. Marttinen, Batch simulations and uncertainty quantification in gaussian process surrogate approximate bayesian computation, in: Proceedings of the 36th Conference on Uncertainty in Artificial Intelligence (UAI), volume 124, PMLR, 2020, pp. 779-788.
* [18] A. Paleyes, M. Pullin, M. Mahsereci, N. Lawrence, J. Gonzalez, Emulation of physical processes with emukit, in: Second Workshop on Machine Learning and the Physical Sciences, NeurIPS, 2019.
* [19] SheffieldML, GPy: A gaussian process framework in python, GitHub, 2012. URL: http://github.com/SheffieldML/GPy.
Accurately modelling the Earth's climate has widespread applications ranging from forecasting local weather to understanding global climate change. Low-fidelity simulations of climate phenomena are readily available, but high-fidelity simulations are expensive to obtain. We therefore investigate the potential of Gaussian process-based multi-fidelity surrogate modelling as a way to produce high-fidelity climate predictions at low cost. Specifically, our model combines the predictions of a low-fidelity Global Climate Model (GCM) and those of a high-fidelity Regional Climate Model (RCM) to produce high-fidelity temperature predictions for a mountainous region on the coastline of Peru. We are able to produce high-fidelity temperature predictions at significantly lower computational cost compared to the high-fidelity model alone: our predictions have an average error of \\(15.62^{\\circ}\\)C\\({}^{2}\\) yet our approach only evaluates the high-fidelity model on 6% of the region of interest. Gaussian processes, Multi-fidelity modelling, Climate modelling, Earth observation
# Knowledge-aware Text-Image Retrieval for Remote Sensing Images

Li Mi, _Student Member, IEEE_, Xianjie Dai, Javiera Castillo-Navarro, Devis Tuia, _Fellow, IEEE_

This work was supported by EPFL Science Seed Funds under Grant 21692. L. Mi, X. Dai, J. Castillo-Navarro and D. Tuia are with Ecole Polytechnique Federale de Lausanne (EPFL). Corresponding author: Li Mi ([email protected]).

## I Introduction

Recent advances in satellite data acquisition and storage have led to a rapid increase in the size and complexity of remote sensing image archives. To explore these archives, image retrieval has received increasing attention, with multiple systems designed to conduct searches based on visual similarity to the query [1, 2]. However, retrieving images using example images limits the versatility of the retrieval system, since with the query image only, one cannot specify which elements are essential or what the retrieval objective is. As a solution, text-image retrieval [3, 4, 5] has been introduced to make the retrieval targets explicit in a semantic way. Text-image retrieval aims at finding an image based on a text or, in reverse, retrieving a text pertaining to an image. When using text, the prospective retrieval system gains in usability, but at the same time faces the problem of information gaps between texts and images [6]. Previous efforts attempted to fill the cross-modal information gap by establishing a representative text-image joint embedding space. For example, recent works have explored image feature representations [4, 5, 7, 8], fusion models [9, 10] and contrastive objectives [11, 12] for better aligning the cross-modal features. However, the information asymmetry caused by cross-modal information is rarely addressed, _i.e._, the fact that short texts do not allow much freedom to represent diverse image content. When dealing with very high-resolution remote sensing images, the image content can be very diverse, and hence difficult to summarize comprehensively with a short caption only. On the one hand, human captions can only describe one or a few aspects of the image, focusing on the most dominant information. For example, one image could receive the following caption text: _There is a lake_. Nevertheless, there might be trees and mountains around the lake which are ignored by humans or caption generators. On the other hand, different people will describe the image from subjective perspectives, resulting in a variety of text information for a single image, which may confuse the matching model. For example, the captions _There is a lake_ and _There are many boats_ could refer to the same image. Therefore, strategies to handle lacunary captions, nuances and synonyms are needed for the task, and a balance between objectivity and completeness must be achieved. Commonsense knowledge sources have been recognized as effective priors in many vision-and-language research works [13, 14] to reveal commonsense and alleviate ambiguities. For example, by adding commonsense knowledge from external sources, the description _This is a lake_ will probably be expanded by introducing related concepts and relations such as _Lake has water_ and _Boats on the lake_, which expand the text content to match the image. In addition, external knowledge also provides an opportunity to link descriptions of different concepts. For example, the concepts _lake_ and _boat_ could be bridged by _Boat on the lake_ from external knowledge sources.

Fig. 1: Intuition behind the proposed KTIR system: in a standard text-image retrieval approach (a), text and images are matched directly, while in KTIR (b), commonsense knowledge is added from external sources (a knowledge base) to make the retrieval more varied, robust to ambiguities and consistent with general knowledge.
Fig. 1: Intuition behind the proposed KTIR system: in a standard text-image retrieval approach (a), text and images are matched directly, while in KTIR (b), commonsense knowledge is added from external sources (a knowledge base) to make the retrieval more varied, robust to ambiguities and consistent with general knowledge.

In order to fill the cross-modal information asymmetry and reduce the impact of language ambiguity, we propose a Knowledge-aware Text-Image Retrieval (KTIR) method to introduce external knowledge for remote sensing images into text-image retrieval tasks (Fig. 1). More specifically, using the objects mentioned in a sentence as starting points, KTIR mines the expanded nodes and edges in an external knowledge graph and embeds them as features to enrich those extracted from the text content alone. In a way, KTIR adds extra commonsense-based links which can enrich the semantics, expand the scope of the text query, alleviate potential language ambiguity and also facilitate the adaptation of general vision-language pre-trained models to the remote sensing domain.

To demonstrate the effectiveness of the proposed KTIR method, we design experiments on three commonly used remote sensing text-image retrieval benchmarks: the UCM-Caption dataset [15], the RSICD dataset [16], and the RSITMD dataset [7]. Results show that KTIR outperforms the comparison methods. Our experiments also explore the differences between knowledge sources and the relationship between knowledge and image content.

The remainder of the paper is organised as follows: Section II details the related work on remote sensing text-image retrieval, external knowledge sources and knowledge-aware vision-language research. Section III presents the proposed KTIR method. Section IV and Section V present the experimental settings and the experiment results, respectively. Section VI concludes the paper.

## II Related Work

### _Text-Image Retrieval for Remote Sensing Images_

Due to the increasing quantity of multi-modal remote sensing data, vision-language research, such as image captioning [16], visual question answering [17], text-guided visual grounding and cross-modal retrieval [4], has attracted increasing attention [18] in remote sensing. Among these tasks, text-image retrieval is regarded as one of the fundamental vision-language tasks for cross-modal alignment.

Recent advances in remote sensing text-image retrieval have mainly focused on building a representative joint embedding space, especially via multi-level or multi-scale image representations [4, 7, 19, 20]. For example, Yuan _et al._ [4] designed a dynamic fusion module of global and local information to generate a multi-level visual representation. Yuan _et al._ [7] designed an asymmetric multimodal feature matching network which can adapt to multi-scale image inputs. Some works mentioned the information asymmetry between image and text and tried to address it by extracting more representative features [5, 8, 21] or designing a powerful fusion method [9, 11, 22, 23]. For example, Yu _et al._ [21] proposed to use a Graph Neural Network (GNN) to better represent the object relationships in the image content. Cheng _et al._ [11] proposed an attention-based module to fuse the features from different modalities and used a triplet loss to learn the matching. Besides the aforementioned methods that are proposed especially for text-image retrieval, vision-language models in remote sensing [24, 25, 26, 27] also regard text-image retrieval as a fundamental training and evaluation task.
To train those models, a large amount of domain-specific annotations is crucial for the adaptation of pre-trained models to remote sensing images. For example, RemoteCLIP [24] is trained on 17 remote sensing datasets including detection, segmentation and text-image retrieval. GeoRSCLIP [25] collected an additional dataset of 5 million remote sensing images for pretraining. Despite the tremendous progress made, the insufficiency and ambiguity of textual information are rarely addressed. Departing from previous efforts, which based the retrieval on the image and caption only, we propose to enrich the latter with external knowledge sources that extend the text content and alleviate ambiguities. Moreover, external knowledge [28, 29] also serves as a bridge to adapt pre-trained general vision-language models [30, 31] to the remote sensing domain without supplementary annotations.

### _External Knowledge Sources_

Sources of external knowledge can be wide and diverse, covering different types of knowledge. In vision-language research, commonsense knowledge [28] and domain-specific knowledge [29] are often used.

Commonsense knowledge bases (_e.g._, ConceptNet [28] and ATOMIC [32]) include the basic concepts and facts which are usually shared by most people and implicitly assumed in communications, for example, everyday events and their effects (_e.g._, _eat something if feeling hungry_), facts about beliefs and desires (_e.g._, _keep exercising to get in good health_), and properties of objects (_e.g._, _fish live in water_). Most commonsense knowledge sources use triplets (_i.e._, \\(<\\)head, relation, tail\\(>\\)) to store and represent knowledge. For example, the triplet \\(<\\)_cooking_, _Requires_, _food_\\(>\\) means that '_the prerequisite of cooking is food_'. In this paper, we consider ConceptNet [28] as a commonsense knowledge source. ConceptNet is a multilingual knowledge graph that aligns its knowledge resources to 36 types of relations, including symmetric relations (_e.g._, _RelatedTo_ and _SimilarTo_) and asymmetric relations (_e.g._, _AtLocation_, _CapableOf_, _Causes_, _Desires_, _HasA_, _HasProperty_, _UsedFor_, _etc._).

Different from commonsense knowledge, which is generic to many domains and daily life, domain-specific knowledge is adapted to a particular domain. It is obtained by filtering out irrelevant objects and relations from general knowledge bases or by collecting it directly from domain-specific corpora or annotations [33, 29]. In remote sensing research, Li _et al._ [29] constructed a remote sensing knowledge graph (RSKG) to support zero-shot remote sensing image scene classification. RSKG has 117 entities, 26 relations and 191 triplets, manually selected for remote sensing images. We consider RSKG as the domain-specific knowledge source in this paper. Detailed information on the knowledge sources used in the paper can be found in Section III-B.

### _Knowledge-aware Vision-Language Research_

Explicitly incorporating knowledge into language models has been an emerging trend in recent natural language processing (NLP) research [36, 37]. As in pure NLP tasks, recent research has shown that many vision tasks, such as visual question answering, image captioning, and vision-language navigation, can be enhanced by adding knowledge [13, 38, 39, 40, 41, 42, 43]. In those tasks, external knowledge is often regarded as a significant source of information that is difficult to obtain directly from vision.
For example, to answer knowledge-aware visual questions [38], the model is supposed to understand the visual content, as well as to query knowledge bases to obtain concepts and relations that are not visible in the image itself.

In this work, external knowledge is integrated into text-image retrieval for remote sensing images to 1) narrow the cross-modality information gap by the explicit integration of external knowledge; and 2) adapt general vision-language models to the remote sensing domain by using domain-specific commonsense knowledge. The proposed KTIR is an extension of our preliminary work KCR [44]. The differences between KCR and KTIR are as follows:

* _The knowledge sources_: KCR only supports using RSKG as a knowledge source, while KTIR also includes ConceptNet (see Section V-A).
* _The base models_: The text encoder in KCR is frozen, while KTIR is based on the BLIP model [30] and all the modules are trainable.
* _Knowledge embedding methods_: KCR combines the knowledge triplets and the captions directly, while KTIR uses a cross-attention mechanism to fuse the two text sources (see Section V-D).

## III Knowledge-aware Text-Image Retrieval Method

The proposed text-image retrieval system comprises three main components: an image encoder, a knowledge-aware text encoder and a similarity measurement module (Fig. 2 (a)). The image encoder is designed to extract image features with a Vision Transformer (ViT) [34]. The text encoder embeds a sentence and its related external knowledge into a joint feature space representing the text inputs. Finally, the image and text features are both used within the similarity measurement module to compute the similarity score between text queries and candidate images, which are then ranked according to their relevance. The model can also be applied in reverse, where the best captions to summarize an image are retrieved. Note that we build the KTIR method upon the BLIP [30] model, but different backbones could be used instead (see Section V-F).

### _Image encoder_

The image encoder is a ViT, where an image \\(\\hat{\\mathbf{i}}\\) is divided into several patches, which are processed by a transformer encoder. The extracted image features are then reprojected to a space of dimension equivalent to the output of the text encoder described in the next section. A fully-connected (FC) layer is used for this reprojection. The image embedding process can be denoted as:

\\[\\mathbf{f}_{img}=\\mathrm{FC}_{img}\\left(\\mathrm{ViT}(\\hat{\\mathbf{i}})\\right). \\tag{1}\\]

### _Knowledge-aware text encoder_

The knowledge-aware text encoder embeds the textual description for text-image retrieval. It uses as input a number of captions describing the scene as well as external knowledge to strengthen the text representation. In this section, we first explain the knowledge extraction process (Fig. 2 (b)) and then describe the fusion of captions and external knowledge in the text encoder.

Fig. 2: (a) The pipeline of KTIR. The proposed text-image retrieval system comprises three main components: an image encoder, a knowledge-aware text encoder and a similarity measurement module. The image features (\\(\\mathbf{f}_{img}\\)) are obtained by a ViT [34]. A BERT [35] is used in text-only mode (\\(\\mathrm{BERT}_{text}\\), green) and multimodal mode (\\(\\mathrm{BERT}_{multi}\\), yellow) to encode the knowledge-aware text feature (\\(\\mathbf{f}_{txt}\\)) and the multimodal feature (\\(\\mathbf{f}_{multi}\\)), respectively.
Then the text-image contrastive loss (\\(\\mathcal{L}_{\\mathrm{con}}\\)) and the text-image matching loss (\\(\\mathcal{L}_{\\mathrm{mat}}\\)) are used as the training objectives for cross-modality retrieval. (b) The knowledge extraction process. In the knowledge-aware text encoder, the knowledge extraction process includes keyword extraction, knowledge retrieval and knowledge sentence construction. After knowledge extraction, the knowledge sentences \\(\\mathbf{s_{k}}\\) are collected for each caption \\(\\mathbf{s}\\). Numbers in the feature vectors denote their dimension.

#### III-B1 **Knowledge extraction**

The key idea is to retrieve and represent relevant knowledge triplets from external knowledge sources based on the information provided by the caption sentence. The knowledge extraction is repeated for each caption and consists of the following three steps:

_Keyword extraction._ For a caption \\(\\mathbf{s}\\) with \\(n\\) words, \\(\\mathbf{s}=\\{w_{1},w_{2},\\ldots,w_{n}\\}\\) \\((n\\geq 1)\\), a tokenizer is used to separate every word and determine its part-of-speech (_e.g._, noun, verb, adjective, adverb, _etc._). Based on the part-of-speech tags, all the nouns are appended to a word list \\(\\mathbf{s_{n}}\\). Note that we turn all plurals into their singular form.

_Knowledge triplet retrieval._ We use two different knowledge sources, RSKG [29] and ConceptNet [28]. The combination of the two sources is also considered.

* RSKG [29] is a hand-crafted knowledge graph for the remote sensing domain. The knowledge graph is designed for remote sensing scenes, so that the objects and relations conform to remote sensing vocabulary. Based on the keyword list \\(\\mathbf{s_{n}}\\), we retrieve all the related knowledge triplets in the graph \\(\\mathbf{t_{rs}}=\\{t_{rs}^{1},t_{rs}^{2},\\ldots,t_{rs}^{m_{rs}}\\}\\), where \\(m_{rs}\\) is the number of triplets from the RSKG knowledge graph and each \\(t_{rs}^{j}\\) is a triplet \\(<\\)head, relation, tail\\(>\\) whose head or tail is in the keyword list \\(\\mathbf{s_{n}}\\). More specifically, the nouns in the word list are regarded as the initial nodes. Starting from those nodes, all the one-step neighbours with the connected edges in RSKG are included. We keep all the relations from the graph.
* ConceptNet [28] is a multilingual commonsense knowledge graph. Compared to RSKG, ConceptNet is much larger and more general in terms of objects and relationships. Given a keyword, the official ConceptNet API1 returns the related triplets that involve the query keyword. Similar to RSKG, a triplet list \\(\\mathbf{t_{ce}}=\\{t_{ce}^{1},t_{ce}^{2},\\ldots,t_{ce}^{m_{ce}}\\}\\) is constructed based on the word list \\(\\mathbf{s_{n}}\\), where \\(m_{ce}\\) is the number of triplets from ConceptNet. Note that during the retrieval, we filter out triplets that contain non-English words. For the relations, we choose 15 relations from the total set: _UsedFor_, _ReceivesAction_, _HasA_, _Causes_, _HasProperty_, _CreatedBy_, _DefinedAs_, _AtLocation_, _HasSubEvent_, _MadeUpOf_, _HasPrerequisite_, _Desires_, _NotDesires_, _IsA_ and _CapableOf_. Footnote 1: [https://conceptnet.io/](https://conceptnet.io/)
* Combining RSKG and ConceptNet. For the triplet list obtained from ConceptNet, \\(\\mathbf{t_{ce}}\\), we filter out items where neither the head nor the tail is among the RSKG objects, to limit the external knowledge to the remote sensing domain.
Together with the RSKG triplet list \\(\\mathbf{t_{rs}}\\), a new triplet list \\(\\mathbf{t_{co}}=\\{t_{co}^{1},t_{co}^{2},\\ldots,t_{co}^{m_{co}}\\}\\) is constructed by combining the triplets from the two knowledge sources, where \\(m_{co}\\leq m_{rs}+m_{ce}\\).

Detailed statistics of the different external knowledge sources are shown in Table I. In general, \\(\\mathbf{t_{rs}}\\) has fewer objects and relation types compared to \\(\\mathbf{t_{ce}}\\) and \\(\\mathbf{t_{co}}\\). By combining the two knowledge sources (\\(\\mathbf{t_{co}}\\)), a reasonable amount of additional concepts and diverse types of relations are considered. Note that mining all the related objects and relations might be redundant, so in the experiments we randomly select \\(m\\) triplets from all the available ones for a caption \\(\\mathbf{s}\\) (detailed analyses of the number of triplets and of the selection strategies can be found in Section V-D and Section V-E, respectively). In the end, the selected triplet list is denoted as \\(\\mathbf{t}\\).

_Knowledge sentence construction._ Before encoding the knowledge triplets, we convert the selected ones (composing the list \\(\\mathbf{t}\\)) into short knowledge sentences. More specifically, for a triplet (\\(<\\)head, relation, tail\\(>\\)), we keep the head subject and tail object as they are, but re-formulate the relation to construct a meaningful sentence according to a transformation template (_e.g._, _UsedFor_ becomes "_is used for_"). The specific templates are listed in Table II. Following these rules, for example, the triplet \\(<\\)boat, AtLocation, water\\(>\\) can be re-written as _boat is at location of water_. After this re-formulation, a list of knowledge sentences is obtained, one per knowledge triplet in list \\(\\mathbf{t}\\). We then combine the \\(m\\) knowledge sentences into the final knowledge sentence for caption \\(\\mathbf{s}\\), which we refer to as \\(\\mathbf{s_{k}}\\).

#### III-B2 **Knowledge-aware text encoding**

The text-only mode of the text encoder (\\(\\mathrm{BERT}_{text}\\)) encodes the caption \\(\\mathbf{s}\\) and the knowledge sentence \\(\\mathbf{s_{k}}\\) into the caption representation (\\(\\mathbf{f}_{cap}\\)) and the knowledge representation (\\(\\mathbf{f}_{kwl}\\)), respectively:

\\[\\begin{split}\\mathbf{f}_{cap}&=\\mathrm{BERT}_{text}\\left(\\mathbf{s}\\right),\\\\ \\mathbf{f}_{kwl}&=\\mathrm{BERT}_{text}\\left(\\mathbf{s_{k}}\\right).\\end{split} \\tag{2}\\]

The knowledge-aware text feature is generated by fusing \\(\\mathbf{f}_{cap}\\) and \\(\\mathbf{f}_{kwl}\\) through a single cross-attention layer. The cross-attention mechanism can be represented as:

\\[\\mathrm{CrossAtt}(\\mathbf{f}_{1},\\mathbf{f}_{2})=\\mathrm{softmax}\\left(\\frac{\\mathbf{f}_{1}W_{1}(\\mathbf{f}_{2}W_{2})^{T}}{\\sqrt{d_{k}}}\\right)\\mathbf{f}_{2}W_{2}, \\tag{3}\\]

where \\(d_{k}\\) is the dimension of the feature space, and \\(W_{1}\\in\\mathbb{R}^{d_{k}\\times d_{k}}\\) and \\(W_{2}\\in\\mathbb{R}^{d_{k}\\times d_{k}}\\) are the transformation matrices for input features \\(\\mathbf{f}_{1}\\in\\mathbb{R}^{d_{k}}\\) and \\(\\mathbf{f}_{2}\\in\\mathbb{R}^{d_{k}}\\), respectively. Both orderings of the two sequences (caption representations followed by knowledge sentence representations, and the reverse) are used as the inputs of the cross-attention layer; after concatenation, the dimension of the concatenated sequences is reduced to \\(d_{k}\\). A final FC layer is then applied to obtain the overall representation of a text:

\\[\\mathbf{f}_{txt}=\\mathrm{FC}_{text}(\\mathrm{CrossAtt}([\\mathbf{f}_{cap},\\mathbf{f}_{kwl}],[\\mathbf{f}_{kwl},\\mathbf{f}_{cap}])), \\tag{4}\\]

where \\([\\cdot,\\cdot]\\) denotes the concatenation operation and the FC layer reduces the feature dimension.
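For concreteness, the cross-attention fusion of Eqs. 3-4 can be sketched in PyTorch as follows. This is a minimal sketch and not the official KTIR implementation: the exact tensor layout of the concatenated sequences and the pooling before the final FC layer are not fully specified in the text, so both are assumptions here.

```python
import torch
import torch.nn as nn

class KnowledgeFusion(nn.Module):
    """Minimal sketch of the fusion of caption and knowledge features (Eqs. 3-4)."""

    def __init__(self, d_k: int = 768, d_out: int = 256):
        super().__init__()
        self.W1 = nn.Linear(d_k, d_k, bias=False)  # query projection (W_1 in Eq. 3)
        self.W2 = nn.Linear(d_k, d_k, bias=False)  # key/value projection (W_2 in Eq. 3)
        self.fc = nn.Linear(d_k, d_out)            # final FC_text reducing the dimension
        self.scale = d_k ** 0.5

    def cross_att(self, f1, f2):
        # softmax(f1 W1 (f2 W2)^T / sqrt(d_k)) f2 W2, as in Eq. 3.
        q, kv = self.W1(f1), self.W2(f2)
        att = torch.softmax(q @ kv.transpose(-2, -1) / self.scale, dim=-1)
        return att @ kv

    def forward(self, f_cap, f_kwl):
        # Both orderings of the two sequences, concatenated along the token axis (Eq. 4).
        fused = self.cross_att(torch.cat([f_cap, f_kwl], dim=1),
                               torch.cat([f_kwl, f_cap], dim=1))
        # Mean-pooling over tokens before the projection is an assumption of this sketch.
        return self.fc(fused.mean(dim=1))

# Hypothetical usage: f_cap and f_kwl of shape (batch, tokens, 768) from BERT_text.
```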
_Multimodal feature._ As stated in the previous subsection, the BLIP text encoder can also be used in multimodal mode (\\(\\mathrm{BERT}_{multi}\\)), where the text inputs (\\(\\mathbf{s},\\mathbf{s_{k}}\\)) and the image embedding before the \\(\\mathrm{FC}_{img}\\) layer, denoted as \\(\\mathbf{f}_{img}^{\\prime}\\), are used as the two inputs of the text encoder. The multimodal mode builds a joint feature (\\(\\mathbf{f}_{multi}\\)) for the multimodal inputs:

\\[\\mathbf{f}_{multi}=\\mathrm{BERT}_{multi}\\left((\\mathbf{s},\\mathbf{s_{k}}),\\mathbf{f}_{img}^{\\prime}\\right). \\tag{5}\\]

The multimodal feature is then used in an FC layer (\\(\\mathrm{FC}_{multi}\\)) acting as a classifier to compute the probability of alignment between the image and text pair. This is implemented as a binary classification task:

\\[\\hat{y}=\\mathrm{FC}_{multi}(\\mathbf{f}_{multi}), \\tag{6}\\]

where \\(\\hat{y}\\) denotes the prediction for the text-image pair, indicating whether they are matched or not.

### _Similarity Measurement_

After encoding, the text and image are represented as an image feature (\\(\\mathbf{f}_{img}\\)), a knowledge-aware text feature (\\(\\mathbf{f}_{txt}\\)), a multimodal joint embedding (\\(\\mathbf{f}_{multi}\\)) and a predicted probability of matching (\\(\\hat{y}\\)). In this section, we describe the objectives based on those features to perform cross-modal retrieval.

#### III-C1 **Knowledge-aware text-image contrastive loss**

The contrastive loss constrains the similarity score of the matched image-text pairs to be higher than the similarity score of the unmatched ones. It creates a joint feature embedding space for both image and text by aligning the feature embeddings of the paired images and texts. We construct two contrastive losses, for image-text matching (\\(\\mathcal{L}_{\\mathrm{img2txt}}\\)) and text-image matching (\\(\\mathcal{L}_{\\mathrm{txt2img}}\\)), respectively:

\\[\\begin{split}\\mathcal{L}_{\\mathrm{img2txt}}&=-\\log\\frac{\\exp\\left(\\mathbf{f}_{img}\\cdot\\mathbf{f}_{txt}^{+}/\\tau\\right)}{\\sum_{i=1}^{N}\\exp\\left(\\mathbf{f}_{img}\\cdot\\mathbf{f}_{txt}^{i}/\\tau\\right)},\\\\ \\mathcal{L}_{\\mathrm{txt2img}}&=-\\log\\frac{\\exp\\left(\\mathbf{f}_{txt}\\cdot\\mathbf{f}_{img}^{+}/\\tau\\right)}{\\sum_{i=1}^{N}\\exp\\left(\\mathbf{f}_{txt}\\cdot\\mathbf{f}_{img}^{i}/\\tau\\right)},\\end{split} \\tag{7}\\]

where \\(\\mathbf{f}_{txt}^{+}\\) and \\(\\mathbf{f}_{img}^{+}\\) represent the positive examples, \\(N\\) is the number of pairs in a batch and \\(\\tau\\) is a temperature parameter. Finally, the contrastive loss is:

\\[\\mathcal{L}_{\\mathrm{con}}=\\frac{1}{2}(\\mathcal{L}_{\\mathrm{img2txt}}+\\mathcal{L}_{\\mathrm{txt2img}}). \\tag{8}\\]

#### III-C2 **Knowledge-aware text-image matching loss**

Unlike the contrastive loss, which aims to align the unimodal features, the matching loss learns a fine-grained multimodal alignment through a binary classification task [30, 45]. The model uses a linear layer to predict whether an image-text pair is positive (matched) or negative (unmatched) given its multimodal feature. The matching loss is the binary cross-entropy loss:

\\[\\mathcal{L}_{\\mathrm{mat}}=-\\frac{1}{N}\\sum_{i=1}^{N}y_{i}\\cdot\\log\\left(\\hat{y}_{i}\\right)+(1-y_{i})\\cdot\\log\\left(1-\\hat{y}_{i}\\right), \\tag{9}\\]

where \\(y_{i}\\) is the binary label indicating whether the \\(i\\)-th image-text pair is a match and \\(\\hat{y}_{i}\\) is the corresponding predicted probability from the binary classifier (Eq. 6).
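As a reference, the two objectives can be written compactly as below. This is a simplified sketch: it omits the momentum-encoder soft labels and the hard-negative mining described in the next subsection, and assumes L2-normalized features paired by batch index.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(f_img, f_txt, tau=0.07):
    """Symmetric contrastive loss of Eqs. 7-8 (simplified, no momentum encoder)."""
    # f_img, f_txt: (N, d) L2-normalized features; pair i corresponds to index i.
    logits = f_img @ f_txt.t() / tau                  # (N, N) similarity matrix
    targets = torch.arange(f_img.size(0), device=f_img.device)
    l_img2txt = F.cross_entropy(logits, targets)      # first line of Eq. 7, batch-averaged
    l_txt2img = F.cross_entropy(logits.t(), targets)  # second line of Eq. 7
    return 0.5 * (l_img2txt + l_txt2img)              # Eq. 8

def matching_loss(y_hat, y):
    """Binary cross-entropy matching loss of Eq. 9."""
    # y_hat: (N,) predicted matching probabilities; y: (N,) binary labels.
    return F.binary_cross_entropy(y_hat, y.float())
```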
The final training objective \\(\\mathcal{L}\\) is the combination of the knowledge-aware text-image contrastive loss and the knowledge-aware text-image matching loss with weights \\(w_{1}\\) and \\(w_{2}\\):

\\[\\mathcal{L}=w_{1}\\mathcal{L}_{\\mathrm{con}}+w_{2}\\mathcal{L}_{\\mathrm{mat}}. \\tag{10}\\]

In practice, we follow the loss calculation described in previous work [30, 45]: a momentum encoder is introduced to create soft labels as training targets, to account for the potential positives among the negative pairs of the knowledge-aware text-image contrastive loss (\\(\\mathcal{L}_{\\mathrm{con}}\\)). We use a hard negative sampling strategy [45] to find the negative samples showing the highest contrastive similarity in the mini-batch, and use those samples to compute the knowledge-aware text-image matching loss (\\(\\mathcal{L}_{\\mathrm{mat}}\\)).

### _Inference_

At inference, text and image features are obtained by the encoders. Two scores are used to decide the final retrieval result. First, the text-image similarity score (\\(\\mathrm{S}_{\\mathrm{sim}}\\)) is calculated as the pairwise cosine similarity between image features and text features; in this case, the text encoder uses the text-only mode (\\(\\mathrm{BERT}_{text}\\)). Second, a text-image matching score (\\(\\mathrm{S}_{\\mathrm{mat}}\\)) is obtained as the output probability of the binary classifier using the multimodal mode of the text encoder (\\(\\mathrm{BERT}_{multi}\\)). The final score \\(\\mathrm{S}\\) is the sum of the two scores:

\\[\\mathrm{S}=\\mathrm{S}_{\\mathrm{sim}}+\\mathrm{S}_{\\mathrm{mat}}. \\tag{11}\\]

For text-image retrieval, both the text and image features are used to compute the final score between a text query and candidate images, which are then ranked according to their relevance. When applied in reverse, the best captions to summarize an image are retrieved based on the similarity scores between the query image and the candidate texts.

## IV Experimental Setup

### _Datasets_

We perform experiments on three RS text-image datasets: UCM-Caption [15], RSICD [16], and RSITMD [7]. Examples from the three datasets are shown in Fig. 3.

* **UCM-Captions** [15] is based on the UC Merced Land Use dataset [46]. It contains remote sensing images categorized into 21 scene categories (_e.g._, buildings, intersection, parking lot, runway, agricultural, forest, _etc._), with 100 samples for each class. Each image has 256 \\(\\times\\) 256 pixels and is annotated with 5 sentences.
* **RSICD** [16] is a large remote sensing text-image dataset, and a commonly used benchmark for remote sensing text-image retrieval and remote sensing image captioning. It contains 10921 images of size 224 \\(\\times\\) 224 pixels at various spatial resolutions, also with 5 sentences per image.
* **RSITMD** [7] is a fine-grained remote sensing text-image dataset. It contains 4743 images from 24 categories with 23715 captions and 21403 keywords. Compared to the RSICD dataset, the RSITMD dataset was designed to have more fine-grained and diverse text descriptions.

We follow the train-test splits of previous work [25, 7, 4].

### _Metrics_

To evaluate the model performance, we use the standard evaluation metrics of retrieval tasks and measure rank-based performance with R@\\(k\\) and mR [3, 7, 47]. For a given value of \\(k\\), R@\\(k\\) is the fraction of queries for which the most relevant item is ranked among the top-\\(k\\) retrievals.
mR is the average of all R@\\(k\\) values over both text-image retrieval and image-text retrieval. mR\\({}_{t2i}\\) and mR\\({}_{i2t}\\) denote the average of all R@\\(k\\) values in text-image retrieval and image-text retrieval, respectively. In our experiments, we report results for \\(k=[1,5,10]\\), as in previous works.

### _Implementation details_

Following BLIP, the image encoder is a ViT-B/16, a ViT architecture with 12 attention heads, 12 hidden layers, and images divided into \\(16\\times 16\\) patches. The text encoder is BERT\\({}_{\\text{base}}\\), a transformer encoder with 12 attention heads and 12 hidden layers. The output feature dimension of both ViT and BERT\\({}_{\\text{base}}\\) is 768. The dimensions of the final image feature (\\(\\mathbf{f}_{img}\\)), of the knowledge-aware text feature (\\(\\mathbf{f}_{txt}\\)) and of the multimodal joint embedding (\\(\\mathbf{f}_{multi}\\)) are 256, 256 and 768, respectively. In the cross-attention mechanism, \\(d_{k}\\) is set to 768 as well, therefore matching the output of the text encoder.

We initialize the encoder-decoder architecture with the corresponding pre-trained modules from BLIP [30]. Since all BLIP models are publicly available, we choose the "BLIP w/ ViT-B and CapFilt-L" checkpoint for initialization. This model was pre-trained on 129M noisy image-text pairs using CapFilt-L, a captioning and filtering method2. In Section V-F, we also integrate knowledge into CLIP [31]. For the CLIP-based model, we use ViT-B/32.

Footnote 2: [https://github.com/salesforce/BLIP](https://github.com/salesforce/BLIP)

All the experiments were run on a single NVIDIA 4090. We train all the KTIR models for 10 epochs on each dataset, with a batch size of 16. We use AdamW as the optimizer and a cosine learning rate scheduler to adjust the learning rate during training. The starting learning rate is 5e-6, with a weight decay of 0.05. The temperature parameter (\\(\\tau\\)) is initialized to 0.07 and is learnable during the training process. The CLIP-based models in Section V-F are trained for 30 epochs with a batch size of 64. We set the maximum number of triplets (\\(m\\)) to 5 for all the experiments, unless stated otherwise. The loss weights \\(w_{1}\\) and \\(w_{2}\\) are set to 1 experimentally; their effect is studied in Section V-D.

### _Baseline Methods_

We compare KTIR with the following state-of-the-art methods in text-image retrieval. Based on the datasets the models are trained on, they fall into two categories: supervised training methods and pretraining-finetuning methods.

#### IV-D1 Supervised training methods

This category includes methods that only use the corresponding dataset to train the model, meaning that the training samples are limited to the dataset.

* **AMFMN** [7] employs multi-scale self-attention to extract the visual features and guide the text representation.
* **CMFM-Net** [21] uses a GNN to model the objects and relations in the text.
* **GaLR** [4] utilizes an attention-based multi-level module to fuse global and local features extracted by a CNN and a GNN, respectively. In addition, GaLR involves a post-processing stage based on a rerank algorithm.
* **KCR** [44] proposes to use external knowledge triplets to expand the text content. It is the early version of KTIR. Note that, differently from the other methods, the text encoder is frozen in KCR.
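All the baselines above (and the pretraining-finetuning methods below) are scored with the rank-based metrics of Section IV-B. As a concrete reference, here is a minimal sketch of R@\\(k\\) and mR; it assumes a one-to-one correspondence between queries and ground-truth items, whereas the actual benchmarks have five captions per image.

```python
import numpy as np

def recall_at_k(sim, k):
    """R@k: fraction of queries whose ground-truth item is ranked in the top-k.

    sim: (n_queries, n_candidates) similarity matrix; the ground truth of
    query i is assumed to be candidate i (a simplification)."""
    ranks = np.argsort(-sim, axis=1)              # candidates sorted by decreasing score
    gt = np.arange(sim.shape[0])[:, None]
    return float((ranks[:, :k] == gt).any(axis=1).mean())

def mean_recall(sim_t2i, sim_i2t, ks=(1, 5, 10)):
    """mR: average of all R@k values over both retrieval directions."""
    scores = [recall_at_k(sim_t2i, k) for k in ks] + \
             [recall_at_k(sim_i2t, k) for k in ks]
    return float(np.mean(scores))
```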
#### IV-D2 Pretraining-finetuning methods

With the development of vision-language pretraining (VLP), some methods are trained on other, larger datasets and can be fine-tuned for remote sensing text-image retrieval.

* **MLT** [5] is a multilingual retrieval method based on the CLIP [31] structure and pre-trained weights. MLT is fine-tuned on each text-image retrieval dataset separately. We report the results of the single-language mode.
* **RemoteCLIP** [24] is a fine-tuned CLIP model trained on 17 remote sensing datasets, including UCM-Caption, RSICD and RSITMD, which showed competitive performance on several remote sensing tasks. We report the best results of RemoteCLIP on each dataset.
* **GeoRSCLIP** [25] is another fine-tuned version of the CLIP model. It was pre-trained on the RS5M dataset with 5 million remote sensing images. We report the results of the best-performing model, which is fine-tuned on the combination of the RSICD and RSITMD datasets after being trained on the RS5M dataset.
* **BLIP (KTIR-base)** [30] is a VLP framework which transfers to both vision-language understanding and generation tasks. Here we fine-tune BLIP separately on the three datasets to ensure a fair comparison.

Fig. 3: Examples of images and text sentences from the three datasets.

## V Results

### _The Effectiveness of External Knowledge_

In our first experiment, we investigate the effectiveness of the two different knowledge sources (RSKG and ConceptNet) and of their combination. To further explore the effectiveness of different knowledge sources, we also augment KTIR using a large language model: we prompt GPT-4 [48] to generate relevant sentences to expand the scope of the given text, and then use these generated sentences as knowledge sentences for KTIR. To ensure a fair comparison, the number of generated sentences is kept the same as the number of knowledge sentences constructed from the other knowledge graphs, _i.e._, \\(m\\).

In Table III, we compare KTIR with different knowledge sources against the fine-tuned BLIP model on the UCM-Caption dataset. The results suggest that external concepts can supply text information to fill the information gap between text descriptions and image content. For example, KTIR with GPT-4, RSKG and ConceptNet improves the mR of the model without knowledge by 0.17%, 0.49% and 0.60%, respectively. By combining RSKG and ConceptNet, KTIR improves the performance of the baseline model even further, by 1.14%. The GPT-4-augmented KTIR underperforms KTIR with knowledge graphs. This is likely due to generative models hallucinating irrelevant sentences about the content, along with their lack of stability and reproducibility during the generation process, which might pose challenges to the retrieval task. Comparing the three knowledge graphs as knowledge sources, KTIR with the combination of ConceptNet and RSKG outperforms the other two sources in isolation, suggesting the effectiveness of balancing external concepts and domain-specific knowledge. According to the experimental results, KTIR is not limited to a specific knowledge source and exhibits scalability to larger knowledge graphs. However, there is a trade-off between the size of the knowledge sources and the relevance of the external knowledge, especially in the case of current remote sensing caption datasets. KTIR in the following sections denotes KTIR (Combined). Unless specified otherwise, 'Baseline' denotes BLIP (KTIR-base), which is fine-tuned on the specific dataset without external knowledge.
_The impact of external knowledge._ To further investigate the impact of external knowledge, we show the cosine similarity scores between image and text features in Fig. 5. We randomly select one image-text pair per scene category from the 21 categories of the UCM-Caption dataset. For both the baseline model and KTIR, there is a clear diagonal pattern, which means that in general the paired image and text have higher similarity scores. In the results from KTIR, the paired images and texts have relatively higher scores compared with the unpaired ones, which indicates that the embeddings become more discriminative by adding external knowledge.

The t-SNE [49] visualizations of the text features from the baseline model and the proposed KTIR are shown in Fig. 6. We randomly selected 10 scene categories and visualized all the text embeddings from the text set of the UCM-Caption dataset. After adding external knowledge, we observe three characteristics: 1) Within categories, the features become more compact, probably because the knowledge added to similar scenes is similar (see the orange example of _river_). 2) There are more sub-centers in each category, which indicates that by adding knowledge the model also learns to identify small differences within categories (see the purple example of _tenniscourt_). 3) Semantically similar categories get closer in the embedding space, for example, _runway_ and _airplane_, _golfcourse_ and _forest_, _overpass_ and _runway_, _etc._ This indicates that by integrating external knowledge, the model learns a more representative semantic space for the remote sensing domain.

Fig. 5: The cosine similarity scores for image-text retrieval of 21 image and text pairs from the UCM-Caption dataset, sampled from different scene categories. The horizontal axis represents the text index, and the vertical axis represents the image index.

Fig. 6: The t-SNE visualization of the text features of the baseline and of the proposed KTIR method on the UCM-Caption dataset. For visualization, we randomly select 10 scene categories out of the 21 categories in total.

### _Results on Remote Sensing Benchmarks_

_UCM-Caption (Table IV)._ Compared with supervised learning methods, pretraining-finetuning methods have better performance, which indicates the effectiveness of the latter learning strategy. KTIR gains 10.89% on mR over the knowledge-aware baseline KCR, confirming the superior representation ability brought by the pretraining stage. mR shows a 1.14% increase over BLIP due to the introduction of relevant external information from knowledge sources. These performance improvements indicate that adding external knowledge also helps bridging the gap between natural images and remote sensing images.

_RSICD (Table V)._ KTIR achieves the best performance over all the competitors. With equivalent or relatively smaller backbones, KTIR outperforms the state-of-the-art methods (by 6.34% on mR with respect to the best-performing methods). Note that, compared to GeoRSCLIP, which was pre-trained on 5M remote sensing images and fine-tuned on the RSICD dataset, the proposed KTIR is only fine-tuned on the RSICD dataset. Compared with the BLIP (KTIR-base) model, adding knowledge improves the results by 1.28% on mR, which also indicates the effectiveness of our approach: the knowledge related to specific image content may help bridge the gap between pretraining data and fine-tuning data.
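The similarity and embedding analyses above (Figs. 5 and 6) can be reproduced with a short sketch, assuming the image and text features have already been extracted by the trained encoders (the arrays below are hypothetical).

```python
import numpy as np
from sklearn.manifold import TSNE

def cosine_matrix(f_img, f_txt):
    """Pairwise cosine similarities between image and text features (as in Fig. 5)."""
    a = f_img / np.linalg.norm(f_img, axis=1, keepdims=True)
    b = f_txt / np.linalg.norm(f_txt, axis=1, keepdims=True)
    return a @ b.T    # (N, N); the diagonal holds the paired image-text scores

def embed_2d(f_txt):
    """2D t-SNE layout of the text features, as used for Fig. 6."""
    return TSNE(n_components=2).fit_transform(f_txt)
```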
_RSITMD (Table VI)._ Images in the RSITMD dataset have more diverse and fine-grained captions than those in the RSICD dataset. KTIR shows competitive performance on both text-image retrieval and image-text retrieval, leading to an overall improvement on mR of 2.46% with respect to GeoRSCLIP, and larger gaps with respect to all the other baselines. In terms of text-image retrieval, KTIR outperforms all the other comparison methods. For image-text retrieval, GeoRSCLIP has a better performance than KTIR in terms of R@10. A potential explanation is the insufficient training of the image encoder, as we start from weights pre-trained on natural images and fine-tune the model directly on a specific remote sensing dataset. The performance gain of KTIR over the baseline model (0.88%) is also smaller than the performance gain on the RSICD dataset (1.28%), which may be due to the fact that the descriptions in the RSITMD dataset are more diverse than in the RSICD dataset, so that there is less benefit from adding external knowledge.

### _Qualitative Analysis_

Some visual examples of retrieval are shown in Fig. 7: KTIR not only has higher retrieval performance but also orders the candidate captions or images better in the ranking. In the text-image examples, the retrieved images are more reasonable. Similarly, in the image-text example, integrating external information redirects the model's attention towards a more prominent part of the image. We also perform an open-set text-image retrieval experiment, where some concepts in the caption are not seen by the model during training. Results are shown in Fig. 8. By integrating external knowledge, the model can generalize well to unseen captions. When encountering unseen concepts (_e.g._, _sunshade_ and _walk_), KTIR links them with known objects based on commonsense knowledge, while the baseline fails to retrieve reasonable results.

### _Ablations and Parameter Analysis_

We design ablation studies to provide further insights into the proposed KTIR method, including an analysis of the model architecture and of the loss functions. The results are shown in Table VII. Starting from the image encoder (\\(\\mathrm{ViT}\\)) and the text-only mode of the text encoder (\\(\\mathrm{BERT}_{text}\\)) with the knowledge-aware contrastive loss (\\(\\mathcal{L}_{con}\\)) (row 1), we add the multimodal mode of the text encoder (\\(\\mathrm{BERT}_{multi}\\)) and the knowledge-aware matching loss (\\(\\mathcal{L}_{mat}\\)) (row 2). Then we add the cross-attention layer to fuse the captions and knowledge (row 3). The impact of the cross-attention layer can be seen by comparing rows 2 and 3: by using a cross-attention mechanism to fuse the knowledge embedding and text embedding, the model slightly improves its performance (0.55% on mR). The model gains 1.05% by using the two losses with respect to the contrastive objective only, indicating that knowledge emphasizes the effectiveness of the text-image matching objective proposed in BLIP. This may be due to the lack of scene and description diversity in the specific dataset; in this case, adding additional pairing information helps model training.

To study the impact of the loss weights (Eq. 10), we tested the model with different values of \\((w_{1},w_{2})\\): \\((0.5,1)\\), \\((1,0.5)\\) and \\((1,1)\\). The results in Table VIII suggest that the best performance is achieved when the two losses have the same weights (\\(w_{1}=w_{2}=1\\)).
When \\(w_{1}>w_{2}\\), the importance of the contrastive loss component increases, leading to a 0.63% improvement in image-text retrieval performance. This indicates the impact of the contrastive objective in retrieving the right text based on the image content. When \\(w_{2}>w_{1}\\), the performance of text-image retrieval improves instead (mR\\({}_{t2i}\\) increases by 0.07%). This suggests that the matching objective plays a significant role in retrieving images based on text descriptions. These findings also align with the trends observed in the loss ablation results (see rows 1 and 2 in Table VII).

The number of triplets (\\(m\\)) used during retrieval is an important parameter of KTIR: it controls the proportion of information extracted from external knowledge sentences and fed into the final text. We varied \\(m\\) over 1, 3, 5, 7 and 10 to study the impact of this parameter on the UCM-Caption dataset in Table IX. We randomly select \\(m\\) triplets from the triplet lists to generate the knowledge sentences if the total number of triplets is larger than \\(m\\); otherwise, we keep the original number of triplets. The results show that the balance between original information and external knowledge is achieved at \\(m=5\\): the mR improves by 2.22% compared to \\(m=1\\) on image-text retrieval (\\(i2t\\)) and by 0.98% overall. The opposite behaviour is observed for text-image retrieval (\\(t2i\\)): as the number of triplets rises from 1 to 5, the effectiveness of text-image retrieval decreases. This suggests that as text representations become more varied, the task of retrieving the same image from various texts becomes more challenging, whereas discerning texts based on images becomes simpler. When \\(m>5\\), the performance of image-text retrieval drops more dramatically than that of text-image retrieval. This is likely due to the fact that, as the number of knowledge sentences increases, the information from the caption contributes less to the final text representation and distinguishing text descriptions becomes more challenging.

Fig. 7: Examples of KTIR's and the baseline model (BLIP)'s top-5 retrieval results on the UCM-Caption dataset for both text-image retrieval and image-text retrieval. Texts or images in green boxes are predictions that correspond to the ground truth. Texts or images in yellow boxes are not annotated as the ground truth but are regarded as reasonable predictions according to human evaluation. Texts or images in red boxes are semantically incorrect retrieval results. Note that retrieved images or texts matching all the semantic concepts are considered reasonable results, while results missing one or more semantic concepts are considered incorrect.

Fig. 8: Open-set text-image retrieval on the UCM-Caption dataset. For each example, the text query is on the left and the top-1 image retrieval results of the baseline model and KTIR are shown on the right. Concepts unseen during training are marked in **bold**.

### _Triplet selection methods_

Selecting knowledge triplets at random could be sub-optimal. To understand the impact of different triplet selection methods, we tested KTIR with three approaches:

* **Relevance to the caption**. We first measure the semantic similarity between the triplets and their corresponding caption using a pre-trained sentence BERT model [50].
To maximize the use of relevant triplets to expand the caption's content, we then rank the triplets in reverse order based on the similarity scores and select the top \\(m\\) ones, _i.e._, those with the largest semantic distance from the caption.

* **Diversity among triplets**. We randomly select the first triplet, and then measure the semantic relevance score between the remaining triplets and the selected one. To maximize diversity among the selected triplets, we then rank the remaining ones in reverse order based on the semantic relevance scores and choose the top \\(m\\) ones.
* **Random selection**. We shuffle the retrieved triplet list and randomly select \\(m\\) triplets from all the possible ones.

We use the above triplet selection methods to select triplets and train the model separately on the UCM-Caption dataset. The experimental results are reported in Table X. Although relevance is the basis of the knowledge-aware method to retrieve and extract information from knowledge sources, we observe that an active selection of the knowledge triplets does not seem to improve the results numerically: random selection yields the best overall performance (gains of 0.65% on average in terms of mR). On one hand, the success of random selection may be attributed to the fact that the exploration of the knowledge graph is limited, since knowledge retrieval is restricted to the one-step neighborhood on the graph; therefore, even with random selection, the knowledge triplets maintain a high semantic relevance to the captions. On the other hand, random selection might introduce more triplet combinations and a higher error tolerance in the retrieval process, which could also benefit the overall effectiveness. These results also encourage future research on strategies to explore knowledge sources that balance semantic relevance and information diversity.

### _Backbone Analysis_

To further demonstrate the effectiveness of external knowledge, we integrate knowledge into different backbone models, including CLIP [31] and BLIP [30], on the UCM-Caption dataset. For the CLIP-based model, we only do fast knowledge integration (combining the captions with the corresponding knowledge sentences as text inputs), since there is no cross-attention module in the text encoder. Note that because of these architecture differences, the knowledge-aware text-image matching loss (\\(\\mathcal{L}_{mat}\\)) cannot be used in the CLIP-based model. The results on UCM-Caption in Fig. 9 show that adding external knowledge leads to better performance for both backbones, which further indicates the generalizability of knowledge-aware models. Moreover, the results also suggest that domain-specific knowledge facilitates the adaptation of general pre-trained models to the remote sensing domain.

## VI Conclusion

Retrieving remote sensing images from text queries is appealing but complex, since retrieval needs to be both visual and semantic. To address the information asymmetry between images and texts, we propose a Knowledge-aware Text-Image Retrieval (KTIR) method. By integrating relevant information from external knowledge sources, the model enriches the text scope and alleviates potential ambiguities to better match texts and images. Extensive experiments on three datasets show that KTIR outperforms all competitors and creates a representative semantic space for remote sensing images.
Supportive analyses further demonstrate the effectiveness and the potential generalization capabilities of knowledge-aware methods to unseen concepts and various backbones. We hope these results will encourage future research to go beyond the world of pixels and embrace new sources of knowledge towards better image retrieval systems.

Fig. 9: Comparison of using CLIP [31] and BLIP [30] as base models on the UCM-Caption dataset. mR is reported in the figure.

## References

* [1] W. Zhou, S. Newsam, C. Li, and Z. Shao, "PatternNet: A benchmark dataset for performance evaluation of remote sensing image retrieval," _ISPRS Journal of Photogrammetry and Remote Sensing_, vol. 145, pp. 197-209, 2018.
* [47] X. Huang and Y. Peng, "Deep cross-media knowledge transfer," in _CVPR_, 2018, pp. 8837-8846.
* [48] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman, S. Anadkat _et al._, "GPT-4 technical report," _arXiv preprint arXiv:2303.08774_, 2023.
* [49] L. Van der Maaten and G. Hinton, "Visualizing data using t-SNE," _Journal of Machine Learning Research_, vol. 9, no. 11, 2008.
* [50] N. Reimers and I. Gurevych, "Sentence-BERT: Sentence embeddings using siamese BERT-networks," in _EMNLP_, 2019, pp. 671-688.
Image-based retrieval in large Earth observation archives is challenging because one needs to navigate across thousands of candidate matches with only the query image as a guide. By using text as information supporting the visual query, the retrieval system gains in usability, but at the same time faces difficulties due to the diversity of visual signals that cannot be summarized by a short caption only. For this reason, as a matching-based task, cross-modal text-image retrieval often suffers from information asymmetry between texts and images. To address this challenge, we propose a Knowledge-aware Text-Image Retrieval (KTIR) method for remote sensing images. By mining relevant information from an external knowledge graph, KTIR enriches the text scope available in the search query and alleviates the information gaps between texts and images for better matching. Moreover, by integrating domain-specific knowledge, KTIR also enhances the adaptation of pre-trained vision-language models to remote sensing applications. Experimental results on three commonly used remote sensing text-image retrieval benchmarks show that the proposed knowledge-aware method leads to varied and consistent retrievals, outperforming state-of-the-art retrieval methods.

Index Terms: Text-Image Retrieval, Remote Sensing, Knowledge Graph
# Pixel-wise Agricultural Image Time Series Classification: Comparisons and a Deformable Prototype-based Approach

Elliot Vincent ([email protected]), LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, France; Inria Paris

Jean Ponce, Department of Computer Science, Ecole normale superieure (ENS-PSL, CNRS, Inria); Courant Institute of Mathematical Sciences and Center for Data Science, New York University

Mathieu Aubry, LIGM, Ecole des Ponts, Univ Gustave Eiffel, CNRS, France

## 1 Introduction

With risks of food supply disruptions, constantly increasing energy needs, population growth and climate change, the threats faced by global agricultural production are numerous (Prosekov and Ivanova, 2018; Mbow et al., 2019). Monitoring crop yield, controlling plant health and growth, and optimizing crop rotations are among the essential tasks to be carried out at both national and global scales. Because regular ground-based surveys are challenging, remote sensing has very early on appeared as the most practical tool (Justice and Becker-Reshef, 2007). Thanks to public and commercial satellite launches such as ESA's Sentinel constellation (Drusch et al., 2012; Aschbacher et al., 2017), NASA's Landsat (Woodcock et al., 2008) or Planet's PlanetScope constellation (Boshuizen et al., 2014; Team, 2017), Earth observation is now possible at both high temporal frequency and moderate spatial resolution, typically in the range of 10m/pixel. Sensed data can thus be processed to form satellite image time series (SITS) for further analysis either at the image or pixel level. In particular, several recent agricultural SITS datasets (Kondmann et al., 2021; Kondmann et al., 2022; Weikmann et al., 2021; Garnot and Landrieu, 2021; Russwurm et al., 2020) make such data available to the machine learning community, mainly for improving crop type classification.

In this paper, we focus on methods approaching SITS segmentation as multivariate time series classification (MTSC) by considering multi-spectral pixel sequences as the data to classify. While this excludes whole series-based methods like those of Garnot and Landrieu (2021) or Tarasiou et al. (2023), which explicitly leverage the extent of individual parcels, it enables us to extensively evaluate more general MTSC methods that have not yet been applied to agricultural SITS classification. We give particular attention to unsupervised methods as well as interpretability, which we believe would be appealing for extending results beyond well-annotated geographical areas.

Our contributions are twofold. First, we benchmark MTSC approaches on four recent SITS datasets (Kondmann et al., 2021, 2022; Weikmann et al., 2021; Garnot and Landrieu, 2021) (Sections 4.1 and 4.3). State-of-the-art supervised methods (Garnot and Landrieu, 2020; Tang et al., 2022; Zhang et al., 2020) are typically complex and require vast amounts of labeled data, i.e., time series with accurate crop labels. We show that, while they provide strong accuracy boosts over more traditional methods like Random Forest or Support Vector Machine classifiers on datasets with limited domain gap between train and test data, they do not improve over the simple nearest centroid classification baseline on the more challenging DENETHOR (Kondmann et al., 2021) dataset (Section 5.1).
In the unsupervised setting, K-means clustering (MacQueen, 1967) and its variants (Petitjean et al., 2011; Zhang et al., 2014) using the dynamic time warping (DTW) measure - instead of the Euclidean distance - are the strongest baselines (Rivera et al., 2020) (Section 5.2).

Figure 1: **Reconstructing pixel sequences from satellite image time series (SITS) through learned prototypes and transformations.** Given a SITS (a), we reconstruct pixel-wise multi-spectral sequences using learned prototypes and transformations. Here, we show the RGB and IR spectral intensities over time for a corn (\\(\\mathscr{P}\\)) and a wheat (\\(\\mathscr{F}\\)) pixel sequence (b), along with their corresponding prototypes before (c) and after (d) transformation.

Second, we design a transformation module corresponding to time warping, which enables us to adapt deep transformation-invariant (DTI) clustering (Monnier et al., 2020) to SITS classification and to improve on the nearest centroid classifier (Cover and Hart, 1967). We refer to our method as DTI-TS. While deep unsupervised methods for SITS classification typically rely either on representation learning or pseudo-labeling, our method learns deformable prototypical sequences (Figure 1) by optimizing a reconstruction loss (Section 3). Our prototypes are learned multivariate time series, typically representing a type of crop, and they can be deformed to model intra-class variabilities. DTI-TS can be trained with or without supervision. In the unsupervised case, we achieve the best scores on all studied datasets by adding spectro-temporal invariance to K-means clustering (MacQueen, 1967). In the supervised case, our model can be seen as an extension of the nearest centroid classifier (Cover and Hart, 1967). In the low data regime, _i.e._, with few labeled image time series, or when there is a temporal domain shift between train and test data, we outperform all competing methods.

## 2 Related Work

We first review methods specifically designed for agricultural SITS classification, which are typically supervised and may take as input complete images or individual pixel sequences. When each pixel sequence is considered independently, SITS classification can be seen as a specific case of MTSC, for which both supervised and unsupervised approaches exist, which we review next. Finally, we review transformation-invariant prototype-based classification approaches, which we extend to SITS classification in this paper.

**Crop classification with satellite image time series.** Crop classification has historically been achieved at the pixel level, applying traditional machine learning approaches - such as support vector machines or random forests - to vegetation indices like the normalized difference vegetation index (NDVI) (Zheng et al., 2015; Li et al., 2020; Gao et al., 2021). Numerous studies (Kussul et al., 2017; Zhong et al., 2019; Russwurm and Korner, 2020) now show that in most cases, deep learning methods exhibit superior performance. Deep networks for SITS classification take either individual pixel sequences (Belgiu and Csillik, 2018; Garnot et al., 2020; Garnot and Landrieu, 2020; Blickensdorfer et al., 2022) or series of images (Pelletier et al., 2019; Garnot and Landrieu, 2021; Russwurm et al., 2023; Mohammadi et al., 2023; Tarasiou et al., 2023) as input.
While treating images as a whole may undeniably improve pattern learning for classification, as the model can access spatial context information, we focus our work on pixel sequences, which allows us to present a simpler and less restrictive framework that can generalize better to various forms of input data.

**Multivariate time series classification.** Methods achieving MTSC can be divided into two sub-groups: whole series-based techniques and feature-based techniques. Whole series-based methods include nearest-neighbor search - where the closest neighbor is computed either with the Euclidean distance (Cover and Hart, 1967) or with DTW (Sakoe and Chiba, 1978; Shokoohi-Yekta et al., 2015) - and prototype-based approaches that model a template for each class of the dataset (Seto et al., 2015; Shapira Weber et al., 2019) and classify an input at inference by assigning it to the nearest prototype. Though often simple and intuitive, these methods struggle with in-class temporal distortions or handle them at a high computational cost. Feature-based classifiers include bag-of-patterns methods (Schafer, 2015; Schafer and Leser, 2017), shapelet-based techniques (Lines et al., 2012; Bostrom and Bagnall, 2015) and deep encoders like 1D-convolutional neural networks (1D-CNNs) (Tang et al., 2022; Ismail Fawaz et al., 2020) or Long Short-Term Memory (LSTM) networks (Ienco et al., 2017; Karim et al., 2019; Zhang et al., 2020). These approaches allow discriminative features to be automatically extracted from the data, but they might be more susceptible to overfitting and tend to be less straightforward to interpret. Instead, our method mixes the best of both worlds by learning prototypes along with their deformations as parameters of a deep network, in order to efficiently align them with a given input.

**Unsupervised multivariate time series classification.** The classical approach to multivariate time series clustering is to apply K-means (MacQueen, 1967) to the raw time series. This algorithm splits a collection of samples into K clusters by jointly optimizing K centroids (centroid step) and the assignment of each data point to the closest centroid (assignment step). DTW has been shown to improve upon K-means for time series clustering in the particular case of SITS (Zhang et al., 2014; Petitjean et al., 2011). DTW is used during both steps of K-means: the centroids are updated as the DTW-barycenter averages of the newly formed clusters and the assignment is performed under DTW. Approaches to multivariate time series clustering often work on improving the representation used by K-means. Methods either extract hand-crafted features (Wang et al., 2005; Rajan and Rayner, 1995; Petitjean et al., 2012) or apply principal component analysis (Li, 2019; Singhal and Seborg, 2005). In Petitjean et al. (2012), mean-shift (Comaniciu and Meer, 2002) is used to segment the image into potential individual crops, and the K-means features are the means of the spectral bands and the smoothness, area and elongation of the obtained segments. Kalinicheva et al. (2020) reproduce this multi-step scheme but instead (i) apply mean-shift segmentation to a feature map encoded by a 3D spatio-temporal deep convolutional autoencoder, (ii) take the median of the spectral bands over a segment as a feature representation, and (iii) use hierarchical clustering to classify each segment.
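To make the DTW-based variant of K-means discussed above concrete, here is a minimal sketch. It is neither an efficient nor a faithful implementation: the centroid update uses a plain Euclidean mean rather than DTW-barycenter averaging (DBA), the DTW recursion has no band constraint, and all series are assumed to have the same length.

```python
import numpy as np

def dtw(a, b):
    """DTW distance between two multivariate series a (T1, C) and b (T2, C).

    A minimal O(T1*T2) dynamic program; practical implementations add a
    Sakoe-Chiba band and other speed-ups."""
    T1, T2 = len(a), len(b)
    D = np.full((T1 + 1, T2 + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, T1 + 1):
        for j in range(1, T2 + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[T1, T2]

def kmeans_dtw(X, K, n_iter=10, rng=np.random.default_rng(0)):
    """K-means with DTW assignment. X: (N, T, C) array of pixel sequences.

    The centroid step below is a plain mean, a simplification of DBA."""
    centroids = X[rng.choice(len(X), K, replace=False)].astype(float)
    for _ in range(n_iter):
        # Assignment step: each sample goes to its nearest centroid under DTW.
        assign = np.array([np.argmin([dtw(x, c) for c in centroids]) for x in X])
        # Centroid step (simplified).
        for k in range(K):
            if np.any(assign == k):
                centroids[k] = X[assign == k].mean(axis=0)
    return centroids, assign
```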
Other deep learning approaches that perform unsupervised classification of time series either use pseudo-labels to train neural networks in a supervised fashion (Guo et al., 2022; Iounousse et al., 2015) or focus on learning deep representations on which clustering can be performed with standard algorithms (Franceschi et al., 2019; Tonekaboni et al., 2021). Deep Temporal Iterative Clustering (DTIC) (Guo et al., 2022) iteratively trains a TempCNN (Pelletier et al., 2019) with pseudo-labeling and performs K-means on the learned features to update the pseudo-labels. Methods that perform deep unsupervised representation learning and clustering simultaneously (Caron et al., 2018; YM. et al., 2020) are promising for time series classification. Although some recent works (Franceschi et al., 2019; Tonekaboni et al., 2021) train supervised classifiers using such features learned on temporal data, to the best of our knowledge, no method designed for time series performs classification in a fully unsupervised manner.

**Transformation-invariant prototype-based classification.** The DTI framework (Monnier et al., 2020) jointly learns prototypes and prototype-specific transformations for each sample. The prototypes belong to the input space, and their pixel values (in the case of 2D images) or their point coordinates (in the case of 3D point clouds) are free parameters learned while training the model. Each prototype is associated with its own specific transformation network, which predicts transformation parameters for every sample and thus enables the prototype to better reconstruct them. The resulting models can be used for downstream tasks such as classification (Monnier et al., 2020; Loiseau et al., 2022), few-shot segmentation (Loiseau et al., 2021) and multi-object instance discovery (Monnier et al., 2021), and can be trained with or without supervision. To the best of our knowledge, the DTI framework has never been applied to the case of time series, for which classifiers need to be invariant to some temporal distortions. Previous works bypass this concern by using DTW to compare the samples to be classified (Petitjean et al., 2011; Seto et al., 2015) or by applying a transformation field to a selection of control points to distort the time series. Specific to agricultural time series, Nyborg et al. (2022) leverage the fact that temperature is the main factor of temporal variations and use thermal positional encoding of the temporal dimension to account for temperature changes from one year (or location) to another. We use the DTI framework to instead learn the alignment of samples to the prototypes. Shapira Weber et al. (2019) explore a similar idea for generic univariate time series, but, to the best of our knowledge, our paper is the first to perform both supervised and unsupervised transformation-invariant classification for agricultural satellite time series.

Figure 2: **Overview of DTI-TS.** Our method reconstructs a pixel-wise multi-spectral input sequence, extracted from a SITS, thanks to a prototype to which a time warping and an offset are successively applied. The parameters of these transformations are input-dependent and prototype-specific. The functions \(g_{1:K}\) predicting the parameters of the transformations and the prototypes \(\mathbf{P}_{1:K}\) can be learned with or without supervision.

## 3 Method

In this section, we explain how we adapt the DTI framework (Monnier et al., 2020) to pixel-wise SITS classification. First, we explain our model and network architecture (Sec. 3.1).
Second, we present our training losses in the supervised and unsupervised cases and give implementation and optimization details (Sec. 3.2). We refer to our method as DTI-TS.

**Notation.** We use bold letters for multivariate time series (e.g., \(\mathbf{a}\), \(\mathbf{A}\)), brackets \([.]\) to index time series dimensions, and we write \(a_{1:N}\) for the set \(\{a_{1},\ldots,a_{N}\}\).

### Model

**Overview.** An overview of our model is presented in Figure 2. We consider a pixel time series \(\mathbf{x}\) in \(\mathbb{R}^{T\times C}\) of temporal length \(T\) with \(C\) spectral bands, and we reconstruct it as a transformation of a prototypical time series. We will consider a set of \(K\) prototypical time series \(\mathbf{P}_{1:K}\), each one being a time series \(\mathbf{P}_{k}\in\mathbb{R}^{T\times C}\) of the same size as \(\mathbf{x}\) and each intuitively corresponding to a different crop type. We consider a family of multivariate time series transformations \(\mathcal{T}_{\beta}:\mathbb{R}^{T\times C}\longrightarrow\mathbb{R}^{T\times C}\) parametrized by \(\beta\). Our main assumption is that we can faithfully reconstruct the sequence \(\mathbf{x}\) by applying to a prototype \(\mathbf{P}_{k}\) a transformation \(\mathcal{T}_{g_{k}(\mathbf{x})}\) with some input-dependent and prototype-specific parameters \(g_{k}(\mathbf{x})\). We denote by \(\mathbf{R}_{k}(\mathbf{x})\in\mathbb{R}^{T\times C}\) the reconstruction of the time series \(\mathbf{x}\) obtained using a specific prototype \(\mathbf{P}_{k}\) and the prototype-specific parameters \(g_{k}(\mathbf{x})\): \[\mathbf{R}_{k}\big{(}\mathbf{x}\big{)}=\mathcal{T}_{g_{k}(\mathbf{x})}\big{(}\mathbf{P}_{k}\big{)}. \tag{1}\] Intuitively, a prototype corresponds to a type of crop (wheat, oat, etc.), and a given input should be best reconstructed by the prototype of the corresponding class. For this reason, we want the transformations to only account for intra-class variability, which requires defining an adapted transformation model.

**Transformation model.** We design a transformation model specific to SITS, based on two transformations: an offset along the spectral dimension and a time warping. The 'offset' transformation allows the prototypes to be shifted in the spectral dimension to best reconstruct a given input time series (Figure 3a). More formally, the deformation with parameters \(\beta^{\text{offset}}\) in \(\mathbb{R}^{C}\) applied to a prototype \(\mathbf{P}\) can be written as: \[\mathcal{T}_{\beta^{\text{offset}}}^{\text{offset}}\big{(}\mathbf{P}\big{)}=\beta^{\text{offset}}+\mathbf{P}, \tag{2}\] where the addition is to be understood channel-wise. The 'time warping' deformation aims at modeling intra-class temporal variability (Figure 3b) and is defined using a thin-plate spline (Bookstein, 1989) transformation along the temporal dimension of the time series. More formally, we start by defining a set of \(M\) uniformly spaced landmark time steps \((t_{1},\ldots,t_{M})^{\top}\). Given \(M\) target shifts \(\beta^{\text{tw}}=(\beta_{1}^{\text{tw}},\ldots,\beta_{M}^{\text{tw}})^{\top}\), we denote by \(h_{\beta^{\text{tw}}}\) the unique 1D thin-plate spline that maps each \(t_{m}\) to \(t_{m}^{\prime}=t_{m}+\beta_{m}^{\text{tw}}\).
Now, given an input pixel time series \(\mathbf{x}\) and \(\beta^{\text{tw}}\in\mathbb{R}^{M}\), we define the time warping deformation applied to a prototype \(\mathbf{P}\) as: \[\mathcal{T}_{\beta^{\text{tw}}}^{\text{tw}}\big{(}\mathbf{P}\big{)}[t]=\mathbf{P}\big{[}h_{\beta^{\text{tw}}}(t)\big{]}, \tag{3}\] for \(t\in[1,T]\). Note that the offset is time-independent and that the time warping is channel-independent. To define our full transformation model, we compose these two transformations, which leads to the reconstructions: \[\mathbf{R}_{k}\big{(}\mathbf{x}\big{)}=\mathcal{T}_{\beta^{\text{offset}}}^{\text{offset}}\circ\mathcal{T}_{\beta^{\text{tw}}}^{\text{tw}}\big{(}\mathbf{P}_{k}\big{)},\,\text{with }(\beta^{\text{offset}},\beta^{\text{tw}})=g_{k}(\mathbf{x}). \tag{4}\]

**Architecture.** The prototypes are multivariate time series whose values in all channels and for all time stamps are free parameters learned through the optimization of a training objective (see Section 3.2). We implement the functions \(g_{1:K}\) predicting the transformation parameters as a neural network composed of a shared encoder, for which we use the convolutional network architecture proposed by Wang et al. (2017), and a final linear layer with \(K\times(C+M)\) outputs followed by the hyperbolic tangent (tanh) function as the activation layer. We interpret this output as \(K\) sets of \((C+M)\) parameters for the transformations of the \(K\) prototypes. By design, these transformation parameters take values in \([-1,1]\). This is appropriate for the offset transformation, since we normalize the time series before processing, but not for the time warping. We thus scale the outputs of the network corresponding to the time warping parameters so that the maximum shift of the landmark time steps corresponds to one week. We choose \(M\) for each dataset so that we have a landmark time step every month. In the supervised case, we choose \(K\) equal to the number of crop classes in each dataset, and we set \(K\) to 32 in the unsupervised case.
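To make this architecture concrete, the following is a minimal PyTorch sketch of the reconstruction model of Equations (1)-(4). It is an illustration under stated assumptions rather than our exact implementation: the 1D thin-plate spline \(h_{\beta^{\text{tw}}}\) is approximated by piecewise-linear interpolation of the landmark shifts, a small convolutional encoder stands in for the Wang et al. (2017) architecture, and all class and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DTITSSketch(nn.Module):
    """Illustrative model for Eqs. (1)-(4): K learnable prototypes and a
    shared encoder predicting, per prototype, C offset and M warp parameters."""

    def __init__(self, K, T, C, M, max_shift=7.0):
        super().__init__()
        self.K, self.T, self.C, self.M = K, T, C, M
        self.max_shift = max_shift  # cap on landmark shifts (one week)
        # P_{1:K}; initialized with NCC / K-means centroids in practice
        self.prototypes = nn.Parameter(torch.zeros(K, T, C))
        # Stand-in for the Wang et al. (2017) convolutional encoder
        self.encoder = nn.Sequential(
            nn.Conv1d(C, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten())
        self.head = nn.Linear(64, K * (C + M))
        nn.init.zeros_(self.head.weight)  # zero decoders: the predicted
        nn.init.zeros_(self.head.bias)    # transformations start as identity

    def forward(self, x):  # x: (B, T, C) -> reconstructions (B, K, T, C)
        B = x.shape[0]
        params = torch.tanh(self.head(self.encoder(x.transpose(1, 2))))
        params = params.view(B, self.K, self.C + self.M)
        offset = params[..., :self.C]                   # beta^offset in [-1, 1]
        shifts = params[..., self.C:] * self.max_shift  # beta^tw (M landmarks)
        # Piecewise-linear surrogate for the thin-plate spline h_{beta^tw}:
        # interpolate the M landmark shifts to all T time steps.
        shift_t = F.interpolate(shifts.reshape(B * self.K, 1, self.M),
                                size=self.T, mode='linear', align_corners=True)
        t = torch.arange(self.T, device=x.device, dtype=x.dtype)
        pos = (t + shift_t.view(B, self.K, self.T)).clamp(0, self.T - 1)
        lo = pos.floor().long()                         # sample P_k at h(t)
        hi = pos.ceil().long()                          # with linear interp.
        w = (pos - pos.floor()).unsqueeze(-1)
        P = self.prototypes.unsqueeze(0).expand(B, -1, -1, -1)
        take = lambda i: P.gather(2, i.unsqueeze(-1).expand(-1, -1, -1, self.C))
        warped = (1 - w) * take(lo) + w * take(hi)      # Eq. (3)
        return warped + offset.unsqueeze(2)             # Eqs. (2) and (4)
```

Calling `model(x)` on a batch of shape `(B, T, C)` returns the \(K\) candidate reconstructions \(\mathbf{R}_{1:K}(\mathbf{x})\) of Equation (1); the losses defined next select or weight these candidates.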
### Losses and training

We learn the prototypes \(\mathbf{P}_{1:K}\) and the deformation prediction networks \(g_{1:K}\) by minimizing a mean loss on a dataset of \(N\) multivariate pixel time series \(\mathbf{x}_{1:N}\). We define this loss below in the supervised and unsupervised scenarios.

**Unsupervised case.** In this scenario, our loss is composed of two terms. The first one is a reconstruction loss and corresponds to the mean squared error between the input time series and the transformed prototype that best reconstructs it, over all pixels \(\mathbf{x}\) of the studied dataset: \[\mathcal{L}_{\text{rec}}(\mathbf{P}_{1:K},g_{1:K})=\frac{1}{NTC}\sum_{i=1}^{N}\min_{k}\Big{|}\Big{|}\mathbf{x}_{i}-\mathbf{R}_{k}(\mathbf{x}_{i})\Big{|}\Big{|}_{2}^{2}. \tag{5}\] The second loss is a regularization term, which prevents high frequencies in the learned prototypes. Indeed, the time warping module allows interpolations between prototype values at consecutive time steps \(t\) and \(t+1\), and our network could thus use temporal shifts together with high frequencies in the prototypes to obtain better reconstructions. To avoid these unwanted high-frequency artifacts, we add a total variation regularization (Rudin et al., 1992): \[\mathcal{L}_{\text{tv}}(\mathbf{P}_{1:K})=\frac{1}{K(T-1)C}\sum_{k=1}^{K}\sum_{t=1}^{T-1}\Big{|}\Big{|}\mathbf{P}_{k}[t+1]-\mathbf{P}_{k}[t]\Big{|}\Big{|}_{2}. \tag{6}\] The full training loss without supervision is thus: \[\mathcal{L}_{\text{unsup}}(\mathbf{P}_{1:K},g_{1:K})=\mathcal{L}_{\text{rec}}(\mathbf{P}_{1:K},g_{1:K})+\lambda\mathcal{L}_{\text{tv}}(\mathbf{P}_{1:K}), \tag{7}\] with \(\lambda\) a scalar hyperparameter set to \(1\) in all our experiments.

Figure 3: **Prototype deformations.** We show the visual interpretations of our time series deformations. The offset deformation is time-independent and performed on each spectral band separately. On the other hand, the time warping is channel-independent and achieved by translating landmark time steps, allowing targeted temporal adjustments.

**Supervised case.** In the supervised scenario, we choose \(K\) as the true number of classes in the studied dataset, and set a one-to-one correspondence between each prototype and one class. We leverage this knowledge of the class labels to define two losses. Let \(y_{i}\in\{1,\ldots,K\}\) be the class label of input pixel \(\mathbf{x}_{i}\). First, a reconstruction loss similar to (5) penalizes the mean squared error between an input and its reconstruction using the true-class prototype: \[\mathcal{L}_{\text{rec\_sup}}(\mathbf{P}_{1:K},g_{1:K})=\frac{1}{NTC}\sum_{i=1}^{N}\Big{|}\Big{|}\mathbf{x}_{i}-\mathbf{R}_{y_{i}}(\mathbf{x}_{i})\Big{|}\Big{|}_{2}^{2}. \tag{8}\] Second, in order to boost the discriminative power of our model, we add a contrastive loss (Loiseau et al., 2022) based on the reconstruction error: \[\mathcal{L}_{\text{cont}}(\mathbf{P}_{1:K},g_{1:K})=-\frac{1}{N}\sum_{i=1}^{N}\log\Bigg(\frac{\exp\big(-\big|\big|\mathbf{x}_{i}-\mathbf{R}_{y_{i}}(\mathbf{x}_{i})\big|\big|_{2}^{2}\big)}{\sum_{k=1}^{K}\exp\big(-\big|\big|\mathbf{x}_{i}-\mathbf{R}_{k}(\mathbf{x}_{i})\big|\big|_{2}^{2}\big)}\Bigg). \tag{9}\] We also use the same total variation regularization as in the unsupervised case, and the full training loss under supervision is: \[\mathcal{L}_{\text{sup}}(\mathbf{P}_{1:K},g_{1:K})=\mathcal{L}_{\text{rec\_sup}}(\mathbf{P}_{1:K},g_{1:K})+\mu\mathcal{L}_{\text{tv}}(\mathbf{P}_{1:K})+\nu\mathcal{L}_{\text{cont}}(\mathbf{P}_{1:K},g_{1:K}), \tag{10}\] with \(\mu\) and \(\nu\) two hyperparameters equal to \(1\) and \(0.01\) respectively in all our experiments.

**Initialization.** The learnable parameters of our model are (i) the prototypes, (ii) the encoder and (iii) the time warping and offset decoders. We initialize our prototypes with the centroids learned by NCC (resp. K-means) in the supervised (resp. unsupervised) case. The default Kaiming He initialization (He et al., 2015) is used for the encoder, while the parameters of both decoders are set to zero. This ensures that at initialization the predicted transformations are the identity.

**Optimization.** Parameters are learned using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of \(10^{-5}\). We train our model following a curriculum modeling scheme (Elman, 1993; Monnier et al., 2020): we progressively increase the model complexity by first training without deformation, then adding the time warping deformation and finally the offset deformation. We add a transformation when the mean accuracy has not increased (in the supervised setting) or the reconstruction loss has not decreased (in the unsupervised setting) for 5 validation steps. Note that the contrastive loss is only activated at the end of the curriculum in the supervised setting.
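Below is a minimal sketch of the training losses above, under the same illustrative conventions as the previous snippet (`recon` holds the \(K\) reconstructions of a batch, `y` the true class indices); it is a direct transcription of the formulas, not our exact implementation.

```python
import torch
import torch.nn.functional as F

def rec_loss(x, recon):
    """Eq. (5): mean squared error to the best-reconstructing prototype.
    x: (B, T, C); recon: (B, K, T, C)."""
    err = ((x.unsqueeze(1) - recon) ** 2).mean(dim=(2, 3))    # (B, K)
    return err.min(dim=1).values.mean()

def tv_loss(prototypes):
    """Eq. (6): total variation over time of the prototypes (K, T, C)."""
    K, T, C = prototypes.shape
    diff = prototypes[:, 1:] - prototypes[:, :-1]
    return diff.norm(dim=-1).sum() / (K * (T - 1) * C)

def sup_losses(x, recon, y):
    """Eqs. (8) and (9): true-class reconstruction and contrastive terms.
    y: (B,) long tensor of class labels."""
    r_true = recon[torch.arange(x.shape[0]), y]               # R_{y_i}(x_i)
    rec_sup = ((x - r_true) ** 2).mean()                      # Eq. (8)
    sq_err = ((x.unsqueeze(1) - recon) ** 2).sum(dim=(2, 3))  # ||x - R_k(x)||^2
    cont = F.cross_entropy(-sq_err, y)                        # Eq. (9)
    return rec_sup, cont

# Full objectives, Eqs. (7) and (10), with lambda = mu = 1 and nu = 0.01:
#   L_unsup = rec_loss(x, recon) + tv_loss(P)
#   L_sup   = rec_sup + tv_loss(P) + 0.01 * cont
```

Note that Equation (9) is exactly a cross-entropy over the negated squared reconstruction errors, which is why `F.cross_entropy(-sq_err, y)` implements it directly.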
### Handling missing data

Our method, as presented above, is designed for uniformly sampled, constant-sized time series. While satellite time series from PlanetScope are pre-processed to obtain such regular data, time series acquired by Sentinel-2 have at most one data point every 5 days due to a lower revisit frequency, and additional missing dates because of clouds or shadows. To handle such non-regularly sampled time series, the remote sensing literature proposes several gap-filling methods (Belda et al., 2020; Julien and Sobrino, 2019; Kandasamy et al., 2013). Instead, since our method is distance-based, we propose (i) to filter the input data to prevent possible outliers and (ii) to only compare inputs and prototypes on time stamps for which the input is defined. Let us consider a specific time series, acquired over a period of length \(T\) but with missing data. We define the associated raw time series \(\mathbf{x}_{\text{raw}}\in\mathbb{R}^{T\times C}\) by setting zero values for missing time stamps, and the associated binary mask \(\mathbf{m}_{\text{raw}}\in\{0,1\}^{T}\), equal to 0 for missing time stamps and 1 otherwise. We define the filtered time series \(\mathbf{x}\) extracted from \(\mathbf{x}_{\text{raw}}\) and \(\mathbf{m}_{\text{raw}}\) through Gaussian filtering for \(t\in[1,T]\) by: \[\mathbf{x}[t]=\frac{1}{\mathbf{m}[t]}\sum_{t^{\prime}=1}^{T}\mathcal{G}_{t,\sigma}[t^{\prime}]\cdot\mathbf{x}_{\text{raw}}[t^{\prime}], \tag{11}\] with \[\mathcal{G}_{t,\sigma}[t^{\prime}]=\exp\Big{(}-\frac{(t^{\prime}-t)^{2}}{2\sigma^{2}}\Big{)}, \tag{12}\] where \(\sigma\) is a hyperparameter set to 7 days in our experiments. We also define the associated filtered mask \(\mathbf{m}\) for \(t\in[1,T]\) by: \[\mathbf{m}[t]=\sum_{t^{\prime}=1}^{T}\mathcal{G}_{t,\sigma}[t^{\prime}]\cdot\mathbf{m}_{\text{raw}}[t^{\prime}], \tag{13}\] with the same hyperparameter \(\sigma\). Directly using this filtered time series to compute our mean squared errors would lead to large errors, because data might be missing over long time periods. Thus, we modify the losses \(\mathcal{L}_{\text{rec}}\) and \(\mathcal{L}_{\text{rec\_sup}}\) by replacing the reconstruction error between a time series \(\mathbf{x}\) and a reconstruction \(\mathbf{R}\), \[\frac{1}{TC}\Big{|}\Big{|}\mathbf{x}-\mathbf{R}\Big{|}\Big{|}_{2}^{2}=\frac{1}{C}\sum_{t=1}^{T}\frac{1}{T}\Big{|}\Big{|}\mathbf{x}[t]-\mathbf{R}[t]\Big{|}\Big{|}_{2}^{2}, \tag{14}\] in Equations (5) and (8) by a weighted mean squared error: \[\frac{1}{C}\sum_{t=1}^{T}\frac{\mathbf{m}[t]}{\sum_{t^{\prime}=1}^{T}\mathbf{m}[t^{\prime}]}\Big{|}\Big{|}\mathbf{x}[t]-\mathbf{R}[t]\Big{|}\Big{|}_{2}^{2}. \tag{15}\] This adapted loss gives more weight to time stamps \(t\) corresponding to true data acquisitions. In Appendix A, we justify these design choices and demonstrate that they result in superior performance when compared to alternative standard filtering schemes, both for our method and NCC.
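A minimal PyTorch sketch of Equations (11)-(15) follows, assuming time steps are daily so that \(\sigma=7\) corresponds to one week; function and variable names are illustrative.

```python
import torch

def gaussian_fill(x_raw, m_raw, sigma=7.0):
    """Eqs. (11)-(13): normalized Gaussian filtering of a masked series.
    x_raw: (T, C) with zeros at missing dates; m_raw: (T,) binary mask."""
    T = x_raw.shape[0]
    m_raw = m_raw.to(x_raw.dtype)
    t = torch.arange(T, dtype=x_raw.dtype)
    # G[t, t'] = exp(-(t' - t)^2 / (2 sigma^2)), Eq. (12)
    G = torch.exp(-(t[:, None] - t[None, :]) ** 2 / (2 * sigma ** 2))
    m = G @ m_raw                   # Eq. (13): filtered mask
    x = (G @ x_raw) / m[:, None]    # Eq. (11): filtered series
    return x, m

def weighted_mse(x, r, m):
    """Eq. (15): reconstruction error reweighted toward observed dates.
    x, r: (T, C); m: (T,) filtered mask."""
    w = m / m.sum()
    return (w[:, None] * (x - r) ** 2).sum() / x.shape[1]
```

In practice, `weighted_mse` simply replaces the per-sample squared error inside `rec_loss` and `sup_losses` above whenever the dataset has missing dates.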
## 4 Experiments

### Datasets

We consider four recent open-source datasets on which we evaluate our method and multiple baselines. Details about these datasets can be found in Table 1.

\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Dataset & Country & \(T\) & \(C\) & \(K\) & Train/Test shift & Satellite(s) & Daily & Split size (x \(10^{6}\)) \\ \hline PASTIS & France & 406 & 10 & 19 & Spat. & Sentinel-2 & ✗ & 7.3 \(\mid\) 7.3 \(\mid\) 7.0 \(\mid\) 7.1 \\ TimeSen2Crop & Austria & 363 & 9 & 16 & Spat. & Sentinel-2 & ✗ & 0.8 \(\mid\) 0.1 \(\mid\) 0.1 \\ SA & South Africa & 244 & 4 & 5 & Spat. & PlanetScope & ✓ & 60.1 \(\mid\) 10.1 \(\mid\) 32.0 \\ DENETHOR & Germany & 365 & 4 & 9 & Spat. \& Temp. & PlanetScope & ✓ & 20.6 \(\mid\) 3.2 \(\mid\) 22.8 \\ \hline \hline \end{tabular} \end{table}

Table 1: **Comparison of studied datasets.** The datasets we study cover different regions (France, Austria, South Africa and Germany). We distinguish between datasets where train and test splits differ only spatially (Spat.) and where they differ both spatially and temporally (Spat. & Temp.). Time series can have daily data (✓) or missing data (✗). Additionally, we report the length of the time series \(T\), the number of spectral bands \(C\) and the number of classes \(K\). The last column shows the split sizes as train \(\mid\) val \(\mid\) test, except for PASTIS, where we follow the 5-fold procedure described in Garnot & Landrieu (2021) and show the size of each of the folds.

**PASTIS (Garnot & Landrieu, 2021).** This dataset contains Sentinel-2 satellite patches within the French metropolitan area, acquired from September 1, 2018 to October 31, 2019. Each image time series contains a variable number of images that can show clouds and/or shadows. We pre-process the dataset and remove most of the cloudy/shadowy pixels using a classical thresholding approach on the blue reflectance (Breon & Colzy, 1999). We consider each of the pixels of the \(2433\) \(128\times 128\) image time series as independent time series, except those corresponding to the 'void' class, leading to 36M time series. Each is labeled with one of 19 classes (including a _background_, i.e., non-agricultural, class). We follow the same 5-fold evaluation procedure as described in Garnot & Landrieu (2021), with at least 1 km separating images from different folds to ensure distinct spatial coverage between them.

**TimeSen2Crop (Weikmann et al., 2021).** This dataset is also built from Sentinel-2 satellite images, but covering Austrian agricultural parcels and acquired between September 3, 2017 and September 1, 2018. It does not provide images but directly provides 1M pixel time series of variable lengths. We pre-process these time series by removing the time stamps associated with the 'shadow' and 'clouds' annotations provided in the dataset. Each time series is labeled with one of 16 types of crops. We follow the same train/val/test splitting as in Weikmann et al. (2021), where each split covers a different area in Austria.

**SA (Kondmann et al., 2022).** This dataset is built from images from the PlanetScope constellation of CubeSats covering agricultural areas in South Africa, and contains daily time series from April 1, 2017 to November 30, 2017.
Acquisitions are fused using Planet Fusion1 to compensate for possible missing dates, clouds or shadows, so that the provided data consists of clean daily image time series. The dataset contains 4151 single-field image time series from which we extract 102M pixel time series. Each time series is labeled with one of 5 types of crops. We keep the same train/test splitting of the data and reserve 15% of the train set for validation purposes. We make sure that the obtained train and validation sets do not contain pixel time series extracted from the same field image.

Footnote 1: [https://assets.planet.com/docs/Fusion-Tech-Spec_v1.0.0.pdf](https://assets.planet.com/docs/Fusion-Tech-Spec_v1.0.0.pdf)

**DENETHOR (Kondmann et al., 2021).** This dataset is also built from CubeSats images but covers agricultural areas in Germany. The training set is built from daily time series acquired from January 1, 2018 to December 31, 2018, while the test set is built from time series acquired from January 1, 2019 to December 31, 2019. Similar to SA, the dataset has been pre-processed to provide clean daily time series. It contains 4561 single-field image time series from which we extract 47M independent pixel time series. Each time series is labeled with one of 9 types of crops. Again, we use the original splits of the data, with 15% of the training set kept for validation. All splits cover distinct areas in Germany. The time shift between train and test sets makes DENETHOR significantly more challenging than the three other datasets. In Figure 4, we illustrate this domain gap by showing the mean NDVI of three random classes for each dataset on the train and test splits. The train and test curves are more dissimilar for DENETHOR than for any other dataset, and significant differences in the NDVI curves remain after alignment using our time warping and offset transformations.

### Metrics

We provide two metrics for evaluating classification accuracy: overall accuracy (OA) and mean accuracy (MA). OA is computed as the ratio of correct and total predictions: \[\text{OA}=\frac{\text{TP}+\text{TN}}{\text{TP}+\text{TN}+\text{FP}+\text{FN}}, \tag{16}\] where TP, TN, FP and FN correspond to true positives, true negatives, false positives and false negatives, respectively. MA is the class-averaged classification accuracy: \[\text{MA}=\frac{1}{K}\sum_{k=1}^{K}\text{OA}(\{\mathbf{x}_{i}|y_{i}=k\}). \tag{17}\]

Figure 4: **Temporal domain gap and alignment.** For each dataset, we show the mean NDVI of three randomly selected classes on the train and test splits (top row). Then we align the test curve to the train mean NDVI by optimizing the parameters of the time warping and offset transformations with gradient descent (bottom row).

It is important to note that the datasets under consideration show a high degree of imbalance, making MA a more appropriate and informative metric for evaluating classification performance. For this reason, OA scores are shown in gray in Tables 2, 3 and 4.
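For clarity, a short NumPy sketch of these two metrics is given below (the function name is illustrative); MA averages the per-class accuracies so that rare classes weigh as much as frequent ones.

```python
import numpy as np

def overall_and_mean_accuracy(y_true, y_pred):
    """Eqs. (16)-(17): overall accuracy and class-averaged (mean) accuracy."""
    correct = (y_true == y_pred)
    oa = correct.mean()                                  # Eq. (16)
    ma = np.mean([correct[y_true == k].mean()            # Eq. (17)
                  for k in np.unique(y_true)])
    return oa, ma
```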
### Baselines

We validate our approach in two settings: supervised (classification) and unsupervised (clustering). For each setting, we describe below the methods evaluated in this work.

#### 4.3.1 Time series classification

The purpose of this section is to benchmark classic and state-of-the-art MTSC methods for crop classification in SITS data:

* **NCC (Duda et al., 1973).** The nearest centroid classifier (NCC) assigns to a test sample the label of the closest class-average time series using the Euclidean distance. We also report the extension of NCC with our method to add invariance to time warping and sequence offset, as well as adding our contrastive loss.
* **1NN (Cover & Hart, 1967) and 1NN-DTW (Seto et al., 2015).** The first nearest neighbor algorithm assigns to a test sample the label of its closest neighbor in the train set, with respect to a given distance. This algorithm is computationally costly and, since the datasets under study typically contain millions of pixel time series, we search for neighbors of test samples in a random 0.1% subset of the train set and report the average over 5 runs with different subsets. We evaluate the nearest neighbor algorithm using the Euclidean distance (1NN) as well as the dynamic time warping measure (1NN-DTW) on the TimeSen2Crop dataset, which is small enough to compute it in a reasonable time.
* **SVM (Cortes & Vapnik, 1995).** We train a linear support vector machine (SVM) in the input space using the scikit-learn library (Pedregosa et al., 2011).
* **Random Forest (Ho, 1995).** We evaluate the performance of a Random Forest of a hundred trees built in the input space using the scikit-learn library (Pedregosa et al., 2011).
* **MLSTM-FCN (Karim et al., 2019).** MLSTM-FCN is a two-branch neural network concatenating the outputs of an LSTM and a 1D-CNN to better encode time series. We use a non-official PyTorch implementation2 of MLSTM-FCN.

Footnote 2: github.com/timeseriesAI/tsai

* **TapNet (Zhang et al., 2020).** TapNet uses a similar architecture to MLSTM-FCN to learn a low-dimensional representation of the data. Additionally, Zhang et al. (2020) learn class prototypes in this latent space, using the softmin of the Euclidean distances of the embedding to the different class prototypes as classification scores. The official PyTorch implementation3 is designed for datasets containing only between 27 and 10,992 samples, while our datasets contain millions of time series. Thus, based on the official implementation, we implemented a batch version of TapNet, which we use for our experiments.

Footnote 3: github.com/kdd2019-tapnet/tapnet

* **OS-CNN (Tang et al., 2022).** The Omni-Scale CNN is a 1D convolutional neural network that has shown the ability to robustly capture the best time scale, because it covers all receptive field sizes in an efficient manner. We use the official implementation4 with default parameters.

Footnote 4: github.com/Wensi-Tang/OS-CNN

* **MLP+LTAE (Garnot & Landrieu, 2020).** The Lightweight Temporal Attention Encoder (LTAE) is an attention-based network. Used along with a Pixel Set Encoder (PSE) (Garnot et al., 2020), LTAE achieves good performance on images. To adapt it to time series, we instead use an MLP as the encoder. We refer to this method as MLP+LTAE, and we use the official PyTorch implementation5 of LTAE.

Footnote 5: github.com/VSainteuf/lightweight-temporal-attention-pytorch

* **UTAE (Garnot & Landrieu, 2021).** In addition to SITS methods, we also report the scores of the U-Net with Temporal Attention Encoder (UTAE) on the PASTIS dataset. This method leverages complete (constant-size) images. Since it can learn from the spatial context of a given pixel, this state-of-the-art image sequence segmentation approach is expected to perform better than pixel-based MTSC approaches and is reported for reference.
#### 4.3.2 Time series clustering

In the unsupervised setting, we compare our method to other clustering approaches applied on learned features or directly on the time series:

* **K-means (Bottou & Bengio, 1994).** We apply the classic K-means algorithm on the multivariate pixel time series directly. Clustering is performed on all splits (train, val and test). Then we determine the most frequently occurring class in each cluster, considering training data only. The result is used as the label for the entire cluster. We use the gradient-descent version of K-means (Bottou & Bengio, 1994) with empty-cluster reassignment (Caron et al., 2018; Monnier et al., 2020).
* **K-means-DTW (Petitjean et al., 2011).** The K-means algorithm is applied in this case with a dynamic time warping measure instead of the usual Euclidean distance. To this end, we use the differentiable Soft-DTW (Cuturi & Blondel, 2017) version of DTW and its PyTorch implementation (Maghoumi et al., 2021).
* **USRL (Franceschi et al., 2019) + K-means.** USRL is an encoder trained in an unsupervised manner to represent time series by a 320-dimensional vector. We train USRL on all splits of each dataset, then apply K-means in the feature space. We use the official implementation6 of USRL with default parameters.

Footnote 6: github.com/White-Link/UnsupervisedScalableRepresentationLearningTimeSeries

* **DTAN (Shapira Weber et al., 2019) + K-means.** DTAN is an unsupervised method for temporally aligning all the time series of a given set. K-means is applied on data from all splits after alignment with DTAN. We use the official implementation7 of DTAN with default parameters.

Footnote 7: github.com/BGU-CS-VIL/dtan

We evaluate all methods with \(K=32\) clusters. We discuss this choice in Appendix B.

## 5 Results and Discussion

In this section, we first compare our method to top-performing supervised methods proposed in the literature for MTSC, as well as traditional machine learning methods (Sec. 5.1). We then demonstrate that our method outperforms the K-means baseline on all four datasets (Sec. 5.2) thanks to the design choices for our time series deformations. Finally, we discuss qualitative results and the interpretability of our method (Sec. 5.3).

### Time series classification

We report the performance of DTI-TS and competing methods in Table 2. Results on the DENETHOR dataset are qualitatively very different from the results on the other datasets. We believe this is because DENETHOR has train and test splits corresponding to two distinct years. We thus analyze it separately.

**Results on PASTIS, TimeSen2Crop and SA.** As expected, since UTAE can leverage knowledge of the spatial context of each pixel, it achieves the best score on the PASTIS dataset, by +2.0% in OA and +5.5% in MA. Our improvements over the NCC method (Duda et al., 1973), i.e., adding the time warping deformation, the offset deformation and the contrastive loss (9), consistently boost the mean accuracy. The improvement obtained by adding transformation modeling comes from a better capability to model the data, as confirmed by the detailed results reported in the left part of Table 4, where one can see that the reconstruction error (i.e. \(\mathcal{L}_{\text{rec}}\)) significantly decreases when adding these transformations. Note that, on the contrary, adding the discriminative loss increases the accuracy at the cost of a higher reconstruction error.
Our complete supervised approach outperforms the nearest-neighbor-based methods, the traditional learning approaches SVM and Random Forest, and MLSTM-FCN. However, it is still significantly outperformed by top MTSC methods. This is not surprising, since these methods are able to learn complex embeddings that capture subtle signal variations, e.g. thanks to a temporal attention mechanism (Garnot and Landrieu, 2020) or to multiple-sized receptive fields (Tang et al., 2022). Note however that in doing so, they lose the interpretability of simpler approaches such as 1NN or NCC, which our method is designed to keep.

**Results on DENETHOR.** Because the data we use are highly dependent on weather conditions, subsets acquired in distinct years follow significantly different distributions (Kondmann et al., 2021). Because of their complexity, other methods struggle to deal with this domain shift. In this setting, our extension of NCC to incorporate specific meaningful deformations achieves better performance than all the other MTSC methods we evaluated. However, adding the contrastive loss significantly degrades the results. We believe this is again due to the temporal domain shift between train and test data. This analysis is supported by the results reported in Table 4, which show that on the validation set of DENETHOR, which is sampled from the same year as the training data, adding the contrastive loss significantly boosts the results, as for the other datasets. One can also see again on DENETHOR the benefits of modeling the deformations in terms of reconstruction error.

**Low data regime.** Our method is also beneficial when only a few annotated image time series are available at training time. In Figure 5, we plot the MA obtained by NCC, MLP+LTAE, OS-CNN, TapNet and our method depending on the proportion of the SITS of the PASTIS dataset used for training. While all methods benefit from more training data, our prototype-based approach generalizes better from few annotated samples. When using 4% of the dataset or less, _i.e._ 60 annotated image time series or fewer, our method is the best of all MTSC methods benchmarked in this paper.

\begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline & \#param & Inf. time & \multicolumn{2}{c}{PASTIS} & \multicolumn{2}{c}{TS2C} & \multicolumn{2}{c}{SA} & \multicolumn{2}{c}{DENETH.} \\ \cline{4-11} Method & (x1000) & (ms/batch) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) \\ \hline UTAE (Garnot \& Landrieu, 2021) & 1 087 & — & 83.3 & **73.6** & — & — & — & — & — & — \\ \hline MLP + LTAE (Garnot \& Landrieu, 2020) & 320 & 78 & 80.6 & 65.9 & 88.7 & 80.9 & 67.4 & **63.7** & 55.6 & **43.6** \\ OS-CNN (Tang et al., 2022) & 4 729 & 119 & 81.3 & **68.1** & 87.9 & **81.2** & 64.6 & 60.3 & 49.0 & **39.2** \\ TapNet (Zhang et al., 2020) & 1 882 & 229 & 78.0 & 60.3 & 83.1 & 77.3 & 59.6 & 56.7 & 53.1 & 43.7 \\ MLSTM-FCN (Karim et al., 2019) & 490 & 11 & 44.4 & 10.9 & 58.7 & 44.0 & 56.1 & 47.9 & 58.2 & 48.3 \\ SVM (Cortes \& Vapnik, 1995) & 77 & 48 & 76.3 & 48.7 & 74.9 & 56.1 & 64.6 & 52.8 & 35.6 & **28.6** \\ Random Forest (Ho, 1995) & 16 & 140 & 76.6 & 46.6 & 66.9 & 50.2 & 62.9 & 61.3 & 59.9 & **51.6** \\ 1NN-DTW (Seto et al., 2015) & 0 & \(>\)10\({}^{4}\) & — & — & 32.2 & 23.0 & — & — & — & — \\ 1NN (Cover \& Hart, 1967) & 0 & 6 & 65.8 & 40.1 & 43.9 & 35.0 & 60.7 & 54.9 & 56.7 & **48.2** \\ NCC (Duda et al., 1973) & 77 & 24 & 56.5 & 48.4 & 57.1 & 49.9 & 51.3 & 46.4 & 61.3 & **55.5** \\ \hline DTI-TS: NCC + time warping & 398 & 97 & 56.2 & 51.4 & 59.9 & 52.3 & 54.5 & 49.7 & 62.4 & 56.4 \\ + offset & 423 & 97 & 53.5 & 53.8 & 57.3 & 55.0 & 60.6 & 50.0 & 59.8 & **62.9** \\ + contrastive loss & 423 & 97 & 73.7 & **59.1** & 78.5 & **70.5** & 62.3 & **54.9** & 56.5 & **54.2** \\ \hline \hline \end{tabular} \end{table}

Table 2: **Performance comparison for classification on all datasets.** We report, for our method and competing methods, the number of trainable parameters (#param) when trained on PASTIS, the overall accuracy (OA) and the mean class accuracy (MA). We distinguish with a background color the DENETHOR dataset, where train and test splits are acquired during different periods, from the others. 1NN-DTW is tested on the TimeSen2Crop dataset only, due to the high computational cost of the algorithm. We separate results into 3 parts: the image-level method UTAE, MTSC methods, and different ablations of DTI-TS. We put in bold the best method in each of the 3 parts and underline the absolute best for each dataset. We report the average inference time of each method to process a batch of 2,048 time series from TS2C on a single NVIDIA GeForce RTX 2080 Ti GPU.
Training on 1% of the data, it outperforms MLP+LTAE by +4.7% in MA, TapNet by +8.8% and OS-CNN by +10.9%, but is not able to clearly do better than the NCC baseline. Using 2% or 4% of the dataset, DTI-TS clearly improves over NCC and still has better scores than MLP+LTAE, TapNet and OS-CNN.

### Time series clustering

In this section, we demonstrate the clear boosts provided by our method on the four SITS datasets we study. We report the performance of DTI-TS and competing methods in Table 3. Our method outperforms all the other baselines on the four datasets, always achieving the best mean accuracy. In particular, our time warping transformation appears to be the best way to handle temporal information when clustering agricultural time series. Indeed, DTAN+K-means leads to a significantly less accurate clustering than simple K-means.
It confirms that temporal information is crucial when clustering agricultural time series: when DTAN temporally aligns all the sequences of a given dataset, it probably discards discriminative information, leading to poor performance. The same conclusion can be drawn from the results of K-means-DTW on TimeSen2Crop. In contrast, our time warping appears constrained enough to both reach satisfying scores and account for the temporal diversity of the data.

\begin{table} \begin{tabular}{l r r r r r r r r r r} \hline \hline & \#param & Inf. time & \multicolumn{2}{c}{PASTIS} & \multicolumn{2}{c}{TS2C} & \multicolumn{2}{c}{SA} & \multicolumn{2}{c}{DENETH.} \\ \cline{4-11} Method & (x1000) & (ms/batch) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) & OA\(\uparrow\) & MA\(\uparrow\) \\ \hline K-means-DTW (Petitjean et al., 2011) & 130 & \(>\)10\({}^{4}\) & — & — & 40.5 & 26.8 & — & — & — & — \\ USRL (Franceschi et al., 2019) + K-means & 259 & 193 & 63.9 & 20.4 & 34.9 & 23.6 & 60.9 & 48.6 & 54.0 & 46.4 \\ DTAN (Shapira Weber et al., 2019) + K-means & 256 & 28 & 65.6 & 21.4 & 47.7 & 29.3 & 60.5 & **48.6** & 46.3 & 36.9 \\ K-means (Bottou \& Bengio, 1994) & 130 & 7 & 60.0 & 29.8 & 49.5 & 32.5 & 61.9 & 47.8 & 57.2 & **48.5** \\ \hline DTI-TS: K-means + time warping & 471 & 13 & 69.1 & **30.4** & 52.3 & **36.0** & 64.1 & **51.7** & 57.6 & 51.1 \\ + offset & 512 & 18 & 67.7 & 28.6 & 52.0 & 35.5 & 63.6 & 50.4 & 58.5 & **52.6** \\ \hline \hline \end{tabular} \end{table}

Table 3: **Performance comparison for clustering on all datasets.** We report, for our method and competing methods, the number of trainable parameters (#param) when trained on PASTIS, the overall accuracy (OA) and the mean class accuracy (MA). K-means clustering is run with 32 clusters for all methods for fair comparison. We distinguish with a background color the DENETHOR dataset, where train and test splits are acquired during different periods, from the others. K-means-DTW is tested on the TimeSen2Crop dataset only, due to the high computational cost of the algorithm. We report the average inference time of each method to process a batch of 2,048 time series from TS2C on a single NVIDIA GeForce RTX 2080 Ti GPU.

Figure 5: **Low data regime on PASTIS dataset.** Using Fold 2 of the PASTIS dataset, we train on only 1, 2, 4 or 10% of the image time series of the training set. For the 1%, 2% and 4% samples, we show the average over 5 different random subsets.

Using an offset transformation on the spectral intensities consistently results in improved sample reconstruction using our prototypes, as demonstrated in Table 4. However, it only increases classification scores for DENETHOR. We attribute this improvement to the offset transformation's ability to better handle the domain shift between the training and testing data on the DENETHOR dataset. The results on the other datasets suggest that this transformation accounts for more than just intra-class variability, leading to less accurate classification scores, as discussed in Section 5.4.

For all the methods compared above, we label the clusters with the most frequently occurring class in each of them on the train set. This can correspond to millions of annotated pixel time series being used, but our method works with far fewer annotations.
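For reference, this majority-vote labeling used by all clustering methods above can be sketched as follows (names are illustrative, not the exact implementation):

```python
import numpy as np

def label_clusters_by_majority(assign_train, y_train, K=32):
    """Assign to each cluster the most frequent training label among its
    members; clusters without any training member keep label -1."""
    labels = np.full(K, -1)
    for k in range(K):
        members = y_train[assign_train == k]
        if members.size > 0:
            labels[k] = np.bincount(members).argmax()
    return labels  # test predictions are then labels[assign_test]
```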
We report in Figure 6 the MA of our method on TS2C when only 1, 5 or 10 annotated pixel time series are used to decide each cluster's label. We sample either random time series in each cluster ('Random') or select the time series that are best reconstructed by the given prototype ('Closest'). There is a clear 5% performance drop when a single time series is used to label each cluster. However, using 5 time series per cluster is already enough to recover scores similar to the ones obtained using the full training dataset. For TS2C, this amounts to 0.001% of all the training data.

Figure 6: **Number of labeled pixel time series used to assign prototypes.** We label each of the \(K=32\) prototypes obtained on the TS2C training set using the 1, 5 and 10 closest, or 1, 5 and 10 random, pixel time series in its cluster on the training set. We report the mean accuracy averaged over 5 runs and compare it to using all annotated pixel time series of the training set.

### Qualitative evaluation

#### 5.3.1 Land cover maps

We provide in Figure 7 a visualization of the land cover maps obtained by our method and competing supervised approaches on the PASTIS dataset. We show 4 randomly selected image time series from the Fold 2 test set. One can see how our method improves over NCC by allowing pixels of the same field to be classified similarly. We highlight with black circles (\(\bigcirc\)) examples of areas where NCC gives different labels to central and border pixels, whereas our method uses the same deformable prototype to reconstruct all pixels of the field. OS-CNN and TapNet fail to properly classify the sea as background, which we highlight with a yellow circle (\(\bigcirc\)). TapNet land cover maps are the noisiest, with a salt-and-pepper effect that is particularly noticeable on the third row. MLP+LTAE is the best at maintaining spatial consistency within crop classes and at accurately delineating boundaries.

Similarly, we visually compare in Figure 8 the land cover maps obtained with K-means and our method. We highlight with black circles (\(\bigcirc\)) areas where our approach distinguishes agricultural parcels from the background class more faithfully than K-means. Since cluster labeling is performed through majority voting, most clusters get assigned to the majority background class on PASTIS: this is the case for 47% of K-means clusters on Fold 2. However, our deformable prototypes can represent the same class with fewer clusters, hence only 41% of them account for the background class.

Figure 7: **Qualitative comparisons of supervised methods.** We show predicted segmentation maps for the best-performing supervised methods (b-d), NCC (e) and our method (f) for randomly selected SITS from the Fold 2 test set of PASTIS (a). Dark grey segments correspond to the _void_ class and are ignored by all methods. The legend above is used for all other semantic segmentation visualizations of this paper.

Figure 8: **Qualitative comparisons of unsupervised methods.** We show predicted segmentation maps for K-means (b) and our method (c) for randomly selected SITS from the Fold 2 test set of PASTIS (a). Dark grey segments correspond to the _void_ class and are ignored by all methods.

All pixel-wise SITS semantic segmentation methods can benefit from a post-processing step taking into account spatial information, _e.g._, aggregating predictions in nearby pixels or on each field. While such post-processing is not the focus of our paper, we demonstrate in Appendix C the benefits of several such post-processing methods.

#### 5.3.2 Visualizing prototypes

We show in Figure 9 our prototypes and how they are deformed to reconstruct a given input.
For each class of the SA dataset, we show an input time series that has been correctly assigned to its corresponding prototype by our model trained with supervision but without \(\mathcal{L}_{\text{cont}}\). We see that the inputs are best reconstructed by a prototype of their class. Looking at any of the columns, we see that prototypes of other classes can also be deformed to reconstruct a given input, but only to a certain extent. This confirms that the transformations considered are simple enough so that the reconstruction power of each prototype is limited, but powerful enough to allow the prototypes to adapt to their input.

Figure 10 shows the 32 prototypes learned by our unsupervised model on SA, grouped by assigned label. For each prototype, we show an example input sample whose best reconstruction is obtained using this particular prototype, along with the corresponding reconstruction. We see that prototypes are not equally assigned to classes, with class _Canola_ having 14 prototypes while class _Small Grain Grazing_ only has 1. This is due to the high imbalance of the classes in the datasets and to different intra-class variabilities. Inside a class, different prototypes account for intra-class variability beyond what our deformations can model.

### Discussion

DTI-TS fails at classifying an input pixel time series when the prototype of a wrong crop type is able to better reconstruct it than the prototype of the true class. This may happen in three cases that we detail below: (i) because both classes are very similar, (ii) because our deformations are powerful enough to align semantically different prototypes to the same input sequence, or (iii) simply because the input time series is a difficult sample to reconstruct.

Table 4: OA\(\uparrow\), MA\(\uparrow\) and reconstruction loss \(\mathcal{L}_{\text{rec}}\downarrow\) of our model in the supervised (val and test splits) and unsupervised (train and test splits) settings, starting from raw prototypes and progressively adding our transformations.

**Similar classes.** Examples of similar classes can be seen in Figure 4, where the mean NDVI over time of the Winter wheat and Rye classes on TS2C, as well as that of the Barley and Rye classes of DENETHOR, are very close. Our transformations may align both class prototypes indifferently to an input sequence, discarding small differences that would have helped classify it.

**Transformation design.** While our deformations are simple, they may not be constrained enough for the task of crop classification. The time warping temporally stretches or squeezes a time series using uniformly spaced control points.
In Figure 4, looking at the Wheat class for SA, note how this time warping is able to align the train (in blue) and test (in red) curves, despite a clear temporal shift. Even though these deformations are limited to 7 days in each direction, they do not focus on a specific period of the year. Our offset transformation assumes that intra-class spectral distortions are time-independent. Though we show empirically that we can reconstruct time series better when using this transformation, this comes at the price of reduced classification performance. We believe performance could be further improved by the design of physics-based transformations that account for actual meteorological events.

**Reconstruction performance on misclassified samples.** Samples misclassified by our method tend to be the most difficult to reconstruct. This statement is supported quantitatively in Table 5, where we show that the reconstruction loss is higher on average for misclassified time series on all datasets. In Figure 11, we can see that wrongly classified time series (in red) often show clear differences from the learned prototype (in bold blue). Again, better-suited deformations for this task should help prototypes accurately reconstruct diverse time series of the same class while not letting them fit time series of other classes. We believe this to be a challenge that should be addressed in future work.

\begin{table} \begin{tabular}{l c c} \hline \hline & ✓ Correct & ✗ Wrong \\ & predictions & predictions \\ \hline PASTIS (Garnot \& Landrieu, 2021) & **2.52** & 2.72 \\ TS2C (Weikmann et al., 2021) & **3.44** & 3.79 \\ SA (Kondmann et al., 2022) & **1.84** & 2.26 \\ DENETHOR (Kondmann et al., 2021) & **3.54** & 3.57 \\ \hline \hline \end{tabular} \end{table}

Table 5: **Reconstruction loss of correct and wrong predictions.** We report the average reconstruction loss of correct and wrong predictions on all datasets for our method in the supervised case. We highlight in bold the lowest reconstruction loss for each row: time series that we tend to misclassify are also more difficult to reconstruct in general.

Figure 9: **Reconstructions from different prototypes.** We show the reconstructions of input samples (columns) from SA (Kondmann et al., 2022) by learned prototypes (rows) in the supervised setting without \(\mathcal{L}_{\text{cont}}\). Selected prototypes (frames) correspond to the lowest reconstruction error.

Figure 10: **Learned prototypes on SA.** We show the 32 prototypes learned on the SA dataset (Kondmann et al., 2022) (first column) in the unsupervised setting with time warping and offset deformations. For each prototype, we show an example time series of the corresponding class from the test set that is best reconstructed by it (second column), along with its reconstruction by our model (third column).

Figure 11: **Visual representation of failure cases.** We show the normalized red band of randomly selected time series from two classes of SA (Kondmann et al., 2022) and DENETHOR (Kondmann et al., 2021). We distinguish between time series correctly classified by our model (in blue) and time series misclassified (in red). We also display the red band of the corresponding learned prototype in each case.

## 6 Conclusion

We have presented an approach to learning invariance to transformations relevant for agricultural time series using deep learning, and demonstrated how it can be used to perform both supervised and unsupervised pixel-based classification of crop SITS. We perform our analysis on four recent public datasets with diverse characteristics, covering different countries. Our method significantly improves the performance of NCC and K-means on all datasets, while keeping their interpretability. We show that it improves the state of the art on the DENETHOR dataset for classification.
This result emphasizes the need for more multi-year datasets to reliably evaluate the potential of automatic methods for practical crop segmentation scenarios, for which our deformation modeling approach seems to provide significant advantages. DTI-TS also achieves the best results in the low data regime on PASTIS, and on all datasets for unsupervised clustering. Additionally, we provide a benchmark of MTSC approaches for agricultural SITS classification.

## References

* Aschbacher et al. (2017) Josef Aschbacher, Masami Onoda, and Oran R Young. _ESA's Earth Observation Strategy and Copernicus_, pp. 81-86. Springer Singapore, Singapore, 2017. ISBN 978-981-10-3713-9. doi: 10.1007/978-981-10-3713-9_5. URL https://doi.org/10.1007/978-981-10-3713-9_5.
* Belda et al. (2020) Santiago Belda, Luca Pipia, Pablo Morcillo-Pallares, Juan Pablo Rivera-Caicedo, Eatidal Amin, Charlotte De Grave, and Jochem Verrelst. Datimes: A machine learning time series gui toolbox for gap-filling and vegetation phenology trends detection. _Environmental Modelling & Software_, 127:104666, 2020.
* Belgiu and Csillik (2018) Mariana Belgiu and Ovidiu Csillik. Sentinel-2 cropland mapping using pixel-based and object-based time-weighted dynamic time warping analysis. _Remote Sensing of Environment_, 204:509-523, 2018. ISSN 0034-4257. doi: 10.1016/j.rse.2017.10.005. URL https://www.sciencedirect.com/science/article/pii/S0034425717304686.
* Blickensdorfer et al. (2022) Lukas Blickensdorfer, Marcel Schwieder, Dirk Pflugmacher, Claas Nendel, Stefan Erasmi, and Patrick Hostert. Mapping of crop types and crop sequences with combined time series of sentinel-1, sentinel-2 and landsat 8 data for germany. _Remote sensing of environment_, 269:112831, 2022.
* Bookstein (1989) Fred L. Bookstein. Principal warps: Thin-plate splines and the decomposition of deformations. _IEEE Transactions on pattern analysis and machine intelligence_, 11(6):567-585, 1989.
* Boshuizen et al. (2014) Christopher Boshuizen, James Mason, Pete Klupar, and Shannon Spanhake. Results from the planet labs flock constellation. _AIAA/USU Conference on Small Satellites_, 2014.
* Bostrom and Bagnall (2015) Aaron Bostrom and Anthony Bagnall. Binary shapelet transform for multiclass time series classification. In _International conference on big data analytics and knowledge discovery_, pp. 257-269. Springer, 2015.
* Bottou and Bengio (1994) Leon Bottou and Yoshua Bengio. Convergence properties of the k-means algorithms. _Advances in neural information processing systems_, 7, 1994.
* Breon and Colzy (1999) Francois-Marie Breon and Stephane Colzy. Cloud Detection from the Spaceborne POLDER Instrument and Validation against Surface Synoptic Observations. _Journal of Applied Meteorology_, 38(6):777-785, June 1999. doi: 10.1175/1520-0450(1999)038<0777:CDFTSP>2.0.CO;2. URL https://hal.archives-ouvertes.fr/hal-03119834.
* Caron et al.
(2018) Mathilde Caron, Piotr Bojanowski, Armand Joulin, and Matthijs Douze. Deep clustering for unsupervised learning of visual features. In _Proceedings of the European conference on computer vision (ECCV)_, pp. 132-149, 2018.
* Comaniciu and Meer (2002) Dorin Comaniciu and Peter Meer. Mean shift: A robust approach toward feature space analysis. _IEEE Transactions on pattern analysis and machine intelligence_, 24(5):603-619, 2002.
* Cortes and Vapnik (1995) Corinna Cortes and Vladimir Vapnik. Support-vector networks. _Machine learning_, 20(3):273-297, 1995.
* Cover and Hart (1967) Thomas Cover and Peter Hart. Nearest neighbor pattern classification. _IEEE transactions on information theory_, 13(1):21-27, 1967.
* Cuturi and Blondel (2017) Marco Cuturi and Mathieu Blondel. Soft-dtw: a differentiable loss function for time-series. In _International Conference on Machine Learning_, pp. 894-903. PMLR, 2017.
* Drusch et al. (2012) Matthias Drusch, Umberto Del Bello, Sebastien Carlier, Olivier Colin, Veronica Fernandez, Ferran Gascon, Bianca Hoersch, Claudia Isola, Paolo Laberini, Philippe Martimort, et al. Sentinel-2: Esa's optical high-resolution mission for gmes operational services. _Remote sensing of Environment_, 120:25-36, 2012.
* Duda et al. (1973) Richard O Duda, Peter E Hart, and David G Stork. _Pattern classification and scene analysis_, volume 3. Wiley New York, 1973.
* Elman (1993) Jeffrey L Elman. Learning and development in neural networks: The importance of starting small. _Cognition_, 48(1):71-99, 1993.
* Franceschi et al. (2019) Jean-Yves Franceschi, Aymeric Dieuleveut, and Martin Jaggi. Unsupervised scalable representation learning for multivariate time series. _Advances in neural information processing systems_, 32, 2019.
* Franek et al. (2010) Lucas Franek, Daniel Duarte Abdala, Sandro Vega-Pons, and Xiaoyi Jiang. Image segmentation fusion using general ensemble clustering methods. In _Asian Conference on Computer Vision_, pp. 373-384. Springer, 2010.
* Gao et al. (2021) Han Gao, Changcheng Wang, Guanya Wang, Haiqiang Fu, and Jianjun Zhu. A novel crop classification method based on ppfsvm classifier with time-series alignment kernel from dual-polarization sar datasets. _Remote sensing of environment_, 264:112628, 2021.
* Garnot and Landrieu (2020) Vivien Sainte Fare Garnot and Loic Landrieu. Lightweight temporal self-attention for classifying satellite images time series. In _International Workshop on Advanced Analytics and Learning on Temporal Data_, pp. 171-181. Springer, 2020.
* Garnot and Landrieu (2021) Vivien Sainte Fare Garnot and Loic Landrieu. Panoptic segmentation of satellite image time series with convolutional temporal attention networks. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 4872-4881, 2021.
* Garnot et al. (2020) Vivien Sainte Fare Garnot, Loic Landrieu, Sebastien Giordano, and Nesrine Chehata. Satellite image time series classification with pixel-set encoders and temporal self-attention. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 12325-12334, 2020.
* Guo et al. (2022) Wenqi Guo, Weixiong Zhang, Zheng Zhang, Ping Tang, and Shichen Gao. Deep temporal iterative clustering for satellite image time series land cover analysis. _Remote Sensing_, 14(15):3635, 2022.
* He et al. (2015) Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification.
* Ho (1995) Tin Kam Ho. Random decision forests. In _Proceedings of the 3rd International Conference on Document Analysis and Recognition_, volume 1, pp. 278-282. IEEE, 1995.
* Ienco et al. (2017) Dino Ienco, Raffaele Gaetano, Claire Dupaquier, and Pierre Maurel. Land cover classification via multitemporal spatial data by deep recurrent neural networks. _IEEE Geoscience and Remote Sensing Letters_, 14(10):1685-1689, 2017.
* Iounousse et al. (2015) Jawad Iounousse, Salah Er-Raki, Ahmed El Motassadeq, and Hassan Chehouani. Using an unsupervised approach of probabilistic neural network (PNN) for land use classification from multitemporal satellite images. _Applied Soft Computing_, 30:1-13, 2015.
* Ismail Fawaz et al. (2020) Hassan Ismail Fawaz, Benjamin Lucas, Germain Forestier, Charlotte Pelletier, Daniel F Schmidt, Jonathan Weber, Geoffrey I Webb, Lhassane Idoumghar, Pierre-Alain Muller, and Francois Petitjean. InceptionTime: Finding AlexNet for time series classification. _Data Mining and Knowledge Discovery_, 34(6):1936-1962, 2020.
* Julien and Sobrino (2019) Yves Julien and Jose A Sobrino. Optimizing and comparing gap-filling techniques using simulated NDVI time series from remotely sensed global data. _International Journal of Applied Earth Observation and Geoinformation_, 76:93-111, 2019.
* Justice and Becker-Reshef (2007) Christopher O Justice and Inbal Becker-Reshef. Report from the workshop on developing a strategy for global agricultural monitoring in the framework of Group on Earth Observations (GEO). Available online: http://www.fao.org/gto/igol/docs/meeting-reports/07-geo-ag0703-workshop-report-nov07.pdf (accessed on 11 June 2015), volume 595, 2007.
* Kalinicheva et al. (2020) Ekaterina Kalinicheva, Jeremie Sublime, and Maria Trocan. Unsupervised satellite image time series clustering using object-based approaches and 3D convolutional autoencoder. _Remote Sensing_, 12(11):1816, 2020.
* Kandasamy et al. (2013) Sivasathivel Kandasamy, Frederic Baret, Aleixandre Verger, Philippe Neveux, and Marie Weiss. A comparison of methods for smoothing and gap filling time series of remote sensing observations - application to MODIS LAI products. _Biogeosciences_, 10(6):4055-4071, 2013. doi: 10.5194/bg-10-4055-2013. URL [https://bg.copernicus.org/articles/10/4055/2013/](https://bg.copernicus.org/articles/10/4055/2013/).
* Karim et al. (2019) Fazle Karim, Somshubra Majumdar, Houshang Darabi, and Samuel Harford. Multivariate LSTM-FCNs for time series classification. _Neural Networks_, 116:237-245, 2019.
* Khelifi and Mignotte (2016) Lazhar Khelifi and Max Mignotte. A novel fusion approach based on the global consistency criterion to fusing multiple segmentations. _IEEE Transactions on Systems, Man, and Cybernetics: Systems_, 47(9):2489-2502, 2016.
* Kingma and Ba (2015) Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. _ICLR_, 2015.
* Kirillov et al. (2023) Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment Anything. _arXiv preprint arXiv:2304.02643_, 2023.
* Kondmann et al. (2021) Lukas Kondmann, Aysim Toker, Marc Russwurm, Andres Camero Unzueta, Devis Peressuti, Grega Milcinski, Nicolas Longepe, Pierre-Philippe Mathieu, Timothy Davis, Giovanni Marchisio, et al. DENETHOR: The DynamicEarthNet dataset for harmonized, inter-operable, analysis-ready, daily crop monitoring from space. In _Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track_, 2021.
* Kondmann et al. (2022) Lukas Kondmann, Sebastian Boeck, Rogerio Bonifacio, and Xiao Xiang Zhu. Early crop type classification with satellite imagery: an empirical analysis. _ICLR 3rd Workshop on Practical Machine Learning in Developing Countries_, 2022.
* Kussul et al. (2017) Natalia Kussul, Mykola Lavreniuk, Sergii Skakun, and Andrii Shelestov. Deep learning classification of land cover and crop types using remote sensing data. _IEEE Geoscience and Remote Sensing Letters_, 14(5):778-782, 2017.
* Lefevre et al. (2019) Sebastien Lefevre, David Sheeren, and Onur Tasar. A generic framework for combining multiple segmentations in geographic object-based image analysis. _ISPRS International Journal of Geo-Information_, 8(2):70, 2019.
* Li (2019) Hailin Li. Multivariate time series clustering based on common principal component analysis. _Neurocomputing_, 349:239-247, 2019.
* Li et al. (2020) Huapeng Li, Ce Zhang, Shuqing Zhang, and Peter M Atkinson. Crop classification from full-year fully-polarimetric L-band UAVSAR time-series using the random forest algorithm. _International Journal of Applied Earth Observation and Geoinformation_, 87:102032, 2020.
* Li et al. (2012) Zhenguo Li, Xiao-Ming Wu, and Shih-Fu Chang. Segmentation using superpixels: A bipartite graph partitioning approach. In _2012 IEEE Conference on Computer Vision and Pattern Recognition_, pp. 789-796. IEEE, 2012.
* Lines et al. (2012) Jason Lines, Luke M Davis, Jon Hills, and Anthony Bagnall. A shapelet transform for time series classification. In _Proceedings of the 18th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, pp. 289-297, 2012.
* Loiseau et al. (2021) Romain Loiseau, Tom Monnier, Mathieu Aubry, and Loic Landrieu. Representing shape collections with alignment-aware linear models. In _2021 International Conference on 3D Vision (3DV)_, pp. 1044-1053. IEEE, 2021.
* Loiseau et al. (2022) Romain Loiseau, Baptiste Bouvier, Yann Teytaut, Elliot Vincent, Mathieu Aubry, and Loic Landrieu. A model you can hear: Audio identification with playable prototypes. _ISMIR_, 2022.
* MacQueen (1967) J MacQueen. Some methods for classification and analysis of multivariate observations. In _5th Berkeley Symp. Math. Statist. Probability_, pp. 281-297, 1967.
* Maghoumi et al. (2021) Mehran Maghoumi, Eugene Matthew Taranta, and Joseph LaViola. DeepNAG: Deep non-adversarial gesture generation. In _26th International Conference on Intelligent User Interfaces_, pp. 213-223, 2021.
* Mbow et al. (2019) C. Mbow, C. Rosenzweig, L. G. Barioni, T. G. Benton, M. Herrero, M. Krishnapillai, E. Liwenga, P. Pradhan, M. G. Rivera-Ferre, T. Sapkota, F. N. Tubiello, and Y. Xu. _Food security_. Intergovernmental Panel on Climate Change, 2019.
* Mohammadi et al. (2023) Sina Mohammadi, Mariana Belgiu, and Alfred Stein. Improvement in crop mapping from satellite image time series by effectively supervising deep neural networks. _ISPRS Journal of Photogrammetry and Remote Sensing_, 198:272-283, 2023.
* Monnier et al. (2020) Tom Monnier, Thibault Groueix, and Mathieu Aubry. Deep transformation-invariant clustering. _Advances in Neural Information Processing Systems_, 33:7945-7955, 2020.
* Monnier et al. (2021) Tom Monnier, Elliot Vincent, Jean Ponce, and Mathieu Aubry. Unsupervised layered image decomposition into object prototypes. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_, pp. 8640-8650, 2021.
* Nyborg et al. (2022) Joachim Nyborg, Charlotte Pelletier, and Ira Assent. Generalized classification of satellite image time series with thermal positional encoding. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops_, pp. 1392-1402, June 2022.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_, 12:2825-2830, 2011.
* Pelletier et al. (2019) Charlotte Pelletier, Geoffrey I Webb, and Francois Petitjean. Temporal convolutional neural network for the classification of satellite image time series. _Remote Sensing_, 11(5):523, 2019.
* Petitjean et al. (2011) Francois Petitjean, Jordi Inglada, and Pierre Gancarski. Clustering of satellite image time series under time warping. In _2011 6th International Workshop on the Analysis of Multi-temporal Remote Sensing Images (Multi-Temp)_, pp. 69-72. IEEE, 2011.
* Petitjean et al. (2012) Francois Petitjean, Camille Kurtz, Nicolas Passat, and Pierre Gancarski. Spatio-temporal reasoning for the classification of satellite image time series. _Pattern Recognition Letters_, 33(13):1805-1815, 2012. ISSN 0167-8655.
* Prosekov and Ivanova (2018) Alexander Y. Prosekov and Svetlana A. Ivanova. Food security: The challenge of the present. _Geoforum_, 91:73-77, 2018. ISSN 0016-7185. doi: 10.1016/j.geoforum.2018.02.030. URL [https://www.sciencedirect.com/science/article/pii/S0016718518300666](https://www.sciencedirect.com/science/article/pii/S0016718518300666).
* Rajan and Rayner (1995) Jebu J Rajan and Peter JW Rayner. Unsupervised time series classification. _Signal Processing_, 46(1):57-74, 1995.
* Rivera et al. (2020) Antonio Jesus Rivera, Maria Dolores Perez-Godoy, David Elizondo, Lipika Deka, and Maria Jose del Jesus. A preliminary study on crop classification with unsupervised algorithms for time series on images with olive trees and cereal crops. In _International Workshop on Soft Computing Models in Industrial and Environmental Applications_, pp. 276-285. Springer, 2020.
* Rudin et al. (1992) Leonid I Rudin, Stanley Osher, and Emad Fatemi. Nonlinear total variation based noise removal algorithms. _Physica D: Nonlinear Phenomena_, 60(1-4):259-268, 1992.
* Russwurm and Korner (2020) Marc Russwurm and Marco Korner. Self-attention for raw optical satellite time series classification. _ISPRS Journal of Photogrammetry and Remote Sensing_, 169:421-435, 2020.
* Russwurm et al. (2020) Marc Russwurm, Charlotte Pelletier, Maximilian Zollner, Sebastien Lefevre, and Marco Korner. BreizhCrops: A time series dataset for crop type mapping. _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences (ISPRS)_, 2020.
* Russwurm et al. (2023) Marc Russwurm, Nicolas Courty, Remi Emonet, Sebastien Lefevre, Devis Tuia, and Romain Tavenard. End-to-end learned early classification of time series for in-season crop type mapping. _ISPRS Journal of Photogrammetry and Remote Sensing_, 196:445-456, 2023.
* Sakoe and Chiba (1978) Hiroaki Sakoe and Seibi Chiba. Dynamic programming algorithm optimization for spoken word recognition. _IEEE Transactions on Acoustics, Speech, and Signal Processing_, 26(1):43-49, 1978.
* Schafer (2015) Patrick Schafer. The BOSS is concerned with time series classification in the presence of noise. _Data Mining and Knowledge Discovery_, 29(6):1505-1530, 2015.
* Schafer and Leser (2017) Patrick Schafer and Ulf Leser. Multivariate time series classification with WEASEL+MUSE. _arXiv preprint arXiv:1711.11343_, 2017.
* Seto et al. (2015) Skyler Seto, Wenyu Zhang, and Yichen Zhou. Multivariate time series classification using dynamic time warping template selection for human activity recognition. In _2015 IEEE Symposium Series on Computational Intelligence_, pp. 1399-1406. IEEE, 2015.
* Shapira Weber et al. (2019) Ron A Shapira Weber, Matan Eyal, Nicki Skafte, Oren Shriki, and Oren Freifeld. Diffeomorphic temporal alignment nets. _Advances in Neural Information Processing Systems_, 32, 2019.
* Shokoohi-Yekta et al. (2015) Mohammad Shokoohi-Yekta, Jun Wang, and Eamonn Keogh. On the non-trivial generalization of dynamic time warping to the multi-dimensional case. In _Proceedings of the 2015 SIAM International Conference on Data Mining_, pp. 289-297. SIAM, 2015.
* Singhal and Seborg (2005) Ashish Singhal and Dale E Seborg. Clustering multivariate time-series data. _Journal of Chemometrics_, 19(8):427-438, 2005.
* Tang et al. (2022) Wensi Tang, Guodong Long, Lu Liu, Tianyi Zhou, Michael Blumenstein, and Jing Jiang. Omni-scale CNNs: a simple and effective kernel size configuration for time series classification. In _International Conference on Learning Representations_, 2022. URL [https://openreview.net/forum?id=PDYs7Z2XFGv](https://openreview.net/forum?id=PDYs7Z2XFGv).
* Tarasiou et al. (2023) Michail Tarasiou, Erik Chavez, and Stefanos Zafeiriou. ViTs for SITS: Vision Transformers for satellite image time series. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pp. 10418-10428, 2023.
* Planet Team (2017) Planet Team. Planet application program interface: In space for life on Earth. San Francisco, CA, 2017.
* Tonekaboni et al. (2021) Sana Tonekaboni, Danny Eytan, and Anna Goldenberg. Unsupervised representation learning for time series with temporal neighborhood coding. _arXiv preprint arXiv:2106.00750_, 2021.
* Wang et al. (2005) Xiaozhe Wang, Kate A Smith, and Rob J Hyndman. Dimension reduction for clustering time series using global characteristics. In _International Conference on Computational Science_, pp. 792-795. Springer, 2005.
* Wang et al. (2017) Zhiguang Wang, Weizhong Yan, and Tim Oates. Time series classification from scratch with deep neural networks: A strong baseline. In _2017 International Joint Conference on Neural Networks (IJCNN)_, pp. 1578-1585. IEEE, 2017.
* Weikmann et al. (2021) Giulio Weikmann, Claudia Paris, and Lorenzo Bruzzone. TimeSen2Crop: A million labeled samples dataset of Sentinel 2 image time series for crop-type classification. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 2021. doi: 10.1109/JSTARS.2021.3073965.
* Woodcock et al. (2008) C. E. Woodcock, R. Allen, M. Anderson, A. Belward, R. Bindschadler, W. Cohen, F. Gao, S. N. Goward, D. Helder, E. Helmer, R. Nemani, L. Oreopoulos, J. Schott, Prasad S. Thenkabail, E. F. Vermote, James E. Vogelmann, M. A. Wulder, and R. Wynne. Free access to Landsat imagery. _Science_, 320(5879):1011, 2008. doi: 10.1126/science.320.5879.1011a. URL [http://pubs.er.usgs.gov/publication/70159396](http://pubs.er.usgs.gov/publication/70159396).
* Asano et al. (2020) Asano YM., Rupprecht C., and Vedaldi A. Self-labelling via simultaneous clustering and representation learning. In _International Conference on Learning Representations_, 2020. URL [https://openreview.net/forum?id=Hyx-jyBFpr](https://openreview.net/forum?id=Hyx-jyBFpr).
* Zhang et al. (2020) Xuchao Zhang, Yifeng Gao, Jessica Lin, and Chang-Tien Lu. TapNet: Multivariate time series classification with attentional prototypical network. In _Proceedings of the AAAI Conference on Artificial Intelligence_, volume 34, pp. 6845-6852, 2020.
* Zhang et al. (2014) Zheng Zhang, Ping Tang, Lianzhi Huo, and Zengguang Zhou. MODIS NDVI time series clustering under dynamic time warping. _International Journal of Wavelets, Multiresolution and Information Processing_, 12(05):1461011, 2014.
* Zheng et al. (2015) Baojuan Zheng, Soe W Myint, Prasad S Thenkabail, and Rimjhim M Aggarwal. A support vector machine to identify irrigated crop types using time-series Landsat NDVI data. _International Journal of Applied Earth Observation and Geoinformation_, 34:103-112, 2015.
* Zhong et al. (2019) Liheng Zhong, Lina Hu, and Hang Zhou. Deep learning based multi-temporal crop classification. _Remote Sensing of Environment_, 221:430-443, 2019.

## Appendix A - Prototype initialization

NCC or K-means centroids are used to initialize our prototypes, and thus impact the performance of our method. Moreover, it is not obvious how best to run NCC or K-means on time series with missing data. The centroids can be computed by only giving weight to existing data points, or after a gap filling operation. In this section, we focus on the supervised case and investigate other simple gap filling methods and the respective performance of both NCC and our method. Following the notations of Section 3.3, we define the following gap filling methods:

**None.** No gap filling is done, and the data processed by the method correspond to the raw input data: \(\mathbf{x}=\mathbf{x}_{\text{raw}}\) and \(\mathbf{m}=\mathbf{m}_{\text{raw}}\).

**Previous.** Missing time stamps take the value of the closest previous data point in the time series:
\[\mathbf{x}[t]=\mathbf{x}_{\text{raw}}\big[\max\{t^{\prime}\leq t:\mathbf{m}_{\text{raw}}[t^{\prime}]=1\}\big], \tag{10}\]
and
\[\mathbf{m}[t]=\mathbb{1}_{\{t^{\prime}\leq t\,:\,\mathbf{m}_{\text{raw}}[t^{\prime}]=1\}\neq\emptyset}. \tag{11}\]

**Moving average.** The value of each time stamp is set to the non-weighted average of the data points inside a centered time window:
\[\mathbf{x}[t]=\frac{1}{\mathbf{m}[t]}\sum_{t^{\prime}=t-\sigma}^{t+\sigma}\frac{\mathbf{x}_{\text{raw}}[t^{\prime}]}{2\sigma+1}, \tag{12}\]
where \(\sigma\) is a hyperparameter set to 7 days in our experiments. We also define the associated filtered mask \(\mathbf{m}\) by:
\[\mathbf{m}[t]=\sum_{t^{\prime}=t-\sigma}^{t+\sigma}\frac{\mathbf{m}_{\text{raw}}[t^{\prime}]}{2\sigma+1}, \tag{13}\]
for \(t\in[1,T]\), with the same hyperparameter \(\sigma\).

**Gaussian filter.** We can consider the filtering of Equations 11 to 13 as a gap filling method. The NCC centroid corresponding to class \(k\) is then given by:
\[\mathbf{C}_{k}[t]=\frac{1}{N_{k}C}\sum_{\substack{i=1\\ y_{i}=k}}^{N}\frac{\mathbf{m}_{i}[t]}{\sum_{t^{\prime}=1}^{T}\mathbf{m}_{i}[t^{\prime}]}\,\mathbf{x}_{i}[t]. \tag{14}\]
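For concreteness, the short sketch below implements the **Previous** and **Moving average** fillings of Equations 10 to 13. It is a minimal sketch under our own assumptions: a one-dimensional series whose missing entries are zero-filled in \(\mathbf{x}_{\text{raw}}\) and a Boolean mask \(\mathbf{m}_{\text{raw}}\); the function names are ours, not the paper's.

```python
import numpy as np

def fill_previous(x_raw, m_raw):
    """Previous filling (Eqs. 10-11): propagate the last observed value forward."""
    x = x_raw.astype(float).copy()
    m = np.zeros(len(x_raw))
    last = None
    for t in range(len(x_raw)):
        if m_raw[t]:
            last = x_raw[t]
        if last is not None:          # Eq. (11): 1 once any past observation exists
            x[t], m[t] = last, 1.0
    return x, m

def fill_moving_average(x_raw, m_raw, sigma=7):
    """Moving-average filling (Eqs. 12-13) over a centered window of 2*sigma+1 steps."""
    kernel = np.ones(2 * sigma + 1) / (2 * sigma + 1)
    m = np.convolve(m_raw.astype(float), kernel, mode="same")      # Eq. (13)
    num = np.convolve(x_raw.astype(float), kernel, mode="same")    # windowed mean
    x = np.divide(num, m, out=np.zeros(len(x_raw)), where=m > 0)   # Eq. (12)
    return x, m
```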
In Table 10, we report the MA of both NCC and our method on the TS2C dataset, using these different gap filling settings to compute the NCC centroids. For our method, the only difference between experiments is the initialization of the prototypes. Filling missing data with Gaussian filtering improves over no gap filling by almost +5pt of MA. We also see that the ranking of the different gap filling methods is preserved with our method, which confirms the importance of the initialization of the prototypes when training our model. Qualitatively, Figure 10 illustrates that our approach does not effectively compensate for the inadequate quality of the NCC centroids when gap filling is not employed. In contrast, all three gap filling strategies yield similar learned potato prototypes, even when initialized with NCC centroids that display substantial differences.

In Section 3.3, we also present a filtering scheme for the input data to prevent learning from potential outliers. Note that this can also be done during the assignment step when running NCC. Table 10 also shows that this input filtering is necessary for both NCC and our method to reach their best performance.

## Appendix B - Choice of \(K\)

The number of prototypes \(K\) under supervision exactly corresponds to the number of ground truth classes. Without ground truth labels, the number of prototypes is selected arbitrarily and should be at least as large as the number of expected true classes. In Figure 10, we report the MA of our method with and without offset for different numbers of learned prototypes on the TS2C dataset. Being entirely unsupervised, there is no restriction on how prototypes relate to classes: complex classes can be represented by several prototypes, and others by only a single prototypical time series, as shown in Figure 10. The value \(K=32\) prototypes appears to provide a favorable balance between classification accuracy and the number of learned parameters. As a result, we conducted all our unsupervised experiments using this value.

## Appendix C - Prediction aggregation

Pixel-wise methods in the scope of this paper do not leverage any spatial information or context. Thus, it is expected that whole-image approaches like UTAE reach better performance. However, it is interesting to look for simple yet effective ways to aggregate pixel-wise predictions at the field level in a post-processing step. In this section, we aggregate predictions using (i) ground truth instance segmentation maps, (ii) sliding windows, or (iii) instance segmentation maps obtained with the Segment Anything Model (Kirillov et al., 2023). The following study is performed on PASTIS Fold 2.

**Ground truth instance segmentation (GTI).** We can use the ground truth segmentation of agricultural parcels provided with the PASTIS dataset to obtain an upper bound on what can be achieved in terms of field-level aggregation. For each given parcel, we use majority voting to assign the corresponding label to all pixels of the instance.

**Sliding windows (SW).** We assign to a pixel the majority label inside a patch of size 5\(\times\)5 centered on it.

**Segment Anything Model (SAM).** SAM (Kirillov et al., 2023) is an image segmentation model trained on 1 billion image masks. It is able to generate masks for an entire image or from a given prompt. Here, we use it off-the-shelf, without any additional learning, to generate instance segmentation maps for each image of a given SITS. Combining these possibly contradictory segmentation maps is not easy and is the subject of several related works (Franek et al., 2010; Li et al., 2012; Khelifi and Mignotte, 2016; Lefevre et al., 2019).
As in Lefevre et al. (2019), we first produce a fine segmentation SAM\({}_{\text{raw}}\) by intersecting all the obtained maps: two pixels \(p_{1}\) and \(p_{2}\) belong to the same instance in the final result if and only if they belong to the same instance in all images of the time series, _i.e._:
\[d(p_{1},p_{2})=0,\] (C1)
with \(d\) the number of images in the SITS where pixels \(p_{1}\) and \(p_{2}\) belong to different instances. Then we propose to keep only the instances that are not empty when eroded with a 3\(\times\)3 kernel. We distinguish the _filtered instances_ from the _remaining pixels_ on these SAM\({}_{\text{filt}}\) filtered instance segmentation maps. Examples of SAM\({}_{\text{raw}}\) and SAM\({}_{\text{filt}}\) maps can be found in Figures C1b and C1c, respectively. In Table C1, we show that our method is more accurate on pixels of filtered instances than on remaining pixels, confirming that these instances correspond to clear and consistent spatial structures. Finally, a remaining pixel \(p\) is assigned to the closest filtered instance, i.e., the one containing a pixel \(p^{\prime}\) that minimizes \(d(p,p^{\prime})\). Examples of the final SAM-based instance maps are shown in Figure C1d. We again use majority voting to assign the corresponding label to all pixels of an instance.

We now compare, quantitatively and qualitatively, the prediction aggregation methods described above applied to our supervised method against the whole-image approach UTAE (Garnot and Landrieu, 2021) on the Fold 2 test set of PASTIS. Figure C2 shows how such post-processing steps leverage spatial context to remove noisy predictions. While SW tends to smooth the raw semantic predictions rather than truly aggregate them at the field level, SAM leads to results that are visually close to those obtained with the ground-truth instances. Quantitatively, though, we observe in Table C2 a clear gap between SAM and GTI (9.1% in OA and 5.5% in MA), which encourages the search for better instance-proposal methods. Still, SAM post-processing leads to a +2% increase in both OA and MA, demonstrating that the obtained segments are semantically consistent. Finally, using GTI, our approach outperforms UTAE by +2.6% in OA but trails by 6.2% in MA; here, the post-processing especially helps classify the majority classes.
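The intersection rule of Eq. (C1) and the shared majority-voting step can be sketched in a few lines of NumPy. This is a minimal sketch under our own assumptions, namely that instance maps are integer label images of identical shape; the helper names are ours and do not come from an official SAM or PASTIS codebase.

```python
import numpy as np

def intersect_instance_maps(maps):
    """Intersect per-image instance maps: two pixels share a final instance
    iff they share an instance in every map, i.e. d(p1, p2) = 0 (Eq. C1)."""
    flat = np.stack(maps).reshape(len(maps), -1)           # (n_images, H*W)
    # Pixels with identical per-image id tuples get the same final instance id.
    _, ids = np.unique(flat.T, axis=0, return_inverse=True)
    return ids.reshape(maps[0].shape)

def majority_vote(pred, instances):
    """Replace each pixel's predicted label by the majority label of its instance."""
    out = pred.copy()
    for inst in np.unique(instances):
        mask = instances == inst
        labels, counts = np.unique(pred[mask], return_counts=True)
        out[mask] = labels[np.argmax(counts)]
    return out
```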
Improvements in Earth observation by satellites allow for imagery of ever higher temporal and spatial resolution. Leveraging this data for agricultural monitoring is key to addressing environmental and economic challenges. Current methods for crop segmentation using temporal data either rely on annotated data or are heavily engineered to compensate for the lack of supervision. In this paper, we present and compare datasets and methods for both supervised and unsupervised pixel-wise segmentation of satellite image time series (SITS). We also introduce an approach to add invariance to spectral deformations and temporal shifts to classical prototype-based methods such as K-means and the Nearest Centroid Classifier (NCC). We study different levels of supervision and show that this simple and highly interpretable method achieves the best performance in the low data regime and significantly improves the state of the art for unsupervised classification of agricultural time series on four recent SITS datasets. Our complete code is available at [https://github.com/ElliotVincent/AgriTSC](https://github.com/ElliotVincent/AgriTSC).
Regional Constellation Reconfiguration Problem: Integer Linear Programming Formulation and Lagrangian Heuristic Method

Hang Woon Lee, West Virginia University, Morgantown, WV, 26506

Koki Ho, Assistant Professor, Department of Mechanical and Aerospace Engineering; [email protected]. Member AIAA (Corresponding Author). Georgia Institute of Technology, Atlanta, GA, 30332

This work was presented at the AAS/AIAA Astrodynamics Specialist Conference, Virtual, August 9-11, 2021, as Paper AAS 21-719.

## Nomenclature

_Orbital Elements_

\(a\) = semi-major axis
\(e\) = eccentricity
\(inc\) = inclination
\(M\) = mean anomaly
\(u\) = argument of latitude
\(\omega\) = argument of periapsis
\(\Omega\) = right ascension of the ascending node

_Parameters and Decision Variables_

\(b\) = coverage timeline
\(c\) = assignment cost
\(r\) = coverage threshold
\(T\) = mission planning horizon period
\(v\) = visibility profile
\(V\) = visibility matrix
\(x\) = constellation pattern variable
\(y\) = coverage state variable
\(Z\) = objective function value
\(\varphi\) = assignment variable
\(\pi\) = coverage reward
\(\varepsilon\) = epsilon-constraint method parameter
\(\lambda\) = Lagrange multiplier
\(\vartheta\) = elevation angle
\(\theta\) = subgradient method step size

_Sets and Indices_

\(\mathcal{E}\) = set of edges
\(\mathcal{G}\) = graph
\(\mathcal{I}\) = set of satellites (index \(i\))
\(\mathcal{J}\) = set of orbital slots (index \(j\))
\(\mathcal{N}\) = exchange neighborhood
\(\mathcal{P}\) = set of target points (index \(p\))
\(\mathcal{S}\) = set of subconstellations (index \(s\))
\(\mathcal{T}\) = set of time steps (index \(t\))
\(\mathbb{R}_{\geq 0}\) = set of non-negative real numbers
\(\mathbb{Z}_{\geq 0}\) = set of non-negative integers

# Introduction

Satellite constellation systems often face varying mission requirements and environments during their operations. These variations may arise from changes in the area of interest (e.g., disaster monitoring [1], temporary reconnaissance [2], and theater situational awareness missions), or from modifications to the desired coverage performance, such as switching from sporadic coverage to uninterrupted, single-fold coverage. Moreover, the systems themselves may need to adapt due to the addition of new satellites, for example, through staged deployment [3, 4], or the loss of existing satellites due to failures [5] and/or end-of-life decommissions. Under such circumstances, it is logical for system operators to consider options to "reconfigure" an existing constellation system to maximize the utility of active on-orbit assets rather than launching an entirely new constellation. We define constellation reconfiguration as the process of transforming an existing configuration into another to maintain the system in an optimal state, given a set of new mission requirements [6, 7].

The design of a reconfiguration process is nontrivial and involves interdisciplinary fields of study, such as satellite constellation design theory, orbital transfer trajectory optimization, and mathematical programming, to enable a robust constellation reconfiguration framework. Of particular interest to this paper is the topic of satellite constellation reconfiguration in the context of Earth observations (EO).
Many present-day EO satellite systems are monolithic or small-scale constellation systems distributed in near-polar low Earth orbits (LEO), mostly in sun-synchronous orbits to leverage consistent illumination conditions. Near-polar orbits enable EO satellite systems to scan different parts of the globe in each orbit, making them ideal for detecting changes in the Earth's land cover, vegetation, and civil infrastructure. However, the long revisit time for a particular target makes near-polar orbits unsuitable for missions that require rapid adaptive mission planning and enhanced coverage, such as satellite-based emergency mapping, surveillance, and reconnaissance missions, to name a few [8, 9]. More recently, the EO community has turned its attention to the concept of agile satellites with attitude control capability, which is deemed to enhance overall system responsiveness and scheduling efficiency [10]. Several works have also explored the concept of maneuverable satellites in the domain of EO satellite systems as a new paradigm to bolster system observation capacity by directly manipulating the orbits [11, 12, 13, 14, 1]. In this paper, we investigate the concept of reconfiguration as a means for system adaptability and responsiveness that adds a new dimension to the operation of next-generation EO satellite constellation systems.

The problem of satellite constellation reconfiguration consists of two different, yet coupled, problems: the _constellation design problem_ and the _constellation transfer problem_ [6, 15, 16]. The former deals with the optimal design of a (destination) constellation configuration that satisfies a set of mission requirements; the latter is concerned with the minimum-cost transportation of satellites from one configuration to another, provided the knowledge of both end states. Although we may approach these two interdependent problems independently in a sequential manner (i.e., a destination configuration is first designed, followed by the optimal assignment of satellites to new orbital slots), the outcome of such an open-loop procedure may result in a suboptimal reconfiguration process as a whole [6, 16]. Without taking into account the satellite transportation aspect in design, the optimized new configuration may be too costly or, in fact, infeasible to achieve. This background motivates us to concurrently consider constellation design and transfer aspects in satellite constellation reconfiguration.

The problem of concurrent constellation design and transfer optimization is highly complex and challenging. While it is well known that the constellation transfer problem can be formulated as an assignment problem [17], formulating the constellation design problem faces a unique mathematical programming challenge due to (i) the potentially complex (spatiotemporally-varying) regional coverage and (ii) the reconfiguration problem with the cardinality constraint. First, the constellation design problem for complex regional coverage may need to incorporate design attributes such as heterogeneity among member satellites (e.g., different orbits and hardware specs) and asymmetry in satellite distributions. The classical constellation patterns, such as the streets-of-coverage [18, 19], Walker patterns [20, 21, 22], and the tetrahedron elliptical constellation [23], are limiting due to symmetry and sparsity in satellite distribution, especially with a small number of satellites.
Ref. [24] has shown that, for complex regional coverage, relaxing the symmetry and homogeneity assumptions of the classical methods enables the exploration of a larger design space and hence leads to the discovery of more efficient constellation pattern sets. However, incorporating asymmetric patterns into constellation reconfiguration while considering the transfer cost remains a challenging problem, unaddressed in the literature. Second, in the context of satellite constellation reconfiguration, the design of a destination configuration may be restricted to a given number of satellites, which we refer to as the cardinality constraint. This is a logical assumption to make because, without the enforcement of the cardinality constraint, the optimal design of a destination configuration may require substantially more satellites than are readily available for orbital maneuvers. Launching a set of new satellites to fulfill this deficit within a limited time window can be challenging from a financial and operational perspective. Therefore, addressing the reconfiguration problem while adhering to the cardinality constraint adds an extra layer of complexity.

The challenge in solving the satellite constellation reconfiguration problem lies not only in integrating the design and transfer aspects but also in devising a solution approach that is computationally efficient and yields high-quality solutions, particularly for mission scenarios that require a rapid system response. The way we formulate the integrated design-transfer model determines its mathematical properties and the pool of applicable solution algorithms, which in turn affects the time complexity of retrieving solutions. Several satellite constellation reconfiguration studies have been conducted, covering both constellation design and transfer in various problem settings [1, 2, 5, 11, 25]. These studies have demonstrated the value of concurrent optimization, but their mathematical problem formulations are generally nonlinear and often employ meta-heuristic algorithms. While these algorithms can be efficient in obtaining high-quality solutions, they can be computationally expensive for highly-constrained problems and cannot certify the optimality (or the optimality gap) of the obtained solutions. Therefore, the principal challenge we face in this work involves streamlining the entire pipeline, from formulating the constellation design problem to integrating the constellation design and transfer aspects and developing a computationally-efficient solution method.

The contributions of this paper are as follows. We present an integer linear program (ILP) formulation of the design-transfer problem, referred to as the _Regional Constellation Reconfiguration Problem_ (RCRP). This formulation incorporates both constellation design and constellation transfer aspects, which are typically considered independent and serial in current state-of-the-art techniques. The RCRP utilizes the maximal covering location problem formulation found in facility location problems for constellation design, and the assignment problem for constellation transfer, both of which are ILPs. By integrating these two problems, a larger design space is explored and operators are provided with a trade-off analysis between transportation cost and coverage performance. The proposed model supports various mission concepts of operations that arise in regional coverage missions.
The presented RCRP formulation enables the use of mixed-integer linear programming (MILP) methods, such as the branch-and-bound algorithm, to obtain globally-optimal reconfiguration solutions. However, this approach becomes intractable for moderately-sized instances. To address this challenge, a Lagrangian relaxation-based solution method is proposed for large-scale optimization. This method relaxes a set of constraints to reveal and exploit the special substructure of the problem, making it easier to solve. The results of the computational experiments demonstrate the near-optimality of the Lagrangian heuristic solutions, compared to solutions obtained by a commercial solver, with significantly faster runtime.

The remainder of this paper is organized as follows. Section II provides an overview of the constellation-coverage model and discusses the optimization formulations for the constellation design and transfer problems. Section III presents a mathematical formulation of the integrated design-transfer problem and examines its characteristics. Section IV introduces the developed Lagrangian relaxation-based solution method for addressing the proposed problem formulation. Section V conducts computational experiments to demonstrate the effectiveness of the developed method and provides an illustrative example applied to the case of federated disaster monitoring. Finally, Section VI concludes this paper.

## II Constellation Design and Transfer Problems

In this section, we construct a constellation-coverage model (Section II.A), propose an optimization problem formulation for the constellation design problem (Section II.B), and review the constellation transfer problem model by Ref. [17] (Section II.C). The materials discussed in this section lay the foundation for the integrated design-transfer model in Section III.

Several remarks on notation are in order. The asterisk symbol in superscript \((\cdot)^{*}\) denotes the optimality of a variable \((\cdot)\). \(Z(\cdot)\) denotes the optimal objective function value of a given problem with parameters \((\cdot)\). \(Z_{\text{LP}}\) denotes the optimal value of a given problem with the integrality constraints dropped, hence the name linear programming (LP) relaxation bound. \(\text{Co}(\cdot)\) denotes the convex hull of a set \((\cdot)\), and \(|\cdot|\) denotes the cardinality of a set \((\cdot)\).

### Constellation-Coverage Model

We introduce the constellation-coverage model that relates the configuration of a constellation system to its coverage performance. In this model, the finite time horizon of period \(T\) is discretized into a set of time steps with step size \(\Delta t\). Let \(\mathcal{T}\coloneqq\{0,1,\ldots,m-1\}\), where \(m\Delta t=T\), be the set of time step indices \(t\) such that the set \(\{t(\Delta t):t\in\mathcal{T}\}\) is the discrete-time finite horizon. The set of orbital slot indices is denoted by \(\mathcal{J}\). Each orbital slot \(j\in\mathcal{J}\) is defined by a unique set of orbital elements \(\mathbf{e}_{j}=(a_{j},e_{j},inc_{j},\omega_{j},\Omega_{j},M_{j})\). Here, \(a\), \(e\), \(inc\), \(\omega\), \(\Omega\), and \(M\) represent the semi-major axis, eccentricity, inclination, argument of periapsis, right ascension of the ascending node (RAAN), and mean anomaly of an orbit, respectively. For circular orbits, we use the argument of latitude \(u\). We also let \(\mathcal{P}\) be the set of target point indices \(p\).
#### 1. Model Definitions

For ease of description, and without loss of generality, we consider the model for a single target point.

**Definition 1** (Visibility matrix). Let \(V_{tj}\) denote the Boolean visibility state, which equals 1 if a satellite in orbital slot \(j\) covers the target point at time step \(t\) (and 0 otherwise). We let \(\mathbf{V}=(V_{tj}\in\{0,1\}:t\in\mathcal{T},j\in\mathcal{J})\) denote a visibility matrix. To construct \(\mathbf{V}\), the following parameters need to be specified for each orbital slot: the orbital elements \(\mathbf{e}_{j}\) at the epoch, the minimum elevation angle threshold \(\vartheta_{\min}\) for a target point (and/or the field of view of a satellite sensor), the coordinates of the target point, and the epoch at which the finite time horizon is referenced. With these parameters, the orbital slot is numerically propagated under the governing equations of motion (e.g., \(J_{2}\)-perturbed two-body motion) for a finite time horizon of period \(T\). At each time step \(t\), a Boolean visibility masking is applied to construct an element of the visibility matrix, \(V_{tj}\).

**Definition 2** (Constellation pattern vector). A constellation pattern vector \(\mathbf{x}=(x_{j}\in\{0,1\}:j\in\mathcal{J})\) specifies the relative distribution of satellites in a given system (or simply, the configuration of a constellation system). Each element of \(\mathbf{x}\) is defined as:
\[x_{j}\coloneqq\begin{cases}1,&\text{if a satellite occupies orbital slot }j\\ 0,&\text{otherwise}\end{cases}\]

**Definition 3** (Coverage timeline). Let \(b_{t}\) be the number of satellite(s) in view from the target point at time step \(t\). Then, we let \(\mathbf{b}=(b_{t}\in\mathbb{Z}_{\geq 0}:t\in\mathcal{T})\) denote a coverage timeline, where \(\mathbb{Z}_{\geq 0}\) denotes the set of non-negative integers. Here, the visibility of a satellite from a target point follows from the Boolean visibility masking.

_Remark 1_ (Linear property). We can relate the visibility matrix \(\mathbf{V}\), the constellation pattern vector \(\mathbf{x}\), and the coverage timeline \(\mathbf{b}\) as a linear system. Mathematically,
\[b_{t}=\sum_{j\in\mathcal{J}}V_{tj}x_{j} \tag{1}\]

In this model, the set \(\mathcal{J}\) can comprise orbital slots with different orbital characteristics without any predefined rule. Because satellites in these orbital slots experience different degrees of orbital perturbations over time, the constellation-coverage model is only valid within the specified time horizon of period \(T\). There will be a loss of fidelity in the constellation-coverage relationship beyond the specified time horizon. Such a case is indeed suitable for the planning of many temporary mission operations. However, some cases require persistent coverage of a region of interest over a long-term horizon. To account for this, we make several assumptions about the constellation-coverage model. In what follows, we review the assumptions and the definitions of the Access-Pattern-Coverage (APC) decomposition model by Ref. [24].

#### 2. Special Case: APC Decomposition
To guarantee persistent regional coverage, Ref. [24] introduced a particular constellation-coverage model called the APC decomposition (named after the three finite discrete-time sequences of the special-case model: the visibility (Access) profile, the constellation Pattern vector, and the Coverage timeline) by making two assumptions about the set of orbital slots \(\mathcal{J}\): (i) repeating ground track (RGT) orbits and (ii) a common ground track constellation. Specifically, the conditions are:

1. A ground track is the trace of a satellite's sub-satellite points on the surface of a planetary body. A satellite on an RGT orbit makes \(N_{\text{P}}\) revolutions in \(N_{\text{D}}\) nodal days. There is a finite time horizon of period \(T\) (often called a period of repetition) during which a satellite repeats its closed relative trajectory exactly and periodically. Expressing this condition [26], we get:
\[T=N_{\text{P}}T_{\text{S}}=N_{\text{D}}T_{\text{G}}\]
where \(N_{\text{P}}\) and \(N_{\text{D}}\) are positive integers, \(T_{\text{S}}\) is the nodal period of a satellite due to both nominal motion and perturbations, and \(T_{\text{G}}\) is the nodal period of Greenwich.

2. All satellites in a common ground track constellation share identical semi-major axis \(a\), eccentricity \(e\), inclination \(inc\), and argument of periapsis \(\omega\), but each satellite \(i\) independently holds a pair of right ascension of the ascending node (RAAN) \(\Omega_{i}\) and initial mean anomaly \(M_{i}\) that satisfies the following distribution rule [27]:
\[N_{\text{P}}\Omega_{i}+N_{\text{D}}M_{i}=\text{constant mod }2\pi \tag{2}\]

With these assumptions, we add the following definitions to the model to accommodate the special case.

**Definition 4** (Reference visibility profile). Let \(v_{t}\) denote the Boolean visibility state that equals 1 if a reference satellite covers a target point at time step \(t\) (0 otherwise). Then, we denote by \(\mathbf{v}=(v_{t}\in\{0,1\}:t\in\mathcal{T})\) the reference visibility profile.

**Definition 5** (Visibility circulant matrix). A visibility circulant matrix \(\mathbf{V}\) is the \(m\times m\) matrix whose columns are the cyclic permutations of \(\mathbf{v}\):
\[\mathbf{V}=\text{circ}(\mathbf{v})=\begin{bmatrix}v_{0}&v_{m-1}&\cdots&v_{1}\\ v_{1}&v_{0}&\cdots&v_{2}\\ \vdots&\vdots&\ddots&\vdots\\ v_{m-1}&v_{m-2}&\cdots&v_{0}\end{bmatrix}\]
where the \((t,j)\) entry of \(\mathbf{V}\) is given with the modulo operator as \(V_{t,j}=v_{(t-j)\text{ mod }m}\); \(\text{circ}(\cdot)\) is the circulant operator that takes \(\mathbf{v}\) as the argument and generates the circulant matrix defined above.

_Remark 2_ (Circular convolution operation [24]). We can relate the reference visibility profile \(\mathbf{v}\), the constellation pattern vector \(\mathbf{x}\), and the coverage timeline \(\mathbf{b}\) in the manner prescribed by a _circular convolution operation_. Mathematically,
\[b_{t}=\sum_{j\in\mathcal{J}}v_{(t-j)\text{ mod }m}x_{j} \tag{3}\]

Following from the definition of the \((t,j)\) entry of \(\mathbf{V}\) in Definition 5, Eq. (3) can be written as a linear system in terms of the reference visibility circulant matrix, \(\mathbf{b}=\mathbf{V}\mathbf{x}\), which is in the form of Eq. (1). There are two notable benefits to this special case.
One unique advantage of this special case is that it only requires knowledge of the reference visibility profile and the distribution of satellites along the common relative trajectory to quantify the satellite coverage state \(b_{t}\) of a target point at time step \(t\). The construction of \(\mathbf{V}\) is significantly faster than in the generic case due to the use of the circulant operator. Another advantage is that, as will be discussed later in this paper, having \(\mathbf{V}\) as a circulant matrix can lead to useful mathematical properties. In particular, in Section II.B, we show that the upper bound of the LP relaxation of the maximum coverage problem can be analytically computed if \(\mathbf{V}\) is circulant. Although not directly relevant to the main contents of this paper, circulant matrices can also be used to leverage efficient solution methods for certain classes of problems (e.g., set covering problems with circulant matrices, as demonstrated in Refs. [28, 29]).

### Constellation Design: Maximum Coverage Problem

In this subsection, we introduce the Maximum Coverage Problem (MCP), which models the constellation design aspect of a constellation reconfiguration process. The MCP is based on the constellation-coverage model presented in Section II.A and serves as one of the two essential components of the proposed RCRP formulation, which will be introduced in Section III. Additionally, we elucidate some of the interesting properties of this MCP formulation.

Consider a problem setting where \(\mathcal{T}\) is the set of time step indices and \(\mathcal{J}\) is the set of orbital slot indices. Without loss of generality, we consider the problem for a single target point of interest. Given a finite time horizon of period \(T\), the time-dependent observation reward \(\mathbf{\pi}=(\pi_{t}\in\mathbb{R}_{\geq 0}:t\in\mathcal{T})\) is defined for the target point of interest. The goal is to locate \(n\) satellites in \(\mathcal{J}\) such that the total observation reward obtained by covering the target point is maximized. To obtain the reward \(\pi_{t}\) at time step \(t\), the target point must be covered. The target point is considered covered if there are at least \(r_{t}\) satellite(s) in view; the positive time-dependent coverage threshold \(\mathbf{r}=(r_{t}\in\mathbb{Z}_{\geq 0}:t\in\mathcal{T})\) is a user-supplied parameter vector.

We can model this system by defining two sets of decision variables, the constellation pattern variables \(\mathbf{x}\) and the coverage state variables \(\mathbf{y}=(y_{t}\in\{0,1\}:t\in\mathcal{T})\), and a set of inequalities that links \(\mathbf{x}\) and \(\mathbf{y}\). The decision variable \(x_{j}=1\) if a satellite occupies orbital slot \(j\) (\(x_{j}=0\) otherwise; see Definition 2). Each element \(y_{t}\) of the coverage state variables takes the value of unity if and only if the coverage threshold of the target point is satisfied at time step \(t\) (\(y_{t}=0\) otherwise). Mathematically,
\[y_{t}=\begin{cases}1,&\text{if }b_{t}=\sum_{j\in\mathcal{J}}V_{tj}x_{j}\geq r_{t}\\ 0,&\text{otherwise}\end{cases} \tag{4}\]
As can be seen in Eq. (4), the coverage state of the target point is conditionally dependent on the configuration of the constellation system.
We can linearize this relationship by introducing a set of inequalities that links \(\mathbf{x}\) and \(\mathbf{y}\) for all \(t\in\mathcal{T}\):
\[\sum_{j\in\mathcal{J}}V_{tj}x_{j}\geq r_{t}y_{t},\quad\forall t\in\mathcal{T}\]
Because the objective below rewards \(y_{t}=1\) with non-negative coefficients, these inequalities reproduce the conditional relationship of Eq. (4) at optimality. With these conditions as constraints, the MCP maximizes the coverage reward of a satellite configuration with a given number of satellites, \(n\). The preliminary version of the mathematical formulation of the MCP was introduced in our earlier work [30], which employs the big-M method to linearize the conditional constraints. Here, we present an improved formulation of the MCP that achieves a tighter integrality gap.

**Formulation 1** (Maximum coverage problem). MCP is formulated as an integer linear program:
\[(\text{MCP})\quad Z=\max\ \sum_{t\in\mathcal{T}}\pi_{t}y_{t} \tag{5a}\]
\[\text{s.t.}\quad\sum_{j\in\mathcal{J}}V_{tj}x_{j}\geq r_{t}y_{t},\quad\forall t\in\mathcal{T} \tag{5b}\]
\[\sum_{j\in\mathcal{J}}x_{j}=n \tag{5c}\]
\[x_{j}\in\{0,1\},\quad\forall j\in\mathcal{J} \tag{5d}\]
\[y_{t}\in\{0,1\},\quad\forall t\in\mathcal{T} \tag{5e}\]
where \(Z\) denotes the optimal value of the MCP. The objective function (5a) maximizes the total reward earned by covering a given target point. Constraints (5b) couple the configuration of the system to its coverage state of the target point. Constraint (5c) is the cardinality constraint that restricts the number of satellites to a fixed value of \(n\). Constraints (5d) and (5e) define the domains of the decision variables.

The 0-1 integrality constraint on \(y_{t}\) can be relaxed to \(0\leq y_{t}\leq 1\) when \(r_{t}=1\). This avoids unnecessary integer definitions and can facilitate the branch-and-bound algorithm needed to find the optimal solution [31]. Note that for \(r_{t}>1\), \(y_{t}\) may take a fractional value (for instance, if \(r_{t}=2\) and \(\sum_{j\in\mathcal{J}}V_{tj}x_{j}=1\), then \(y_{t}\) can take the value 0.5); therefore, the integrality constraints on \(y_{t}\) must be enforced.

Note that for \(r_{t}=1,\forall t\in\mathcal{T}\), the MCP can be shown to be equivalent to the maximal covering location problem (MCLP) that emerges in many problem contexts, such as the facility location problem. The MCLP seeks to locate a number of facilities such that the weighted coverage of demand nodes is maximized; each facility is pre-specified with a service radius within which it can provide coverage. We can use the following analogy: the satellites are the facilities and the time steps are the demand nodes. (Conversely, this suggests a general varying-radius \(\mathbf{r}\)-fold coverage formulation of the MCLP.) Unfortunately, the equivalence of the formulations informs us that the MCP is NP-hard; this follows from the NP-hardness of the MCLP [32] via a reduction from the MCLP to the MCP. For more information on the mathematical formulation and the applications of the MCLP, readers are encouraged to refer to the original study by Church and ReVelle [33].
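Formulation 1 is compact enough to hand directly to an off-the-shelf MILP solver. The sketch below is one possible transcription using SciPy's `milp` interface (available in SciPy 1.9+); the toy visibility matrix is built with the circulant operator of Definition 5, and all parameter values are illustrative only, not taken from the paper's experiments.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

def solve_mcp(V, pi, r, n):
    """Solve Formulation 1: max sum(pi*y) s.t. V x >= r*y, sum(x) = n, x and y binary."""
    T, J = V.shape
    c = np.concatenate([np.zeros(J), -pi])  # milp minimizes, so negate the rewards
    coverage = LinearConstraint(np.hstack([V, -np.diag(r)]), 0.0, np.inf)        # (5b)
    cardinality = LinearConstraint(
        np.concatenate([np.ones(J), np.zeros(T)]).reshape(1, -1), n, n)          # (5c)
    res = milp(c=c, constraints=[coverage, cardinality],
               integrality=np.ones(J + T), bounds=Bounds(0, 1))                  # (5d)-(5e)
    assert res.success, res.message
    x = res.x[:J].round().astype(int)
    y = res.x[J:].round().astype(int)
    return x, y, -res.fun

# Toy instance: m = 8 time steps, V = circ(v) as in Definition 5, n = 2 satellites,
# uniform rewards, and single-fold coverage (r_t = 1).
v = np.array([1, 1, 0, 0, 0, 0, 0, 1])
V = np.column_stack([np.roll(v, j) for j in range(len(v))])  # V[t, j] = v[(t - j) mod m]
x, y, Z = solve_mcp(V, pi=np.ones(8), r=np.ones(8), n=2)
```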
Expressing \(Z_{\text{LP}}\) in terms of an optimal LP solution \((\mathbf{x}^{*},\mathbf{y}^{*})\), we get:
\[Z_{\text{LP}}=\sum_{t\in\mathcal{T}}\pi_{t}y_{t}^{*} \tag{6}\]
where \(y_{t}^{*}\) is determined from \(x_{j}^{*}\) as
\[y_{t}^{*}=\min\left(\frac{1}{r_{t}}\sum_{j\in\mathcal{J}}V_{tj}x_{j}^{*},1\right) \tag{7}\]
Equation (7) follows from the fact that the MCP is a maximization problem: the \(y_{t}\) variables take their maximum values as bounded by Constraints (5b). The second argument of the \(\min(\cdot)\) operator bounds the maximum of \(y_{t}\) to one, conforming with Eq. (4). The discussion of the LP relaxation bound of the MCP will be revisited in Section IV for the proposed solution method; we therefore provide additional implications of \(Z_{\text{LP}}\) in this subsection for completeness.

First, we notice that \(y_{t}^{*}\) in Eq. (6) requires knowledge of the optimal LP solution \(x_{j}^{*}\). However, under special conditions, we can closely approximate \(Z_{\text{LP}}\) by computing an upper bound \(\hat{Z}_{\text{LP}}\) that requires no knowledge of \(x_{j}^{*}\). To show this, we derive \(\hat{Z}_{\text{LP}}\) from Eq. (6) by moving the summation inside the \(\min(\cdot)\) function. Expanding the first argument further to separate the first column of \(\mathbf{V}\) and defining \(\xi_{t}\coloneqq\pi_{t}/r_{t}\) as the reward-to-requirement ratio, we obtain Eq. (8):
\[\hat{Z}_{\text{LP}}\coloneqq\min\left(\underbrace{n\sum_{t\in\mathcal{T}}\xi_{t}V_{t1}}_{(1)}+\underbrace{\sum_{j\in\mathcal{J}\setminus\{1\}}x_{j}^{*}\left[\sum_{t\in\mathcal{T}}\xi_{t}V_{tj}-\sum_{t\in\mathcal{T}}\xi_{t}V_{t1}\right]}_{(2)},\ \sum_{t\in\mathcal{T}}\pi_{t}\right)\geq Z_{\text{LP}} \tag{8}\]
where we group the first argument of the \(\min(\cdot)\) function into two terms. If \(\xi_{t}=\xi\) for all \(t\in\mathcal{T}\) and \(\mathbf{V}\) is a circulant matrix, then the terms within the bracket in Term (2) cancel out. Most problem instances we deal with in this paper assume \(r_{t}=r,\forall t\in\mathcal{T}\) (time-invariant, \(r\)-fold continuous coverage) and \(\pi_{t}=\pi,\forall t\in\mathcal{T}\) (uniform coverage reward), such that Term (2) vanishes. Therefore, we can conveniently express \(\hat{Z}_{\text{LP}}=\min(n\sum_{t\in\mathcal{T}}\xi_{t}v_{t},\sum_{t\in\mathcal{T}}\pi_{t})\) as a function of the known parameters \(n\), \(\xi\), and \(\mathbf{v}\), without needing to solve the LP.

**Example 1** (5-satellite MCP). Let \(\mathbf{e}_{0}=(a,e,inc,\Omega,u)=(12\,758.5\,\text{km},0,50^{\circ},50^{\circ},0^{\circ})\) be the orbital elements of the reference satellite defined in the J2000 frame. This corresponds to an RGT ratio of \(N_{\text{P}}/N_{\text{D}}=6/1\); that is, a satellite makes six revolutions in one nodal day. Assume a single target point of interest \(p\) with the geodetic coordinates \((40^{\circ}\text{N},100^{\circ}\text{W})\). The minimum elevation angle threshold \(\vartheta_{\min}\) for the target is set to 10 deg. Suppose we wish to maximize the coverage of this target point with five satellites. For the MCP-specific parameters, we let \(r_{t}=1,\forall t\in\mathcal{T}\) and \(\pi_{t}=1,\forall t\in\mathcal{T}\); this simplifies the MCP to the coverage percentage maximization problem.
Solving the MCP to optimality, we get the optimum of \(Z=398\), which translates into 79.6% temporal coverage of target point \(p\) by the optimal five-satellite configuration during the given repeat period \(T\). Interpreting the optimal solution \(\mathbf{x}^{*}\), all satellites have identical \(a\), \(e\), and \(inc\) values, but each satellite \(i\) holds the following pair \((\Omega_{i},u_{i})\): satellite 1 has \((\Omega_{1},u_{1})=(92.48^{\circ},105.12^{\circ})\), satellite 2 has \((\Omega_{2},u_{2})=(178.16^{\circ},311.04^{\circ})\), satellite 3 has \((\Omega_{3},u_{3})=(196.16^{\circ},203.04^{\circ})\), satellite 4 has \((\Omega_{4},u_{4})=(281.12^{\circ},53.28^{\circ})\), and satellite 5 has \((\Omega_{5},u_{5})=(6.80^{\circ},259.20^{\circ})\). Note that one can check that these satellites conform with the distribution rule shown in Eq. (2) (replacing \(M_{i}\) with \(u_{i}\) for circular orbits). Because \(\mathbf{r}\) and \(\mathbf{\pi}\) are time-invariant, we can easily approximate \(Z_{\text{LP}}\) by computing Term (1) of Eq. (8), resulting in \(\hat{Z}_{\text{LP}}=410\). In fact, by directly solving the LP relaxation problem, we obtain \(Z_{\text{LP}}=410\), which is identical to \(\hat{Z}_{\text{LP}}\). The results are visualized in Fig. 1.

Fig. 1: MCP solution for Example 1.

### Constellation Transfer: Assignment Problem

The reconfiguration of a constellation incurs costs. In this subsection, we investigate the application of the Assignment Problem (AP) as a means to model the constellation transfer component of the reconfiguration process. Furthermore, we analyze the special mathematical properties of the AP, which we subsequently utilize to devise the computationally efficient solution method presented in Section IV.

The transfer problem can be described over a bipartite graph \(\mathcal{G}=(\mathcal{I}\cup\mathcal{J},\mathcal{E})\), in which the nodes of the set \(\mathcal{I}\) describe the locations of the satellites while the nodes of the set \(\mathcal{J}\) describe the locations of the orbital slots. Each edge \((i,j)\in\mathcal{E}\) is associated with a weight, the cost \(c_{ij}\) of transferring satellite \(i\) to orbital slot \(j\), commonly represented by the required \(\Delta v\) or the time of flight. Within this framework, the transfer problem can be further divided into two main components: (i) the combinatorial optimization to find the minimum-cost assignment of satellites from one configuration to another, and (ii) the orbital transfer trajectory design between a given (satellite, orbital slot) pair. The first component is concerned with minimum-cost bipartite matching, which can be formulated as an assignment problem [17], as shown in Formulation 2. The second component deals with the construction of the cost matrix by evaluating the weights of the edges in the bipartite graph. One can quantify each edge weight by solving an orbital boundary value problem. Because enumerating every edge can be time-consuming, several studies have proposed rapid closed-form approximations of the true cost matrices [6, 34]. In this paper, we limit the scope of our work to high-thrust systems due to their benefit of timely reconfiguration, although low-thrust systems can also be considered as an alternative option.
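To illustrate what a cost matrix might look like, the sketch below builds \(c_{ij}\) from textbook impulsive-maneuver formulas: a coplanar Hohmann transfer plus a separate pure plane change. This is emphatically a crude stand-in for the boundary-value solutions or closed-form approximations of Refs. [6, 34, 35], not the paper's cost model; it ignores RAAN and phasing costs, and all function names are ours.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def hohmann_dv(r1, r2):
    """Total impulsive delta-v (km/s) of a coplanar Hohmann transfer
    between circular orbits of radii r1 and r2 (km)."""
    dv1 = np.sqrt(MU / r1) * (np.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = np.sqrt(MU / r2) * (1 - np.sqrt(2 * r1 / (r1 + r2)))
    return abs(dv1) + abs(dv2)

def plane_change_dv(r, delta_inc):
    """Delta-v (km/s) of a pure inclination change delta_inc (rad)
    performed on a circular orbit of radius r (km)."""
    return 2 * np.sqrt(MU / r) * np.sin(abs(delta_inc) / 2)

def build_cost_matrix(satellites, slots):
    """c[i, j]: rough delta-v for satellite i to reach orbital slot j.
    Each orbit is a (radius_km, inclination_rad) pair."""
    c = np.zeros((len(satellites), len(slots)))
    for i, (ri, inci) in enumerate(satellites):
        for j, (rj, incj) in enumerate(slots):
            # The plane change is cheapest at the larger (slower) radius.
            c[i, j] = hohmann_dv(ri, rj) + plane_change_dv(max(ri, rj), incj - inci)
    return c
```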
**Formulation 2** (Assignment problem).: Let \\(\\mathcal{I}=\\{1,\\ldots,n\\}\\) denote the set of workers and \\(\\mathcal{J}=\\{1,\\ldots,m\\}\\) denote the set of projects. The cost of assigning worker \\(i\\) to project \\(j\\) is represented by \\(c_{ij}\\). In the case of an unbalanced AP, the goal is to find the minimum-cost assignment of \\(n\\) workers to \\(m\\) projects such that all workers are assigned to projects, but not every project is assigned a worker (i.e., when \\(n<m\\)). AP can be formulated as an integer linear program: \\[\\text{(AP)}\\quad\\text{min} \\sum_{i\\in\\mathcal{I}}\\sum_{j\\in\\mathcal{J}}c_{ij}\\varphi_{ij}\\] s.t. \\[\\sum_{j\\in\\mathcal{J}}\\varphi_{ij}=1, \\forall i\\in\\mathcal{I}\\] \\[\\sum_{i\\in\\mathcal{I}}\\varphi_{ij}\\leq 1, \\forall j\\in\\mathcal{J}\\] \\[\\varphi_{ij}\\in\\{0,1\\}, \\forall i\\in\\mathcal{I},\\forall j\\in\\mathcal{J}\\] where the decision variable \\(\\varphi_{ij}=1\\) if worker \\(i\\) is assigned to project \\(j\\) (\\(\\varphi_{ij}=0\\) otherwise).

AP has garnered significant attention in the field of satellite constellation reconfiguration research as an optimization model for the constellation transfer problem. Introduced in a study by de Weck et al. [17], a constellation transfer problem can be intuitively modeled as an AP using the following analogy: the satellites are the workers and the orbital slots are the projects; the coefficient \\(c_{ij}\\) is the cost (e.g., the fuel consumption) of transferring satellite \\(i\\) to orbital slot \\(j\\). The objective is to determine the minimum-cost assignment of \\(n\\) satellites to \\(m\\) orbital slots. In this paper, we adopt this transcription of the AP formulation in the modeling of the constellation transfer problem. For more information about the specific cost matrix generation used in this paper, refer to Ref. [35].

We briefly discuss the concepts of totally unimodular (TU) matrices and integral polyhedra, which underpin the discussion of the assignment problem and will be used again later in this paper.

**Definition 6** (Total unimodularity).: An integral matrix \\(\\mathbf{A}\\) is TU if every square sub-matrix of \\(\\mathbf{A}\\) has determinant equal to \\(0\\), \\(1\\), or \\(-1\\).

**Theorem 1** (Hoffman-Kruskal [36]).: _Let \\(\\mathbf{A}\\) be an integral matrix. The polyhedron \\(\\{\\mathbf{x}:\\mathbf{Ax}\\leq\\mathbf{b},\\mathbf{x}\\geq\\mathbf{0}\\}\\) is integral for every integral vector \\(\\mathbf{b}\\) if and only if \\(\\mathbf{A}\\) is TU._

One special feature of AP is that the problem satisfies Theorem 1. That is, the constraint matrix of AP, also known as the incidence matrix of a bipartite graph, is totally unimodular, and the right-hand-side vector is integral. As a result, the extreme points of the corresponding polytope are integral. We say that such a problem possesses the _integrality property_. Consequently, the problem can be solved efficiently as a linear program (e.g., using the simplex or interior-point methods) by relaxing the integrality constraints, also known as the linear programming relaxation, and still yield integral optimal solutions.
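In practice, small AP instances of this form can be solved with off-the-shelf routines. The following sketch, on a hypothetical \\(\\Delta v\\) cost matrix, uses scipy's linear_sum_assignment, which also handles the rectangular (unbalanced, \\(n<m\\)) case:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n, m = 5, 12                        # n satellites, m candidate orbital slots (n < m)
C = rng.uniform(0.0, 1.5, (n, m))   # hypothetical delta-v cost matrix c_ij, km/s

# Unbalanced AP: every satellite is assigned a slot, at most one satellite per slot.
rows, cols = linear_sum_assignment(C)
for i, j in zip(rows, cols):
    print(f"satellite {i} -> slot {j} (cost {C[i, j]:.3f} km/s)")
print(f"total transfer cost: {C[rows, cols].sum():.3f} km/s")
```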
Specialized algorithms are also available, such as the polynomial-time Hungarian algorithm [37] (also known as the Kuhn-Munkres algorithm; its asymptotic complexity is known to be \\(\\mathcal{O}(m^{3})\\) for a square \\(m\\times m\\) matrix) and auction algorithms (with pseudopolynomial complexity [38] and polynomial complexity using \\(\\epsilon\\)-scaling [39]).

**III. Regional Constellation Reconfiguration Problem**

**A. Problem Description**

Suppose a group of heterogeneous* satellites is undertaking a reconfiguration process to form a new configuration that maximizes the observation reward of a newly emerged set of spot targets, denoted as \\(\\mathcal{P}\\). Each target \\(p\\) in \\(\\mathcal{P}\\) is associated with a time-dependent observation reward \\(\\pi_{tp},\\forall t\\in\\mathcal{T}\\), where \\(\\mathcal{T}\\) is the set of time step indices. The reconfiguration process involves (i) the design of the maximum-reward destination configuration and (ii) the minimum-cost assignment of satellites between the initial and destination configurations. The goal of the problem is to identify a set of non-dominated solutions in the objective space spanned by these two competing objectives (i) and (ii).

Footnote *: The term heterogeneity embodies a general mission scenario of a federated system of satellites with different hardware specifications, orbital elements, and/or fuel states.

We will refer to this problem as the Regional Constellation Reconfiguration Problem, or RCRP for short. We use the term "regional constellations" to emphasize heterogeneity and asymmetry as distinctive design philosophies, in contrast to the homogeneity and symmetry of traditional global constellations.

**B. Mathematical Formulation**

In this subsection, we propose a mathematical formulation of the RCRP. In the proposed formulation, we consider a general case that accommodates multiple target points and multiple subconstellations [24]. A subconstellation is a group of satellites in a given constellation system that share common orbital characteristics such as the semi-major axis, eccentricity, inclination, and argument of periapsis. A constellation system can therefore consist of multiple subconstellations (an example would be a multi-layered constellation).

Let \\(\\mathcal{I}\\) be the set of satellite indices (index \\(i\\)), \\(\\mathcal{J}\\) be the set of orbital slot indices (index \\(j\\)), \\(\\mathcal{P}\\) be the set of target point indices (index \\(p\\)), \\(\\mathcal{S}\\) be the set of subconstellation indices (index \\(s\\)), and \\(\\mathcal{T}\\) be the set of time step indices (index \\(t\\)). By extending the concept of subconstellations to orbital slots, the set of all orbital slot indices \\(\\mathcal{J}\\) can be partitioned into \\(|\\mathcal{S}|\\) subsets such that \\(\\mathcal{J}=\\bigcup_{s\\in\\mathcal{S}}\\mathcal{J}_{s}\\), where \\(\\mathcal{J}_{s}\\subseteq\\mathcal{J}\\) denotes the set of orbital slot indices of subconstellation \\(s\\in\\mathcal{S}\\). For the case with the RGT orbit assumption, we enforce the _synchronous condition_ to guarantee identical periods of repetition for all subconstellations: \\(T_{s}=T,\\forall s\\in\\mathcal{S}\\), where \\(T_{s}\\) is the period of repetition for subconstellation \\(s\\).
We also define the parameters \\(c_{ijs}\\) as the cost of assigning satellite \\(i\\) to orbital slot \\(j\\) of subconstellation \\(s\\) (\\(c_{ijs}\\geq 0\\)), \\(\\pi_{tp}\\) as the reward for covering target point \\(p\\) at time step \\(t\\) (\\(\\pi_{tp}\\geq 0\\)), \\(r_{tp}\\) as the coverage threshold for target point \\(p\\) at time step \\(t\\) (\\(r_{tp}\\geq 1\\)), and \\(b_{tp}\\) as the number of satellites in view from target point \\(p\\) at time step \\(t\\) (\\(b_{tp}\\geq 0\\)). Furthermore, we define the parameter \\(V_{tjps}\\) as follows: \\[V_{tjps}=\\begin{cases}1,&\\text{if a satellite in orbital slot $j$ of subconstellation $s$ covers target point $p$ at time step $t$}\\\\ 0,&\\text{otherwise}\\end{cases}\\] The decision variables of interest are \\(\\varphi_{ijs}\\) and \\(y_{tp}\\), defined as follows, respectively: \\[\\varphi_{ijs}=\\begin{cases}1,&\\text{if satellite $i$ is allocated to orbital slot $j$ of subconstellation $s$}\\\\ 0,&\\text{otherwise}\\end{cases}\\] \\[y_{tp}=\\begin{cases}1,&\\text{if target point $p$ is covered at time step $t$ ($b_{tp}\\geq r_{tp}$)}\\\\ 0,&\\text{otherwise}\\end{cases}\\] With the above notations, the mathematical formulation of RCRP is as follows: \\[\\text{(RCRP)}\\quad\\min\\ \\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}c_{ijs}\\varphi_{ijs},\\qquad\\max\\ \\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}y_{tp} \\tag{9a}\\] \\[\\text{s.t.}\\quad\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\varphi_{ijs}=1, \\forall i\\in\\mathcal{I} \\tag{9b}\\] \\[\\sum_{i\\in\\mathcal{I}}\\varphi_{ijs}\\leq 1, \\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S} \\tag{9c}\\] \\[\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi_{ijs}\\geq r_{tp}y_{tp}, \\forall t\\in\\mathcal{T},\\forall p\\in\\mathcal{P} \\tag{9d}\\] \\[\\varphi_{ijs}\\in\\{0,1\\}, \\forall i\\in\\mathcal{I},\\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S} \\tag{9e}\\] \\[y_{tp}\\in\\{0,1\\}, \\forall t\\in\\mathcal{T},\\forall p\\in\\mathcal{P} \\tag{9f}\\]

RCRP is formulated as a bi-objective ILP. The first objective function in (9a) minimizes the total cost of a constellation reconfiguration process, while the second objective function in (9a) maximizes the total reward earned by covering a set of target points. Constraints (9b) and (9c) are the AP-based constraints; Constraints (9b) ensure that every satellite is assigned to an orbital slot, and Constraints (9c) restrict each orbital slot to be occupied by at most one satellite. Constraints (9d) are the MCP-based constraints; these constraints ensure that target point \\(p\\) is covered at time step \\(t\\) only if there exist at least \\(r_{tp}\\) satellites in view. Note that the cardinality constraint of MCP [i.e., Constraint (5c) of Formulation 1] is omitted in this formulation because it is implied by the satellite index set \\(\\mathcal{I}=\\{1,\\ldots,n\\}\\) and the AP constraints. Constraints (9e) and (9f) define the domains of the decision variables.

Notice that the decision variables of RCRP are in the form of the AP decision variables; the reasoning behind this choice is as follows. The decision variable \\(\\varphi_{ijs}\\) of AP indicates an assignment of satellite \\(i\\) to orbital slot \\(j\\) of subconstellation \\(s\\), while the decision variable \\(x_{js}\\) of MCP indicates whether a satellite occupies orbital slot \\(j\\) of subconstellation \\(s\\). Therefore, it follows naturally that \\(\\varphi_{ijs}\\) are the elemental decision variables because we can deduce \\(x_{js}\\) from \\(\\varphi_{ijs}\\) (see Fig. 2).
The following relationship couples these two different sets of decision variables along with Constraints (9b) and (9c): \\[x_{js}=\\sum_{i\\in\\mathcal{I}}\\varphi_{ijs},\\quad\\forall j\\in\\mathcal{J},\\forall s\\in\\mathcal{S} \\tag{10}\\] where both \\(\\varphi_{ijs}\\) and \\(x_{js}\\) are binary variables. This coupled relationship in Eq. (10) enables an integrated ILP formulation that simultaneously considers both the constellation transfer problem and the constellation design problem.

### Model Characteristics

The RCRP formulation possesses the following characteristics. First, RCRP is NP-hard because of the embedded MCP structure (cf. Formulation 1). This deduction follows from the NP-hardness of MCLP [32], which has been shown to be a particular case of MCP (see the discussion in Section II.B). Second, the AP structure [Constraints (9b), (9c), and (9e)] is preserved in RCRP, with the decision variables being AP-based. In this perspective, the complicating constraints are Constraints (9d).

The RCRP formulation combines the constellation transfer problem, via the AP formulation, with the constellation design problem, via the MCP formulation. The former exhibits a special structure, the integrality property, that enables an efficient solution approach. The latter, however, is a combinatorial optimization problem, making the use of exact methods, such as the branch-and-bound algorithm, computationally expensive. In light of this observation, we develop a solution method in Section IV that capitalizes on the characteristics of the RCRP formulation.

Fig. 2: Decision variables of AP and MCP and their relationship.

**IV. Lagrangian Relaxation-Based Solution Method**

This section develops a solution method for the RCRP, which is a bi-objective combinatorial optimization problem. To approach the bi-objective formulation, we use the \\(\\varepsilon\\)-constraint method [40] to transform RCRP into a single-objective optimization problem, which is then solved in series for varying \\(\\varepsilon\\) values. The transformed single-objective problem can be solved using a commercial MILP solver; however, this approach can become computationally challenging even for moderately-sized instances. Motivated by this background and the need to rapidly characterize reconfiguration trade-offs for timely reconfiguration, we propose a computationally efficient Lagrangian relaxation-based solution method that leverages the unique structure of the model.

**A. \\(\\varepsilon\\)-constraint Reformulation**

The objective of RCRP is to identify a set of non-dominated solutions, as specified by its bi-objective formulation. To solve this problem, we reformulate RCRP as a single-objective optimization problem via the \\(\\varepsilon\\)-constraint method by casting one of the two objective functions into a constraint with an upper (or lower) bound \\(\\varepsilon\\). Solving a series of single-objective \\(\\varepsilon\\)-constrained problems to optimality for a given sequence of \\(\\varepsilon\\) values yields the set of non-dominated solutions, also known as the Pareto front, of the original problem. Selecting an appropriate objective function for the constraint transformation is important because the choice made in this step affects the downstream algorithmic efforts. Applying the \\(\\varepsilon\\)-constraint method to RCRP, we transform the cost minimization objective function into a constraint that is bounded from above by \\(\\varepsilon\\) [Constraint (11)].
In a physical sense, \\(\\varepsilon\\) represents the maximum allowable aggregated cost of reconfiguration, hence the name _aggregated resource constraint_ (ARC) for Constraint (11). The following is the single-objective model with the ARC: \\[\\text{(RCRP-ARC)}\\quad Z(\\varepsilon)=\\min\\ -\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}y_{tp}\\] \\[\\text{s.t.}\\quad\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}c_{ijs}\\varphi_{ijs}\\leq\\varepsilon \\tag{11}\\] \\[\\text{Constraints (9b)--(9f)}\\]

### Lagrangian Relaxation

The Lagrangian relaxation is a decomposition-based optimization technique used to approach complex problems by dualizing complicating constraints, thereby exposing the remaining "relatively easy" structure for efficient solving (see Ref. [41] for a general overview of the topic). Specifically, in our case, the complicating constraints can be viewed as those of MCP, Constraints (9d), primarily because the AP structure (Section III.C) remains intact in the relaxed problem, along with Constraint (11). For a more in-depth discussion on selecting between competing relaxations, we invite readers to refer to Appendix B. To obtain the _Lagrangian problem_ (LR) of RCRP-ARC, we dualize Constraints (9d): \\[\\text{(LR)}\\quad Z_{\\text{D}}(\\varepsilon,\\mathbf{\\lambda})=\\min\\ -\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}y_{tp}+\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\lambda_{tp}\\left[r_{tp}y_{tp}-\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi_{ijs}\\right]\\] \\[\\text{s.t.}\\quad\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\varphi_{ijs}=1, \\forall i\\in\\mathcal{I}\\] \\[\\sum_{i\\in\\mathcal{I}}\\varphi_{ijs}\\leq 1, \\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S}\\] \\[\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}c_{ijs}\\varphi_{ijs}\\leq\\varepsilon\\] \\[\\varphi_{ijs}\\in\\{0,1\\}, \\forall i\\in\\mathcal{I},\\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S}\\] \\[y_{tp}\\in\\{0,1\\}, \\forall t\\in\\mathcal{T},\\forall p\\in\\mathcal{P}\\] where \\(\\mathbf{\\lambda}=(\\lambda_{tp}\\in\\mathbb{R}_{\\geq 0}:t\\in\\mathcal{T},p\\in\\mathcal{P})\\) is a vector of Lagrange multipliers associated with Constraints (9d), and \\(Z_{\\text{D}}(\\varepsilon,\\mathbf{\\lambda})\\) denotes the optimal value of LR.

_Remark 3_.: For all non-negative \\(\\mathbf{\\lambda}\\), we have \\(Z_{\\text{D}}(\\varepsilon,\\mathbf{\\lambda})\\leq Z(\\varepsilon)\\). It is easy to see this because, for a given optimal solution \\((\\mathbf{\\varphi}^{*},\\mathbf{y}^{*})\\) to RCRP-ARC, the following series of inequalities holds: \\[Z(\\varepsilon) \\geq-\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}y^{*}_{tp}+\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\lambda_{tp}\\left[r_{tp}y^{*}_{tp}-\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi^{*}_{ijs}\\right]\\] \\[\\geq Z_{\\text{D}}(\\varepsilon,\\mathbf{\\lambda})\\] where the first inequality follows from adding a non-positive term to \\(Z(\\varepsilon)\\). The second inequality results from the fact that relaxing Constraints (9d) may expand the feasible region and potentially allow for the discovery of an improving solution that further reduces the objective function value.
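For concreteness, the sketch below assembles a toy RCRP-ARC instance (a single target point, a single subconstellation, and invented costs and visibilities) and solves it with scipy's milp routine; this is purely illustrative and is not the solver setup used in the experiments of Section V:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(2)
n, m, T = 3, 6, 40                  # satellites, orbital slots, time steps (|P| = 1)
c = rng.uniform(0.0, 1.0, (n, m))   # hypothetical transfer costs c_ij
V = (rng.random((T, m)) < 0.3).astype(float)  # hypothetical visibility matrix V_tj
pi, r, eps = np.ones(T), np.ones(T), 2.0      # rewards, thresholds, ARC budget

# Variable vector z = [phi_11..phi_1m, ..., phi_n1..phi_nm, y_1..y_T].
nv = n * m + T
obj = np.concatenate([np.zeros(n * m), -pi])  # min -sum_t pi_t y_t

A_asg = np.zeros((n, nv))                     # Eq. (9b): each satellite gets one slot
for i in range(n):
    A_asg[i, i * m:(i + 1) * m] = 1.0
A_slot = np.zeros((m, nv))                    # Eq. (9c): at most one satellite per slot
for j in range(m):
    A_slot[j, j:n * m:m] = 1.0
A_cov = np.hstack([np.tile(V, (1, n)), -np.diag(r)])  # Eq. (9d): V phi - r y >= 0
A_arc = np.concatenate([c.ravel(), np.zeros(T)])      # Constraint (11)

cons = [
    LinearConstraint(A_asg, 1, 1),
    LinearConstraint(A_slot, 0, 1),
    LinearConstraint(A_cov, 0, np.inf),
    LinearConstraint(A_arc, 0, eps),
]
res = milp(obj, constraints=cons, integrality=np.ones(nv), bounds=Bounds(0, 1))
print(f"Z(eps) = {res.fun:.2f}" if res.success else res.message)
```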
**C. Lagrangian Dual Problem and Subgradient Method**

Following from Remark 3, we observe that the lower bound \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda})\\) can be tightened (i.e., maximized) by solving for the optimal \\(\\mathbf{\\lambda}^{*}\\). Such a problem is called the _Lagrangian dual problem_ (D) and is formulated as follows: \\[\\text{(D)}\\quad Z_{\\rm D}(\\varepsilon)=\\max_{\\mathbf{\\lambda}}Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda})\\] The Lagrangian dual problem is a non-differentiable optimization problem because \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda})\\) is a piecewise linear, concave function of \\(\\mathbf{\\lambda}\\). To solve this problem, we employ the subgradient method [42], which has been demonstrated to be effective for non-differentiable optimization problems [43]. The subgradient method is an iterative algorithm in the spirit of the gradient ascent method for maximizing a continuously differentiable function. Algorithm 2 provides the pseudocode for the subgradient optimization.

The subgradient method starts by initializing the Lagrange multipliers, \\(\\mathbf{\\lambda}^{0}\\). At iteration \\(k\\), the Lagrangian problem LR\\({}^{k}\\) is solved using the provided parameters \\(\\varepsilon\\), \\(\\mathbf{\\lambda}^{k}\\), \\(\\mathbf{c}\\), \\(\\mathbf{\\pi}\\), \\(\\mathbf{r}\\), and \\(\\mathbf{v}\\). With the optimal solution \\((\\mathbf{\\varphi}^{k},\\mathbf{y}^{k})\\) to LR\\({}^{k}\\), we apply a local search-based heuristic method to obtain an estimate \\(\\hat{Z}(\\varepsilon)\\) of the optimal value \\(Z(\\varepsilon)\\). To ensure that the best estimate of the optimal value is maintained and to aid the convergence of the subgradient method, the best \\(\\hat{Z}(\\varepsilon)\\) up to iteration \\(k\\) is stored in memory as the incumbent optimum. Next, the subgradient \\(\\boldsymbol{g}^{k}\\) of \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda}^{k})\\) at \\(\\mathbf{\\lambda}^{k}\\) is computed: \\[\\boldsymbol{g}^{k}=\\left(r_{tp}y_{tp}^{k}-\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi_{ijs}^{k}:t\\in\\mathcal{T},p\\in\\mathcal{P}\\right)\\] If the subgradient of \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda}^{k})\\) at \\(\\mathbf{\\lambda}^{k}\\) is \\(\\boldsymbol{0}\\), then \\(\\mathbf{\\lambda}^{k}\\) is an optimal Lagrange multiplier vector, and the algorithm terminates. The algorithm may also terminate with suboptimal Lagrange multipliers if any of the following termination criteria are triggered: the maximum iteration count, the gap tolerance between \\(\\hat{Z}(\\varepsilon)\\) and \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda}^{k})\\), or the step size tolerance. With \\(\\mathbf{\\lambda}^{0}=\\boldsymbol{0}\\), the Lagrangian relaxation bound starts at \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda}^{0})=-\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}\\) and improves as the subgradient method progresses. In the case of premature termination due to reaching the maximum iteration limit, the obtained \\(\\mathbf{\\lambda}\\) may be suboptimal, resulting in \\(Z_{\\rm D}(\\varepsilon,\\mathbf{\\lambda})<Z_{\\rm D}(\\varepsilon)\\). If the optimal dual variables of the LP relaxation problem are known, they can be used as \\(\\mathbf{\\lambda}^{0}\\); however, this approach is not ideal, as it requires solving the LP relaxation beforehand, which can be computationally expensive for large instances. Unless a termination flag is triggered, the algorithm reiterates the procedure with the new set of Lagrange multipliers.
The rule for updating the Lagrange multipliers is as follows: \\[\\mathbf{\\lambda}^{k+1}\\coloneqq\\max(\\boldsymbol{0},\\mathbf{\\lambda}^{k}+\\theta_{k}\\boldsymbol{g}^{k})\\] where \\(\\max(\\cdot)\\) is the element-wise maximum, which guarantees the non-negativity of \\(\\lambda_{tp}\\). Unless the dualized constraints are equality constraints, which can have associated multipliers unrestricted in sign, the multipliers need to be non-negative to penalize the violated constraints correctly [44]. The step size \\(\\theta_{k}\\) commonly used in practice is: \\[\\theta_{k}\\coloneqq\\frac{\\hat{Z}(\\varepsilon)-Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda}^{k})}{\\|\\boldsymbol{g}^{k}\\|^{2}}\\alpha_{k} \\tag{12}\\] where \\(\\|\\cdot\\|\\) is the Euclidean norm, and \\(\\alpha_{k}\\) is a scalar satisfying \\(0<\\alpha_{k}\\leq 2\\). For the proof of convergence of this step size formula, see Ref. [43]. As recommended by Fisher [41], the starting value of \\(\\alpha_{k}\\) is set to \\(\\alpha_{0}=2\\) and is halved if \\(Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda})\\) fails to increase within a set number of iterations. While different types of \\(\\theta_{k}\\) have been proposed in the literature, the step size formula in Eq. (12) has performed particularly well in our problem settings. The subgradient method suffers from several drawbacks, such as the zigzagging phenomenon and slow convergence to the optimal multipliers \\(\\mathbf{\\lambda}^{*}\\). Several studies have proposed variants of the subgradient method, such as the surrogate Lagrangian relaxation method and the bundle method, to alleviate these issues. Interested readers can refer to Ref. [45] for additional materials on methods for non-differentiable problems.

This algorithm has two computational bottlenecks, both of which occur at every iteration: computing the lower bound \\(Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda}^{k})\\) and computing the upper bound \\(\\hat{Z}(\\varepsilon)\\). To make each iteration of the subgradient method more efficient, we provide efficient ways to compute \\(Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda}^{k})\\) and \\(\\hat{Z}(\\varepsilon)\\).

### Lower Bound: Lagrangian Problem Decomposition

At each iteration of the subgradient optimization, \\(Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda}^{k})\\) is computed. By relaxing the complicating constraints [Constraints (9d)], which are also the linking constraints, we observe that Problem (LR) can be decomposed into two subproblems based on the variable types, \\(\\mathbf{\\varphi}\\) and \\(\\mathbf{y}\\): (LR1), an assignment problem with a side constraint, and (LR2), an unconstrained binary integer linear program. \\[\\text{(LR1)}\\quad Z_{\\mathrm{D}1}(\\varepsilon,\\mathbf{\\lambda})=\\min\\ -\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\lambda_{tp}\\Bigg{[}\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi_{ijs}\\Bigg{]}\\] \\[\\text{s.t.}\\quad\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\varphi_{ijs}=1, \\forall i\\in\\mathcal{I}\\] \\[\\sum_{i\\in\\mathcal{I}}\\varphi_{ijs}\\leq 1, \\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S}\\] \\[\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}c_{ijs}\\varphi_{ijs}\\leq\\varepsilon\\] \\[\\varphi_{ijs}\\in\\{0,1\\}, \\forall i\\in\\mathcal{I},\\forall j\\in\\mathcal{J}_{s},\\forall s\\in\\mathcal{S}\\] \\[\\text{(LR2)}\\quad Z_{\\mathrm{D}2}(\\varepsilon,\\mathbf{\\lambda})=\\min\\ \\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\left(\\lambda_{tp}r_{tp}-\\pi_{tp}\\right)y_{tp}\\quad\\text{s.t.}\\quad y_{tp}\\in\\{0,1\\},\\ \\forall t\\in\\mathcal{T},\\forall p\\in\\mathcal{P}\\] so that \\(Z_{\\mathrm{D}}(\\varepsilon,\\mathbf{\\lambda})=Z_{\\mathrm{D}1}(\\varepsilon,\\mathbf{\\lambda})+Z_{\\mathrm{D}2}(\\varepsilon,\\mathbf{\\lambda})\\). Problem (LR2) is solvable by inspection: set \\(y_{tp}=1\\) if \\(\\lambda_{tp}r_{tp}-\\pi_{tp}<0\\) and \\(y_{tp}=0\\) otherwise.
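To make one iteration of this decomposition concrete, the following sketch runs the subgradient method on a hypothetical single-target instance, assuming the ARC is redundant so that (LR1) collapses to a pure AP solvable with scipy's linear_sum_assignment; the \\(\\alpha_{k}\\)-halving schedule and the termination tolerances of Algorithm 2 are omitted for brevity:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n, m, T = 3, 8, 60                               # satellites, slots, time steps
V = (rng.random((T, m)) < 0.25).astype(float)    # hypothetical visibility V_tj
pi, r = np.ones(T), np.ones(T)                   # rewards and coverage thresholds

lam = np.zeros(T)            # multipliers lambda_t (single target point)
Z_hat, alpha = 0.0, 2.0      # incumbent primal value (y = 0 is feasible) and alpha_k

for k in range(50):
    # (LR1) with the ARC assumed redundant: a pure AP with costs -sum_t lam_t V_tj.
    W = np.tile(-(lam @ V), (n, 1))
    _, cols = linear_sum_assignment(W)
    x = np.zeros(m)
    x[cols] = 1.0                                # slot occupancy deduced from phi

    # (LR2) by inspection: y_t = 1 iff its coefficient lam_t r_t - pi_t is negative.
    y = (lam * r - pi < 0).astype(float)

    Z_D = -pi @ y + lam @ (r * y - V @ x)        # Lagrangian lower bound Z_D(eps, lam)

    # Feasible primal value via the coverage state implied by the assignment.
    y_tilde = (V @ x >= r).astype(float)
    Z_hat = min(Z_hat, -pi @ y_tilde)            # keep the best (smallest) incumbent

    g = r * y - V @ x                            # subgradient at lam
    if not g.any():
        break                                    # zero subgradient: lam is optimal
    theta = alpha * (Z_hat - Z_D) / (g @ g)      # step size of Eq. (12)
    lam = np.maximum(0.0, lam + theta * g)       # projected multiplier update

print(f"Z_D = {Z_D:.2f}, Z_hat = {Z_hat:.2f}")
```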
The tightness of the resulting Lagrangian dual bound relative to the LP relaxation bound depends on the role of Constraint (11):

1. If Constraint (11) is not redundant, the Lagrangian relaxation may provide a tighter bound than the LP relaxation bound: \\(Z_{\\text{LP}}(\\varepsilon)\\leq Z_{\\text{D}}(\\varepsilon)\\). The value of \\(\\varepsilon\\) dictates the tightness of these bounds.

2. If Constraint (11) is redundant, Problem (LR) possesses the integrality property, which is a sufficient condition for \\(Z_{\\text{LP}}(\\varepsilon)=Z_{\\text{D}}(\\varepsilon)\\) [46].

Reiterating, the redundancy of Constraint (11) is only a sufficient condition for the bounds to be equal; it therefore does not follow that \\(Z_{\\text{LP}}(\\varepsilon)<Z_{\\text{D}}(\\varepsilon)\\) for all \\(\\varepsilon<\\varepsilon_{\\text{r}}\\). In some instances, we observe that there exists \\(\\varepsilon_{\\text{c}}<\\varepsilon_{\\text{r}}\\) such that \\(Z_{\\text{LP}}(\\varepsilon)<Z_{\\text{D}}(\\varepsilon)\\) for \\(\\varepsilon<\\varepsilon_{\\text{c}}\\) (and \\(Z_{\\text{LP}}(\\varepsilon)=Z_{\\text{D}}(\\varepsilon)\\) for \\(\\varepsilon\\geq\\varepsilon_{\\text{c}}\\)). Note that if \\(Z_{\\text{LP}}=Z_{\\text{D}}\\), then the optimal Lagrange multipliers are equal to the dual variables associated with Constraints (9d) in the LP relaxation of RCRP-ARC.

Recall the discussion of the LP relaxation bound for MCP in Section II.B. There is a special case, namely \\(\\xi_{t}=\\xi,\\forall t\\in\\mathcal{T}\\) with \\(\\mathbf{V}\\) being a circulant matrix, for which the value of \\(\\hat{Z}_{\\text{LP}}\\) is independent of the optimal LP solution \\(\\mathbf{x}^{*}\\). This analysis is readily extensible to the RCRP-ARC formulation. Consequently, for \\(\\varepsilon\\geq\\varepsilon_{\\text{r}}\\), \\(\\xi_{tp}=\\xi_{p},\\forall t\\in\\mathcal{T}\\), and all relevant \\(\\mathbf{V}\\) matrices being circulant, the approximated LP relaxation bound for RCRP-ARC is simply [cf. Eq. (8)]: \\[\\hat{Z}_{\\text{LP}}\\coloneqq\\max\\left(-\\left|\\mathcal{I}\\right|\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\xi_{tp}v_{tp},\\ -\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}\\right)\\] where \\(\\xi_{tp}\\coloneqq\\pi_{tp}/r_{tp}\\).

### Upper Bound: Primal Heuristic

A simple and straightforward way to obtain \\(\\hat{Z}(\\varepsilon)\\) is to accept the subgradient assignment \\(\\mathbf{\\varphi}^{k}\\) as the valid primal solution and obtain the conforming coverage state \\(\\tilde{\\mathbf{y}}^{k}(\\mathbf{\\varphi}^{k})\\) (note that we distinguish it by the tilde from \\(\\mathbf{y}^{k}\\)), which is simply the coverage state of the constellation configuration obtained from the set of assignments \\(\\mathbf{\\varphi}^{k}\\): \\[\\tilde{y}^{k}_{tp}(\\varphi^{k}_{ijs})=\\begin{cases}1,&\\text{if }\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}\\sum_{i\\in\\mathcal{I}}V_{tjps}\\varphi^{k}_{ijs}\\geq r_{tp}\\\\ 0,&\\text{otherwise}\\end{cases} \\tag{13}\\] This approach always yields feasible solutions to the primal problem because it circumvents the inconsistency between \\(\\mathbf{\\varphi}^{k}\\) and \\(\\mathbf{y}^{k}\\). The reverse direction, obtaining \\(\\tilde{\\mathbf{\\varphi}}^{k}(\\mathbf{y}^{k})\\) from a given \\(\\mathbf{y}^{k}\\), is infeasible in most cases and requires a combinatorial optimization approach in the rare feasible cases. A more sophisticated approach is to search over candidate solutions in the neighborhood \\(\\mathcal{N}(\\mathbf{\\varphi})\\); here, the figure of merit used is \\(\\hat{Z}(\\varepsilon)\\). If the best candidate solution \\(\\mathbf{\\varphi}^{*}\\) outperforms the incumbent solution, it is accepted as the new incumbent solution, and \\(\\hat{Z}(\\varepsilon)\\) and the neighborhood are updated accordingly. The search process then reiterates with the new neighborhood. If not, the local search halts and the local optimum is obtained. The algorithm returns the local optimal solution \\(\\mathbf{\\varphi}^{*}\\), its conforming \\(\\tilde{\\mathbf{y}}^{*}(\\mathbf{\\varphi}^{*})\\) [Eq. (13)], and the corresponding upper bound value \\(\\hat{Z}(\\varepsilon)\\).
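A minimal sketch of this idea, assuming a single target point and ignoring the transfer-cost feasibility check of the actual method (Algorithm 3), is given below; it searches over the slot-occupancy vector only, with a first-improvement acceptance rule:

```python
import numpy as np

def primal_value(x, V, pi, r):
    """Feasible primal objective -sum_t pi_t * y~_t, with y~ the conforming
    coverage state of Eq. (13), here for a single target point."""
    return -float(pi @ (V @ x >= r).astype(float))

def one_exchange_search(x, V, pi, r):
    """1-exchange local search: repeatedly move one occupied slot to one empty
    slot, accepting the first improving move found (first-improvement scheme)."""
    best = primal_value(x, V, pi, r)
    improved = True
    while improved:
        improved = False
        for j_from in np.flatnonzero(x):
            for j_to in np.flatnonzero(x == 0):
                cand = x.copy()
                cand[j_from], cand[j_to] = 0.0, 1.0
                val = primal_value(cand, V, pi, r)
                if val < best:                   # minimization: smaller is better
                    x, best, improved = cand, val, True
                    break
            if improved:
                break
    return x, best

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    V = (rng.random((60, 10)) < 0.25).astype(float)  # hypothetical visibility
    pi, r = np.ones(60), np.ones(60)
    x0 = np.zeros(10)
    x0[:3] = 1.0                                 # 3 satellites in the first 3 slots
    x_best, z_best = one_exchange_search(x0, V, pi, r)
    print(f"local optimum: {z_best:.1f}")
```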
A trade-off between solution quality and computational effort arises in Line 3 of Algorithm 3, where the local optimum of the current neighborhood is determined. A brute-force evaluation of all candidate solutions can be costly for large instances. As such, it is practical to search only within a subset of the neighborhood \\(\\mathcal{N}^{\\prime}\\subseteq\\mathcal{N}\\), which can be either randomly generated or defined by a pre-determined rule. Moreover, a first-come-first-served scheme can be applied to select the first local solution that improves the incumbent solution, effectively reducing the dimension of the search space. It is important to note that the radius of the neighborhood can substantially influence this trade-off. Employing a more general \\(\\kappa\\)-exchange neighborhood local search (\\(\\kappa>1\\)) can improve the quality of the heuristic solution for large instances, but at the cost of increased computational effort.

### Extension: RCRP with Individual Resource Constraints

Various extensions can be made to the proposed formulation and method; one practically important example is discussed here. Constraint (11) in the RCRP-ARC formulation limits the aggregated resource consumed by all system satellites. Along a similar line, we can instead formulate a variant of RCRP that enforces a maximum individual resource consumption on each satellite. This modeling approach has significant practical implications, as not all satellites have identical fuel states prior to a reconfiguration. For example, one can envision a realistic scenario of bringing a group of satellites with different fuel states together to form a federation for a new Earth observation mission [47]. We formulate the RCRP with _individual resource constraints_ (RCRP-IRC) as follows: \\[\\text{(RCRP-IRC)}\\quad\\min\\ -\\sum_{p\\in\\mathcal{P}}\\sum_{t\\in\\mathcal{T}}\\pi_{tp}y_{tp}\\] \\[\\text{s.t.}\\quad\\sum_{s\\in\\mathcal{S}}\\sum_{j\\in\\mathcal{J}_{s}}c_{ijs}\\varphi_{ijs}\\leq\\varepsilon_{i},\\quad\\forall i\\in\\mathcal{I} \\tag{15}\\] \\[\\text{Constraints (9b)--(9f)}\\] As with RCRP-ARC, the Lagrangian problem of RCRP-IRC obtained by dualizing Constraints (9d) decomposes by variable type, and the primal heuristic can be readily applied with the modified definition of \\(\\Phi\\). It should be noted that the analyses presented in Section IV.D can be extended to RCRP-IRC with the appropriate values of \\(\\varepsilon_{\\text{r},i}\\).

## V Computational Experiments

We conduct computational experiments to evaluate the performance of the proposed Lagrangian relaxation-based solution method. In particular, we focus on analyzing the solution quality and the computational efficiency of the Lagrangian heuristic in comparison to the results obtained by a mixed-integer programming (MIP) solver. We first describe the design of experiments in Section V.A and then compare the results obtained by the Lagrangian heuristic and a commercial software package in Section V.B. The primary computational experiments are performed using RCRP-ARC for RGT orbits.
In Section V.C, we provide an illustrative example to demonstrate the versatility of the proposed framework by extending it to a more general case of non-RGT orbits and RCRP-IRC.

### A. Test Instances

We generate test instances by varying the cardinalities of the sets \\(\\mathcal{I}\\), \\(\\mathcal{J}\\), \\(\\mathcal{T}\\), and \\(\\mathcal{P}\\). Each test instance selects one value from each of the following sets: \\(|\\mathcal{I}|\\in\\{10,20\\}\\), \\(|\\mathcal{J}|\\in\\{500,1000,2000\\}\\), and \\(|\\mathcal{P}|\\in\\{10,20,30\\}\\). We assume, without loss of generality, that \\(|\\mathcal{T}|=|\\mathcal{J}|\\). Table 1 presents the sizes of the 18 randomly generated test instances for the RCRP problem. The instance pool ranges from the smallest, which contains up to \\(8.9\\times 10^{26}\\) potentially feasible reconfiguration processes, to the largest, which contains up to \\(9.5\\times 10^{65}\\) potentially feasible reconfiguration processes. Note that in Table 1, the constraint counts exclude the domain definitions of the decision variables. For each RCRP test instance, we generate 10 RCRP-ARC sub-instances with varying values of \\(\\varepsilon\\). Without loss of generality, we set \\(\\varepsilon_{\\max}=\\max c_{ijs}\\) and create a sequence of 10 steps in the interval \\([0,\\varepsilon_{\\max}]\\). In total, we evaluate 18 instances for RCRP, which is equivalent to assessing 180 instances of RCRP-ARC.

Our goal is to capture a wide spectrum of orbital characteristics for the orbital slots and of \\(\\vartheta_{\\min}\\) values. To do so, we randomly generate 18 parameter sets from the parameter space \\(\\{N_{\\text{P}}\\in\\mathbb{Z}_{\\geq 0}:30\\leq N_{\\text{P}}\\leq 45\\}\\times\\{inc\\in\\mathbb{R}_{\\geq 0}:0^{\\circ}\\leq inc\\leq 120^{\\circ}\\}\\times\\{\\vartheta_{\\min}\\in\\mathbb{R}_{\\geq 0}:5^{\\circ}\\leq\\vartheta_{\\min}\\leq 20^{\\circ}\\}\\). Therefore, each RCRP instance has a unique set of \\(N_{\\text{P}}\\), \\(inc\\), and \\(\\vartheta_{\\min}\\) values. For all other parameters, the following values or generation rules are used. For the common orbital characteristics of \\(\\mathcal{J}\\), we let \\(N_{\\text{D}}=3\\) and \\(e=0\\) (circular orbits). Each orbital slot \\(j\\in\\mathcal{J}\\) has a pair of \\((\\Omega_{j},u_{j})\\) that satisfies the distribution rule of the RGT common ground track constellation [24]: \\(N_{\\text{P}}(\\Omega_{j}-\\Omega_{0})+N_{\\text{D}}(u_{j}-u_{0})\\equiv 0\\ (\\text{mod}\\ 2\\pi)\\), where we let \\(\\Omega_{0}=0^{\\circ}\\) and \\(u_{0}=0^{\\circ}\\). We assume \\(r_{tp}=1\\) for all \\(t\\) and \\(p\\). The cost matrix is produced using the combined plane change and Hohmann transfer maneuvers, as well as phasing maneuvers [35]. Additionally, we randomly generate the initial positions of the \\(|\\mathcal{I}|\\) satellites in \\(\\mathcal{J}\\) from a discrete uniform distribution between 0 and \\(|\\mathcal{J}|-1\\). The target points are randomly distributed globally, bounded latitudinally by the inclination of a given test instance; the longitude and latitude each take a value from a uniform distribution. In summary, we can characterize the orbital slots as circular low Earth orbits covering a wide spectrum of inclinations from prograde to retrograde (specifically, the altitude ranges between 478.86 km and 2729.95 km), with the minimum elevation angle threshold ranging from 5\\({}^{\\circ}\\) to 20\\({}^{\\circ}\\).
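As an aside, the distribution rule above is straightforward to sample from. The sketch below, with assumed \\(N_{\\text{P}}\\), \\(N_{\\text{D}}\\), and slot count, draws RAAN values and backs out arguments of latitude that satisfy the rule:

```python
import numpy as np

rng = np.random.default_rng(4)
N_P, N_D = 36, 3                  # hypothetical RGT ratio (revolutions per nodal days)
Omega0, u0 = 0.0, 0.0             # reference slot, rad
m = 500                           # number of orbital slots

# Sample RAANs and back out the argument of latitude from the distribution rule
# N_P (Omega_j - Omega0) + N_D (u_j - u0) = 0 (mod 2*pi).
Omega = rng.uniform(0.0, 2 * np.pi, m)
u = (u0 - (N_P / N_D) * (Omega - Omega0)) % (2 * np.pi)

# Verify the rule holds for every generated slot (residual ~ 0 or ~ 2*pi).
residual = (N_P * (Omega - Omega0) + N_D * (u - u0)) % (2 * np.pi)
assert np.allclose(np.minimum(residual, 2 * np.pi - residual), 0.0, atol=1e-9)
```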
We compare the results of the Lagrangian heuristic and a commercial MIP solver, the Gurobi optimizer 9.1.1. Gurobi, a state-of-the-art solver for MILP problems, is chosen as the benchmark because there is no known specialized solver for RCRP. Note that existing algorithms for constellation reconfiguration are not suitable for this purpose, as they do not account for the added layer of MCP constraints present in the RCRP formulation. Thus, to establish a benchmark for the performance of the Lagrangian heuristic, it is judicious to compare it against a general-purpose but widely-used MIP solver such as the Gurobi optimizer. The Gurobi optimizer utilizes an array of MIP techniques, including but not limited to presolve, branch-and-bound, cutting planes, heuristics, and parallelism, at various phases of optimization to enhance the optimization efficiency with regard to both computation runtime and solution quality.

All computational experiments are coded in MATLAB and executed on a platform with an Intel Core i7-9700 3.00 GHz CPU (8 cores and 8 threads) and 32 GB of memory. In all cases, we let the Gurobi optimizer utilize all 8 cores. The default Gurobi parameters are used except for the duality gap tolerance of 0.5 % [for both the baseline Gurobi case and Problem (LR1)] and the runtime limit of 3600 s. If the Gurobi optimizer has not converged within the runtime limit, it returns the best incumbent primal solution found thus far. For the Lagrangian heuristic, we limit the 1-exchange neighborhood local search to the size \\(|\\mathcal{N}^{\\prime}|\\leq 10|\\mathcal{I}|\\) for the primal heuristic and use the Gurobi optimizer for solving Problem (LR1). In generic terms of the lower bound (LB) and upper bound (UB), we define the duality gap as \\(\\mathrm{DG}=|\\mathrm{LB}-\\mathrm{UB}|/|\\mathrm{UB}|\\). To assess the quality of \\(\\hat{Z}\\) obtained by the Lagrangian heuristic relative to \\(Z_{\\mathrm{G}}\\) obtained by the Gurobi optimizer, we define the _relative performance metric_, \\(\\mathrm{RP}=(\\hat{Z}-Z_{\\mathrm{G}})/Z_{\\mathrm{G}}\\), unrestricted in sign. If \\(\\mathrm{RP}>0\\), the optimum obtained by the Lagrangian heuristic outperforms that of the Gurobi optimizer. If \\(\\mathrm{RP}<0\\), the optimum obtained by the Gurobi optimizer outperforms that of the Lagrangian heuristic. If \\(\\mathrm{RP}=0\\), the obtained optima of both methods are the same.

Table 1: Sizes of RCRP-ARC for the 18 test instances: for each combination of \\(|\\mathcal{I}|\\), \\(|\\mathcal{J}|=|\\mathcal{T}|\\), and \\(|\\mathcal{P}|\\), the numbers of \\(\\boldsymbol{\\varphi}\\) variables, \\(\\boldsymbol{y}\\) variables, total variables, and total constraints.

### B. Computational Experiment Results

Out of 180 RCRP-ARC test instances, we present detailed analyses for 36 instances. Table 2 reports the computational results for test instances with \\(\\varepsilon/\\varepsilon_{\\mathrm{max}}=0.3\\), illustrating scenarios where resources are limited. For the 9 "small" instances, the baseline Gurobi optimizer successfully identified optimal solutions, or solutions within the specified duality gap tolerance of 0.5 %, within the specified runtime limit of 3600 s. However, as the size of the instances increases, we start to observe the Gurobi optimizer triggering the runtime limit. In particular, for instances 17 and 18, we see significant duality gaps of 11.03 % and 70.15 %, respectively. Examining the results of the Lagrangian heuristic, we observe that all 18 instances were solved within 462.24 s.
Comparing the feasible primal solutions to RCRP-ARC, there are 10 instances in which the Gurobi solutions performed better than the Lagrangian heuristic solutions. However, the differences are at most 1.77 %. The Lagrangian heuristic outperformed the Gurobi optimizer for 6 instances, with the largest recorded margin of 26.21 % (instance 18), and obtained optimal solutions for 2 instances.

Table 3 presents the results for cases where resources are abundant, specifically for instances with \\(\\varepsilon/\\varepsilon_{\\mathrm{max}}=0.8\\). All parameters are the same as in Table 2, except for the \\(\\varepsilon\\) value. An increase in the value of \\(\\varepsilon/\\varepsilon_{\\mathrm{max}}\\) leads to an enlargement of the feasible solution set. Out of 18 instances, the Gurobi optimizer solved only one instance (instance 5) to optimality within the runtime limit and one instance (instance 13) to tolerance-optimality by the runtime limit. The Lagrangian heuristic solved all instances with a maximum runtime of 810.97 s. The duality gaps obtained by the Lagrangian heuristic are comparatively larger than those with the lower \\(\\varepsilon\\) because \\(Z_{\\mathrm{D}}\\) converges to \\(Z_{\\mathrm{LP}}\\). However, it is important to note that the Lagrangian relaxation bound is theoretically no worse than the LP relaxation bound (assuming converged multipliers), which suggests that the integrality gap of the problem is significant. For 12 out of 18 instances, the primal solutions obtained by the Lagrangian heuristic outperform the best incumbent primal solution found by the Gurobi optimizer within the runtime limit. The outperformance of the Lagrangian heuristic over the Gurobi optimizer is notably significant for instances 14-18, with the relative performance metric ranging from 11.68 % to 25.84 %. Underperformance of the Lagrangian heuristic is also observed, with the relative performance metric ranging up to 1.36 %.

We present the results of all 180 test instances graphically. Figures 4 and 5 visualize the computational results, showcasing the approximated Pareto fronts of both methods. In these figures, all metrics, \\(\\hat{Z}_{\\mathrm{LP}}\\), \\(Z_{\\mathrm{D}}\\), \\(Z_{\\mathrm{G}}\\), and \\(\\hat{Z}\\), are normalized and have their signs flipped for ease of physical interpretation. Note that generating the true Pareto front of a given RCRP instance requires solving all associated RCRP-ARC instances to optimality.‡ Without an optimality certificate (which is typically proven by the duality gap), the results, \\(Z_{\\text{D}}\\) and \\(Z_{\\text{G}}\\), in Figs. 4 and 5 are deemed approximations of Pareto fronts; the dominated solutions are still included for completeness. Figure 5 corroborates the outperformance of the Lagrangian heuristic for large instances. For instances 1, 5, and 13, we observe that \\(\\hat{Z}_{\\text{LP}}\\) effectively certifies that \\(Z_{\\text{D}}\\) has either converged or is not optimal, although its usefulness appears to be limited. Figures 6 and 7 compare the computation runtimes of the 8-core Gurobi optimizer and the Lagrangian heuristic with no parallel computing implementation [except that we solve Problem (LR1) using the 8-core Gurobi optimizer]. In most cases, we see that the Gurobi optimizer reached the runtime limit of 3600 s.
A notable case is instance 5, in which the initial solution is near-optimal and no significant maneuvers are needed to maximize the total coverage reward.

Footnote ‡: Strictly speaking, the Pareto front in the discrete-time domain is also an approximation of the true Pareto front in the continuous-time domain.

Figure 4: Computational results for instances 1–9. Note that all metrics are normalized and flipped in sign.

Figure 5: Computational results for instances 10–18. Note that all metrics are normalized and flipped in sign.

Figure 6: Runtime results for instances 1–9. The runtime limit of \\(3600\\,\\mathrm{s}\\) is enforced.

Figure 7: Runtime results for instances 10–18. The runtime limit of \\(3600\\,\\mathrm{s}\\) is enforced.

### C. Illustrative Example

#### Problem Setup

Suppose a group of seven satellites in different circular orbits (parameters shown in the left half of Table 4) is tasked with a reconfiguration process to form a federation for a 15-day satellite-based emergency mapping mission to monitor active disaster events and support post-disaster relief operations. The spot targets of interest are Getty, California (\\(34.09^{\\circ}\\mathrm{N},118.47^{\\circ}\\mathrm{W}\\)), Ashekri, Nigeria (\\(11.96^{\\circ}\\mathrm{N},12.93^{\\circ}\\mathrm{E}\\)), and Hunga Tonga-Hunga Ha'apai, Tonga (\\(21.18^{\\circ}\\mathrm{S},175.19^{\\circ}\\mathrm{W}\\)). The coverage rewards are randomly generated following the standard uniform distribution on \\([0,1]\\). We also let \\(r_{tp}=1,\\forall t\\in\\mathcal{T},p\\in\\mathcal{P}\\), and all targets enforce \\(\\vartheta_{\\min}=10^{\\circ}\\).

We consider a set of orbital slots \\(\\mathcal{J}=\\mathcal{J}_{1}\\cup\\cdots\\cup\\mathcal{J}_{|\\mathcal{I}|}\\), where \\(\\mathcal{J}_{i}\\) denotes the set of orbital slots that are \\(\\Delta v\\)-compatible with satellite \\(i\\). This means that the cost of transferring satellite \\(i\\) to orbital slot \\(j\\) in \\(\\mathcal{J}_{i}\\) is less than or equal to the specified \\(\\Delta v\\) value. Each \\(\\mathcal{J}_{i}\\) comprises orbital slots that allow satellite \\(i\\) to perform one of the following four options: (i) change its inclination, (ii) change its RAAN, (iii) perform a coplanar phasing maneuver, or (iv) stay in its orbit. To generate orbital slots for the first option, we determine the boundary inclination values and generate inclinations uniformly within the range \\([inc_{i,\\mathrm{LB}},inc_{i,\\mathrm{UB}}]\\) given \\(\\varepsilon_{i}\\). Similarly, for the second option, we find the boundary values and generate RAAN values uniformly within the range \\([\\Omega_{i,\\mathrm{LB}},\\Omega_{i,\\mathrm{UB}}]\\). For the phasing maneuver, the orbital elements of the orbital slots are the same as those of the satellites except for the arguments of latitude \\(u\\), which are uniformly distributed in \\([0,360^{\\circ})\\).
Lastly, we add \\(|\\mathcal{I}|\\) initial orbits to \\(\\mathcal{J}\\) to allow a no-maneuver option for the satellites. Globally, regardless of maneuvering options, we let \\(a_{j}=a_{i}\\) and \\(e_{j}=0\\). For the first and second options, we generate \\(u_{j}\\) uniformly distributed in the range \\([0,360^{\\circ})\\), similar to the phasing maneuver option. Note that while \\(\\mathcal{J}_{i}\\) is constructed for satellite \\(i\\), satellites besides satellite \\(i\\) may transfer to orbital slots in \\(\\mathcal{J}_{i}\\) as long as the resource constraints are not violated.

Table 3: Computational results for RCRP-ARC test instances with \\(\\varepsilon/\\varepsilon_{\\max}=0.8\\). For each instance, the table reports the Gurobi (8-core) LB, UB (\\(Z_{\\mathrm{G}}\\)), runtime, and DG, and the Lagrangian heuristic LB (\\(Z_{\\mathrm{D}}\\)), UB (\\(\\hat{Z}\\)), runtimes, total runtime, and DG, together with RP. Notes: the Gurobi optimizer utilizes a combination of branch-and-bound, cutting planes, presolve, heuristics, and parallelism; a hyphen (-) indicates the trigger of the runtime limit of 3600 s; the total runtime includes the runtimes for LB, UB, and the intermediate steps of the subgradient method; positive RP indicates the outperformance of the Lagrangian heuristic.

For orbit propagation, we use the SGP4 (Simplified General Perturbations-4) model to account for the differential secular and periodic rates in the change of orbital elements that affect the satellite states and the visibility matrix during the entire specified mission time horizon \\(T=15\\,\\mathrm{days}\\). It is important to note that satellites will continue to be subject to differential orbit perturbations beyond the considered time horizon, and thus, station-keeping maneuvers may be warranted if repeatability of the coverage state is desired. For the cost matrix, and to account for the possibility of an altitude change, we use the combined plane change and Hohmann transfer maneuvers and the coplanar phasing maneuvers as outlined in Chapters 6.5.1 and 6.6.1 of Ref. [35], respectively. The phasing angle is set to \\(180^{\\circ}\\) as the worst-case value, and thus, the corresponding \\(\\Delta v^{\\prime}_{\\mathrm{phasing}}\\) serves as the upper limit for the actual \\(\\Delta v_{\\mathrm{phasing}}\\) required for phasing for a given phasing time. This assumption is made to account for potential inaccuracies in phasing modeling. It allows us to decouple plane changes (inclination and RAAN changes) from phasing, and hence, we only need to enforce resource constraints for plane change maneuvers in the optimization. To calculate the complete \\(\\Delta v\\), we add \\(\\Delta v^{\\prime}_{\\mathrm{phasing}}\\) to \\(\\Delta v_{\\mathrm{pc}}\\), the cost of the plane change maneuver. It is important to factor in the phasing cost when determining the \\(\\varepsilon\\) allocated for a plane change.
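For intuition about the phasing cost term, the following sketch evaluates the standard two-burn coplanar phasing maneuver at the worst-case \\(180^{\\circ}\\) phase shift; the semi-major axis and the number of phasing revolutions are illustrative assumptions, and the exact model used here is that of Ref. [35]:

```python
import numpy as np

MU = 398600.4418  # km^3/s^2

def phasing_dv(a_km: float, delta_u_rad: float, n_rev: int) -> float:
    """Delta-v (km/s) of a coplanar two-burn phasing maneuver that shifts the
    argument of latitude by delta_u_rad over n_rev phasing-orbit revolutions."""
    T_c = 2 * np.pi * np.sqrt(a_km**3 / MU)            # circular orbit period
    T_ph = T_c * (1.0 - delta_u_rad / (2 * np.pi * n_rev))
    a_ph = (MU * (T_ph / (2 * np.pi)) ** 2) ** (1.0 / 3.0)
    v_c = np.sqrt(MU / a_km)                           # circular speed
    v_ph = np.sqrt(MU * (2.0 / a_km - 1.0 / a_ph))     # phasing-orbit speed at r = a
    return 2.0 * abs(v_c - v_ph)                       # entry and exit burns

# Worst-case value used in the cost model: a 180 deg phase shift.
print(f"{phasing_dv(7161.83, np.pi, n_rev=15):.4f} km/s")
```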
The size of the RCRP-IRC instance is as follows. We let \\(|\\mathcal{J}_{i}|=1,801,\\forall i\\in\\mathcal{I}\\), and thus, we have \\(|\\mathcal{J}|=12,607\\). We also set the time step size to \\(120\\,\\mathrm{s}\\), and consequently, \\(|\\mathcal{T}|=10,800\\). With \\(|\\mathcal{I}|=7\\) and \\(|\\mathcal{P}|=3\\), the instance has 88,249 assignment variables, 32,400 coverage state variables, and 45,021 constraints (excluding the decision variable domain definitions).

#### Numerical Results

Letting \\(\\varepsilon_{i}=1\\,\\mathrm{km/s},\\forall i\\in\\mathcal{I}\\), as the \\(\\Delta v\\) budgets allocated for plane change maneuvers, and using the Lagrangian heuristic method with a neighborhood size of \\(|\\mathcal{N}^{\\prime}|\\leq 50|\\mathcal{I}|\\) to solve RCRP-IRC, we obtained \\(\\hat{Z}=-3198.19\\) in 329.60 s with a duality gap of 8.01 %. Note that the initial configuration has a score of \\(Z=-2632.75\\); the LH optimum improved the initial score by 21.48 %. The final configuration is specified in Table 4 (right half). The \\(\\Delta v_{\\mathrm{pc}}\\) column shows the results of the plane change maneuvers.§

Footnote §: To assess the overall \\(\\Delta v\\), we would need to add \\(\\Delta v_{\\mathrm{phasing}}\\), which can be obtained by trading off the time required for the phasing (the longer the phasing time, the lower the phasing cost, and vice versa).

Interestingly, not all satellites fully used their allocated \\(\\Delta v_{\\mathrm{pc}}\\) budgets even though there was no penalty for using them up to the limit. Satellites 2-6 lowered their inclinations to maximize coverage, as the targets were in a low-latitude zone. Satellites 2-5 performed the maximum inclination change possible. In contrast, satellite 1 changed only its RAAN, despite having a near-polar inclination. Satellite 7 was assigned to an orbital slot generated for satellite 1; the maneuver was possible due to the close proximity in the \\(\\Delta v\\) required. This resulted in a decrease in altitude by 14 km and a slight change in RAAN, where the former effectively reduced the swath width of a sensor. Figure 8 illustrates the reconfiguration process results in \\((\\Omega,inc)\\) space. The horizontal and vertical lines indicate, respectively, the range of RAAN and inclination values to which a satellite (in blue) is allowed to transfer given the resource constraint.

By solving the same instance of RCRP-IRC with the Gurobi optimizer using the same settings as in Section V.A, we obtained \\(Z_{\\text{G}}=-3185.51\\) at the end of the runtime limit of 3600 s, with a duality gap of 7.32 %. This solution underperforms \\(\\hat{Z}\\). When the runtime limit was extended to 86 400 s, the Gurobi optimizer obtained \\(Z_{\\text{G}}=-3214.934\\) with a duality gap of 5.26 %, improving on the LH solution by 0.52 %. As the Gurobi optimizer was given more time to converge, it found a better solution than the LH method 24 660 s into the optimization. However, the large scale of the problem prevented the Gurobi optimizer from finding the optimal solution, or proving the optimality of the incumbent solution by closing the gap, within a day of runtime. The progress of the Gurobi optimizer is shown in Fig. 9.

Finally, we conducted an additional experiment with only phasing maneuvers. Both the LH and the Gurobi optimizer produced the same solution, \\(\\hat{Z}=-2849.78\\), in 70.18 s and 1765.68 s, respectively; the Gurobi optimizer spent the extra time proving the optimality of the solution.
The obtained solution improved the initial configuration by 8.24 %; this result corroborates the effectiveness of reconfiguration even when it involves only rephasing among the satellites. The obtained optimal solution is as follows: \\(u_{1}=48^{\\circ},u_{2}=192^{\\circ},u_{3}=60^{\\circ},u_{4}=96^{\\circ},u_{5}=168^{\\circ},u_{6}=336^{\\circ},\\) and \\(u_{7}=348^{\\circ}\\).

Figure 8: Range of reachable orbital slots (shown as lines), with initial (blue circle) and final (red square) configurations.

Table 4: Problem setting (initial configuration, columns 2–5) and obtained Lagrangian heuristic solution (final configuration, columns 6–9). The values in parentheses indicate the change in value; all orbital elements are defined at the epoch, J2000.

| Satellite | \\(a\\), km | \\(inc\\), deg | \\(\\Omega\\), deg | \\(u\\), deg | \\(a\\), km | \\(inc\\), deg | \\(\\Omega\\), deg | \\(u\\), deg | \\(\\Delta v_{\\rm pc}\\), km/s |
|---|---|---|---|---|---|---|---|---|---|
| 1 | 7,161.83 | 95.04 | 48.83 | 275.54 | 7,161.83 | 95.04 | 46.96 (-1.86) | 48.00 | 0.24 |
| 2 | 7,175.16 | 44.33 | 70.13 | 216.91 | 7,175.16 | 36.63 (-7.69) | 70.13 | 0.00 | 1.00 |
| 3 | 7,122.86 | 54.16 | 57.89 | 222.07 | 7,122.86 | 46.49 (-7.66) | 57.89 | 192.00 | 1.00 |
| 4 | 7,106.60 | 44.87 | 40.38 | 240.04 | 7,106.60 | 37.21 (-7.60) | 40.38 | 228.00 | 1.00 |
| 5 | 7,155.26 | 50.92 | 149.45 | 266.71 | 7,155.26 | 43.23 (-7.68) | 149.45 | 324.00 | 1.00 |
| 6 | 7,108.03 | 54.06 | 108.50 | 146.05 | 7,108.03 | 47.45 (-6.60) | 108.50 | 336.00 | 0.86 |
| 7 | 7,175.77 | 93.70 | 49.18 | 62.90 | 7,161.83 (-13.94) | 100.60 (+6.91) | 48.83 (-0.35) | 204.00 | 0.90 |

Figure 9: Primal solution and dual bound over time.

## VI Conclusions

This paper proposes an integrated constellation design and transfer model for solving the RCRP. Given a set of target points, each associated with a time-varying coverage reward and a time-varying coverage threshold, the problem aims to maximize the total reward obtained during a specified time horizon and to minimize the total cost of satellite transfers. The bi-objective formulation results in a trade-off analysis, potentially a Pareto front analysis if all \\(\\varepsilon\\) instances are solved to optimality, in the objective space spanned by the aggregated cost and the total coverage reward. Furthermore, as demonstrated in the illustrative example, the formulation can accommodate different types of orbits, not necessarily restricting orbital slots to RGT orbits. The use of non-RGT orbits requires a user to specify the time horizon \\(T\\) for which the formulation is valid.

The ILP formulation of RCRP-ARC enables users to utilize commercial software packages for convenient handling and for obtaining tolerance-optimal solutions. However, for large-scale real-world instances, the problem suffers from a combinatorial explosion of the solution space. To overcome this challenge and to produce high-quality feasible primal solutions, we developed a Lagrangian relaxation-based heuristic method that combines the subgradient method with the 1-exchange neighborhood local search, exploiting the special substructure of the problem.
The computational experiments in Section V demonstrate the effectiveness of the proposed method, particularly for large-scale instances, producing near-optimal solutions with significantly reduced computational runtime compared to the reference solver. We believe that the developed method provides an important step toward the realization of the concept of reconfiguration as a means for system adaptability and responsiveness, adding a new dimension to the operation of next-generation satellite constellation systems.

## Appendix A Algorithms

This appendix lists the pseudocode of the algorithms discussed in the paper.

```
Input: \\(\\boldsymbol{c},\\boldsymbol{\\pi},\\boldsymbol{r},\\boldsymbol{v}\\)
Output: List of \\(Z(\\varepsilon)\\) values
1  Initialize \\(\\varepsilon\\leftarrow\\varepsilon_{0}\\) by solving AP
   repeat
2      \\(Z(\\varepsilon)\\leftarrow\\) solve RCRP-ARC
3      \\(\\varepsilon\\leftarrow\\varepsilon+\\varepsilon_{\\text{step}}\\)
4  until termination flag is triggered
```
**Algorithm 1** \\(\\varepsilon\\)-constraint method

## Appendix B Selecting between Competing Relaxations

There exist different types of Lagrangian relaxations for RCRP, and the choice of the \\(\\varepsilon\\)-constraint transformation affects the complexity of the downstream algorithmic efforts and the mathematical properties. At first glance, one may observe that the coverage reward maximization objective function can be recast as an \\(\\varepsilon\\)-constraint. Similar to the relaxation proposed in this paper, the resulting Lagrangian relaxation problem would be separable into two subproblems based on the type of variables. In such a case, the \\(\\mathbf{\\varphi}\\) subproblem can be solved as an LP, and the \\(\\mathbf{y}\\) subproblem would be a relatively easy constrained ILP. Therefore, the lower bound calculation would still be computationally efficient. However, the main difference lies in the computation of \\(\\hat{Z}(\\varepsilon)\\). Unlike the case discussed earlier, \\(\\tilde{\\mathbf{y}}^{k}(\\mathbf{\\varphi}^{k})\\) computed from \\(\\mathbf{\\varphi}^{k}\\) would not necessarily satisfy the \\(\\varepsilon\\)-constraint. Hence, additional considerations must come into play in obtaining a feasible primal solution. One viable approach is to solve the reduced formulation of RCRP, which fixes and parameterizes a subset of assignments from \\(\\mathbf{\\varphi}^{k}\\) while optimizing over the complement set. However, this approach becomes computationally expensive for instances with high \\(\\varepsilon\\) values. This approach was explored in our preliminary work [48].

One could attempt to relax an alternative set of constraints that may yield a tighter Lagrangian relaxation bound than the one proposed in this paper. However, Lagrangian relaxation problems with Constraints (9d) retained may be unsuitable for embedding into an iterative algorithm due to computational complexity. The rationale for the relaxation of Constraints (9d) can also be found in a study by Galvao and ReVelle [49], which reported a successful application of the Lagrangian relaxation of the linking constraints for MCLP; in their problem context, the Lagrangian relaxation problem possesses the integrality property.

## Appendix C: Small RCRP Instance

In this appendix, we compare the performance of the developed Lagrangian heuristic method with that of the Gurobi optimizer, a mixed-integer programming solver.
In this case, we set \\(|\\mathcal{I}|=5\\), \\(|\\mathcal{J}|=200\\), \\(|\\mathcal{T}|=200\\), and \\(|\\mathcal{P}|=10\\). Target points are randomly distributed within the maximum latitude bounds determined by the inclination of the satellites' orbits. Let \\(\\mathbf{e}_{0}=(a,e,inc,\\Omega,u)=(8176.5\\,\\mathrm{km},0,75^{\\circ},50^{\\circ}, 0^{\\circ})\\). We also set \\(r_{tp}=1,\\forall t\\in\\mathcal{T},p\\in\\mathcal{P}\\) and all targets enforce \\(\\theta_{\\min}=7^{\\circ}\\). \\(\\mathcal{J}\\) follows the common RGT constellation distribution rule. We use the default duality gap of \\(0.01\\,\\%\\) for the Gurobi optimizer. The results indicate that the Gurobi optimizer successfully converges to optimal solutions for all ten RCRP-ARC instances within the runtime limit, thereby identifying the true Pareto front of the RCRP. The maximum runtime reported is \\(83.69\\,\\mathrm{s}\\) for \\(\\varepsilon/\\varepsilon_{\\max}=0.7\\). The Lagrangian heuristic method does not find optimal solutions for RCRP-ARC with \\(\\varepsilon/\\varepsilon_{\\max}\\geq 0.5\\), and the maximum relative underperformance is \\(0.95\\,\\%\\). Pareto front and runtime analyses are shown in Fig. 10. These results demonstrate that, for small-scale instances, the use of conventional MILP methods can effectively characterize the Pareto front. ## Acknowledgment This material is partially based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. DGE-2039655. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. The authors would like to express their gratitude to Sungwoo Kim and Da Eun Shim at Georgia Tech, and the anonymous reviewers, for their insightful suggestions. Fig. 10: Comparison of the Lagrangian heuristic method and the commercial optimizer for a small RCRP instance. ## References * [1] He, X., Li, H., Yang, L., and Zhao, J., \"Reconfigurable Satellite Constellation Design for Disaster Monitoring Using Physical Programming,\" _International Journal of Aerospace Engineering_, Vol. 2020, 2020, p. 8813685. [https://doi.org/10.1155/2020/8813685](https://doi.org/10.1155/2020/8813685). * [2] Chen, Y., Mahalec, V., Chen, Y., Liu, X., He, R., and Sun, K., \"Reconfiguration of satellite orbit for cooperative observation using variable-size multi-objective differential evolution,\" _European Journal of Operational Research_, Vol. 242, No. 1, 2015, pp. 10-20. [https://doi.org/10.1016/j.ejor.2014.09.025](https://doi.org/10.1016/j.ejor.2014.09.025). * [3] de Weck, O. L., de Neufville, R., and Chaize, M., \"Staged Deployment of Communications Satellite Constellations in Low Earth Orbit,\" _Journal of Aerospace Computing, Information, and Communication_, Vol. 1, No. 3, 2004, pp. 119-136. [https://doi.org/10.2514/1.6346](https://doi.org/10.2514/1.6346). * [4] Arnas, D., and Linares, R., \"Uniform Satellite Constellation Reconfiguration,\" _Journal of Guidance, Control, and Dynamics_, Vol. 45, No. 7, 2022, pp. 1241-1254. [https://doi.org/10.2514/1.G006514](https://doi.org/10.2514/1.G006514). * [5] Ferringer, M. P., Spencer, D. B., and Reed, P., \"Many-objective reconfiguration of operational satellite constellations with the Large-Cluster Epsilon Non-dominated Sorting Genetic Algorithm-II,\" _2009 IEEE Congress on Evolutionary Computation_, 2009, pp. 340-349. 
[https://doi.org/10.1109/CEC.2009.4982967](https://doi.org/10.1109/CEC.2009.4982967). * [6] Davis, J. J., \"Constellation reconfiguration: Tools and analysis,\" Ph.D. thesis, Texas A&M University, 2010. * [7] Fakoor, M., Bakhtiari, M., and Soleymani, M., \"Optimal design of the satellite constellation arrangement reconfiguration process,\" _Advances in Space Research_, Vol. 58, No. 3, 2016, pp. 372-386. [https://doi.org/10.1016/j.asr.2016.04.031](https://doi.org/10.1016/j.asr.2016.04.031). * [8] Denis, G., de Boissezon, H., Hosford, S., Pasco, X., Montfort, B., and Ranera, F., \"The evolution of Earth Observation satellites in Europe and its impact on the performance of emergency response services,\" _Acta Astronautica_, Vol. 127, 2016, pp. 619-633. [https://doi.org/10.1016/j.actaastro.2016.06.012](https://doi.org/10.1016/j.actaastro.2016.06.012). * [9] Voigt, S., Giulio-Tonolo, F., Lyons, J., Kucera, J., Jones, B., Schneiderhan, T., Platzeck, G., Kaku, K., Hazarika, M. K., Czaran, L., Li, S., Pedersen, W., James, G. K., Proy, C., Muthike, D. M., Bequignon, J., and Guha-Sapir, D., \"Global trends in satellite-based emergency mapping,\" _Science_, Vol. 353, No. 6296, 2016, pp. 247-252. [https://doi.org/10.1126/science.aad8728](https://doi.org/10.1126/science.aad8728). * [10] Wang, X., Wu, G., Xing, L., and Pedrycz, W., \"Agile Earth Observation Satellite Scheduling Over 20 Years: Formulations, Methods, and Future Directions,\" _IEEE Systems Journal_, Vol. 15, No. 3, 2021, pp. 3881-3892. [https://doi.org/10.1109/ISYST.2020.2997050](https://doi.org/10.1109/ISYST.2020.2997050). * [11] Paek, S. W., Kim, S., and de Weck, O., \"Optimization of Reconfigurable Satellite Constellations Using Simulated Annealing and Genetic Algorithm,\" _Sensors_, Vol. 19, No. 4, 2019. [https://doi.org/10.3390/s19040765](https://doi.org/10.3390/s19040765). * [12] McGrath, C. N., and Macdonald, M., \"General Perturbation Method for Satellite Constellation Reconfiguration Using Low-Thrust Maneuvers,\" _Journal of Guidance, Control, and Dynamics_, Vol. 42, No. 8, 2019, pp. 1676-1692. [https://doi.org/10.2514/1.G003739](https://doi.org/10.2514/1.G003739). * [13] Zhang, Z., Zhang, N., Jiao, Y., Baoyin, H., and Li, J., \"Multi-Tree Search for Multi-Satellite Responsiveness Scheduling Considering Orbital Maneuvering,\" _IEEE Transactions on Aerospace and Electronic Systems_, 2021, pp. 1-1. [https://doi.org/10.1109/TAES.2021.3129723](https://doi.org/10.1109/TAES.2021.3129723). * [14] Morgan, S. J., McGrath, C. N., and de Weck, O. L., \"Optimization of Multispacecraft Maneuvers for Mobile Target Tracking from Low Earth Orbit,\" _Journal of Spacecraft and Rockets_, Vol. 0, No. 0, pp. 1-10. [https://doi.org/10.2514/1.A35457](https://doi.org/10.2514/1.A35457), URL [https://doi.org/10.2514/1.A35457](https://doi.org/10.2514/1.A35457). * [15] Appel, L., Guelman, M., and Mishne, D., \"Optimization of satellite constellation reconfiguration maneuvers,\" _Acta Astronautica_, Vol. 99, 2014, pp. 166-174. [https://doi.org/10.1016/j.actaastro.2014.02.016](https://doi.org/10.1016/j.actaastro.2014.02.016). * [16] Legge Jr, R. S., \"Optimization and valuation of recongurable satellite constellations under uncertainty,\" Ph.D. thesis, Massachusetts Institute of Technology, 2014. * [17] de Weck, O. L., Scialom, U., and Siddiqi, A., \"Optimal reconfiguration of satellite constellations with the auction algorithm,\" _Acta Astronautica_, Vol. 62, No. 2, 2008, pp. 112-130. 
[https://doi.org/10.1016/j.actaastro.2007.02.008](https://doi.org/10.1016/j.actaastro.2007.02.008). * [18] Luders, R. D., \"Satellite networks for continuous zonal coverage,\" _ARS Journal_, Vol. 31, No. 2, 1961, pp. 179-184. [https://doi.org/10.2514/8.5422](https://doi.org/10.2514/8.5422). * [19] Luders, R., and Ginsberg, L., \"Continuous zonal coverage-a generalized analysis,\" _Mechanics and Control of Flight Conference_, 1974, p. 842. [https://doi.org/10.2514/6.1974-842](https://doi.org/10.2514/6.1974-842). * [20] Walker, J. G., \"Circular orbit patterns providing continuous whole earth coverage,\" Tech. rep., Royal Aircraft Establishment Farnborough (United Kingdom), 1970. * [21] Walker, J. G., \"Continuous whole-earth coverage by circular-orbit satellite patterns,\" Tech. rep., Royal Aircraft Establishment Farnborough (United Kingdom), 1977. * [22] Walker, J. G., \"Satellite constellations,\" _Journal of the British Interplanetary Society_, Vol. 37, 1984, pp. 559-572. * [23] Draim, J. E., \"A common-period four-satellite continuous global coverage constellation,\" _Journal of Guidance, Control, and Dynamics_, Vol. 10, No. 5, 1987, pp. 492-499. [https://doi.org/10.2514/3.20244](https://doi.org/10.2514/3.20244). * [24] Lee, H., Shimizu, S., Yoshikawa, S., and Ho, K., \"Satellite Constellation Pattern Optimization for Complex Regional Coverage,\" _Journal of Spacecraft and Rockets_, Vol. 57, No. 6, 2020, pp. 1309-1327. [https://doi.org/10.2514/1.A34657](https://doi.org/10.2514/1.A34657). * [25] Zhu, K.-J., Li, J.-F., and Baoyin, H.-X., \"Satellite scheduling considering maximum observation coverage time and minimum orbital transfer fuel cost,\" _Acta Astronautica_, Vol. 66, No. 1, 2010, pp. 220-229. [https://doi.org/10.1016/j.actaastro.2009.05.029](https://doi.org/10.1016/j.actaastro.2009.05.029). * [26] Mortari, D., Wilkins, M. P., and Bruccoleri, C., \"The Flower Constellations,\" _The Journal of the Astronautical Sciences_, Vol. 52, No. 1, 2004, pp. 107-127. [https://doi.org/10.1007/BF03546424](https://doi.org/10.1007/BF03546424). * [27] Avendano, M. E., Davis, J. J., and Mortari, D., \"The 2-D lattice theory of flower constellations,\" _Celestial Mechanics and Dynamical Astronomy_, Vol. 116, No. 4, 2013, pp. 325-337. [https://doi.org/10.1007/s10569-013-9493-8](https://doi.org/10.1007/s10569-013-9493-8). * [28] Bartholdi, J. J., Orlin, J. B., and Ratliff, H. D., \"Cyclic Scheduling via Integer Programs with Circular Ones,\" _Operations Research_, Vol. 28, No. 5, 1980, pp. 1074-1085. [https://doi.org/10.1287/opre.28.5.1074](https://doi.org/10.1287/opre.28.5.1074). * [29] Bartholdi, J. J., \"A Guaranteed-Accuracy Round-off Algorithm for Cyclic Scheduling and Set Covering,\" _Operations Research_, Vol. 29, No. 3, 1981, pp. 501-510. [https://doi.org/10.1287/opre.29.3.501](https://doi.org/10.1287/opre.29.3.501). * [30] Lee, H., and Ho, K., \"Binary Integer Linear Programming Formulation for Optimal Satellite Constellation Reconfiguration,\" _AAS/AIAA Astrodynamics Specialist Conference_, 2020. * [31] ReVelle, C., Scholssberg, M., and Williams, J., \"Solving the maximal covering location problem with heuristic concentration,\" _Computers & Operations Research_, Vol. 35, No. 2, 2008, pp. 427-435. [https://doi.org/10.1016/j.cor.2006.03.007](https://doi.org/10.1016/j.cor.2006.03.007), part Special Issue: Location Modeling Dedicated to the memory of Charles S. ReVelle. * [32] Megiddo, N., Zemel, E., and Hakimi, S. 
L., \"The Maximum Coverage Location Problem,\" _SIAM Journal on Algebraic Discrete Methods_, Vol. 4, No. 2, 1983, pp. 253-261. [https://doi.org/10.1137/0604028](https://doi.org/10.1137/0604028). * [33] Church, R., and ReVelle, C., \"The Maximal Covering Location Problem,\" _Papers in Regional Science_, Vol. 32, No. 1, 1974, pp. 101-118. [https://doi.org/10.1111/j.1435-5597.1974.tb00902.x](https://doi.org/10.1111/j.1435-5597.1974.tb00902.x). * [34] Avendano, M., and Mortari, D., \"A closed-form solution to the minimum \\(\\Delta V_{\\text{tot}}^{2}\\) Lambert's problem,\" _Celestial Mechanics and Dynamical Astronomy_, Vol. 106, No. 1, 2009, p. 25. [https://doi.org/10.1007/s10569-009-9238-x](https://doi.org/10.1007/s10569-009-9238-x). * [35] Vallado, D., _Fundamentals of Astrodynamics and Applications_, Space technology library, Microcosm Press, 2013, Chap. 6. * [36] Hoffman, A. J., and Kruskal, J. B., _Integral Boundary Points of Convex Polyhedra_, Princeton University Press, 1956, Vol. 38, pp. 223-246. [https://doi.org/10.1515/9781400881987-014](https://doi.org/10.1515/9781400881987-014). * [37] Kuhn, H. W., \"The Hungarian method for the assignment problem,\" _Naval Research Logistics Quarterly_, Vol. 2, No. 1-2, 1955, pp. 83-97. [https://doi.org/10.1002/nav.3800020109](https://doi.org/10.1002/nav.3800020109). * [38] Bertsekas, D. P., \"A new algorithm for the assignment problem,\" _Mathematical Programming_, Vol. 21, No. 1, 1981, pp. 152-171. [https://doi.org/10.1007/BF01584237](https://doi.org/10.1007/BF01584237). * [39] Bertsekas, D. P., and Eckstein, J., \"Dual coordinate step methods for linear network flow problems,\" _Mathematical Programming_, Vol. 42, No. 1, 1988, pp. 203-243. [https://doi.org/10.1007/BF01589405](https://doi.org/10.1007/BF01589405). * [40] Yacov, H. Y., Lasdon, L. S., and Wismer, D. A., \"On a Bicriterion Formulation of the Problems of Integrated System Identification and System Optimization,\" _IEEE Transactions on Systems, Man, and Cybernetics_, Vol. 1, No. 3, 1971, pp. 296-297. [https://doi.org/10.1109/TSMC.1971.4308298](https://doi.org/10.1109/TSMC.1971.4308298). * [41] Fisher, M. L., \"The Lagrangian Relaxation Method for Solving Integer Programming Problems,\" _Management Science_, Vol. 50, No. 12_supplement, 2004, pp. 1861-1871. [https://doi.org/10.1287/mnsc.1040.0263](https://doi.org/10.1287/mnsc.1040.0263). * [42] Held, M., and Karp, R. M., \"The traveling-salesman problem and minimum spanning trees: Part II,\" _Mathematical Programming_, Vol. 1, No. 1, 1971, pp. 6-25. [https://doi.org/10.1007/BF01584070](https://doi.org/10.1007/BF01584070). * [43] Held, M., Wolfe, P., and Crowder, H. P., \"Validation of subgradient optimization,\" _Mathematical Programming_, Vol. 6, No. 1, 1974, pp. 62-88. [https://doi.org/10.1007/BF01580223](https://doi.org/10.1007/BF01580223). * [44] Bertsimas, D., and Tsitsiklis, J., _Introduction to linear optimization_, Athena Scientific, 1997. * [45] Guignard, M., _Lagrangian Relaxation_, Springer US, Boston, MA, 2013, pp. 845-860. [https://doi.org/10.1007/978-1-4419-1153-7_1168](https://doi.org/10.1007/978-1-4419-1153-7_1168). * [46] Geoffrion, A. M., _Lagrangean relaxation for integer programming_, Springer Berlin Heidelberg, Berlin, Heidelberg, 1974, pp. 82-114. [https://doi.org/10.1007/BFb0120690](https://doi.org/10.1007/BFb0120690). * [47] Golkar, A., and Lluch i Cruz, I., \"The Federated Satellite Systems paradigm: Concept and business case evaluation,\" _Acta Astronautica_, Vol. 111, 2015, pp. 230-248. 
[https://doi.org/10.1016/j.actaastro.2015.02.009](https://doi.org/10.1016/j.actaastro.2015.02.009). * [48] Lee, H., and Ho, K., \"A Lagrangian Relaxation-Based Heuristic Approach to Regional Constellation Reconfiguration Problem,\" _AAS/AIAA Astrodynamics Specialist Conference_, 2021. * [49] Galvao, R. D., and ReVelle, C., \"A Lagrangean heuristic for the maximal covering location problem,\" _European Journal of Operational Research_, Vol. 88, No. 1, 1996, pp. 114-123. [https://doi.org/10.1016/0377-2217(94)00159-6](https://doi.org/10.1016/0377-2217(94)00159-6).
A group of satellites, with either homogeneous or heterogeneous orbital characteristics and/or hardware specifications, can undertake a reconfiguration process due to variations in operations pertaining to Earth observation missions. This paper investigates the problem of optimizing a satellite constellation reconfiguration process against two competing mission objectives: (i) the maximization of the total coverage reward and (ii) the minimization of the total cost of the transfer. The decision variables for the reconfiguration process include the design of the new configuration and the assignment of satellites from one configuration to another. We present a novel bi-objective integer linear programming formulation that combines constellation design and transfer problems. The formulation lends itself to the use of generic mixed-integer linear programming (MILP) methods such as the branch-and-bound algorithm for the computation of provably optimal solutions; however, these approaches become computationally prohibitive even for moderately sized instances. In response to this challenge, this paper proposes a Lagrangian relaxation-based heuristic method that leverages the assignment problem structure embedded in the problem. The results from the computational experiments attest to the near-optimality of the Lagrangian heuristic solutions and a significant improvement in the computational runtime compared to a commercial MILP solver.
# Leveraging band diversity for feature selection in EO data

Sadia Hussain (0000-0002-5507-1063), Bharti School of Telecommunication Technology and Management, IIT Delhi, Hauz Khas, New Delhi, India, [email protected]

Brejesh Lall (0000-0003-2677-3071), Bharti School of Telecommunication Technology and Management, and Electrical Engineering Department, IIT Delhi, Hauz Khas, New Delhi, India, [email protected]

## 1 Introduction

A hyperspectral imaging device collects spectral information using hundreds of narrow bands and combines it with digital imagery. When an object interacts with light at different wavelengths, this technology captures the unique physical and chemical characteristics of the material. As a result, hyperspectral imaging finds numerous applications in earth observation, including precision agriculture, climate monitoring, and remote sensing. However, this technology comes with several challenges. With information spread across a wide range of narrow bands, similar yet distinct objects can be mistakenly categorized as the same. Additionally, there is a bottleneck in transmitting these images when captured by a sensor, which often necessitates processing these extensive bands. This processing may degrade the spatial resolution, the spectral resolution, or both. The large size of hyperspectral imaging (HSI) data introduces multiple processing issues, such as increased computational cost, complexity in image analysis, and a scarcity of available training data. Collecting large datasets can be difficult due to limitations of acquisition devices, and storing this vast amount of information can also be cumbersome. Consequently, in restoration-based problems, machine learning or computer vision-based solutions often prove inadequate at these early stages due to the need to process a large number of bands.

In hyperspectral images, the high dimensionality results from the large number of bands. Effective hyperspectral band management is crucial for revealing the unique features of objects. Consequently, two main approaches are commonly found in the literature: band selection (or feature selection) and band extraction (or feature extraction). Band selection involves using significant or representative bands. Based on the physical and chemical characteristics of the material, these representative bands are selected to compactly represent hyperspectral images. On the other hand, band extraction reduces the higher dimensionality of hyperspectral bands into a lower dimensionality, which can cause hyperspectral images to lose their physical significance.

In this position paper, we devise a unified band selection approach for controlling hyperspectral bands by applying a grouping strategy. This grouping strategy enhances the performance of any machine learning-based method used for resolution enhancement. Our paper outlines three important areas of grouping diversity by employing: (1) Sampling method: we use Determinantal Point Processes to provide insights for the selection of diverse bands. (2) Spectral correlations: we group strongly correlated bands together based on their spectral characteristics. (3) Spectral Angle Mapper analysis: for overlapping bands within a group, we use the spectral angle mapper to disentangle bands based on more precise similarity measurements. This unified approach aims to optimize the use of hyperspectral bands, improving the accuracy and efficiency of hyperspectral image analysis in various applications.
## 2 Related Works

A large body of literature applies the above two approaches to image classification. Within band selection methods, three further sub-categories have been devised based on how the subset of bands is derived [14]: subsets derived on the basis of subset evaluation criteria [3, 15, 2, 8, 19]; on the availability of prior information, further sub-categorized into supervised [5, 18] and unsupervised [13, 6] selection criteria; and on the selection strategy (individual [4, 7] and other evaluation techniques) used to create the band subset.

Figure 1: Schematic view of the proposed approach.

Efficient band selection plays a crucial role in extracting meaningful information from vast datasets. By selecting a subset of relevant spectral bands, researchers can reduce data dimensionality, enhance computational efficiency, and improve the performance of downstream analysis tasks such as restoration, classification, and target detection. However, achieving optimal band selection poses significant challenges, necessitating innovative approaches that leverage spectral grouping techniques. The methods in [16] and [11] address this need with novel methodologies: the former introduces a method based on neighborhood grouping to efficiently identify relevant bands, while the latter proposes leveraging inter-group differences for band selection. The work in [1] presents a learning-based optimization approach for band selection tailored specifically to classification tasks. Despite their novelty, these approaches face challenges in terms of computational efficiency, interpretability, and scalability. Furthermore, in the context of hyperspectral image restoration, particularly super-resolution, no definitive solutions have emerged regarding grouping methodologies. This underscores a critical gap in current research, highlighting the need for further exploration and development in this area. Addressing these challenges is crucial for an efficient grouping algorithm, advancing hyperspectral imaging capabilities and realizing the full potential of band selection techniques in various applications.

Recent advancements in hyperspectral image analysis focus on correlation matrices, which reveal spectral dependencies and enable high-texture detail within groups of spectrally dependent vectors. Traditionally, linear predictions [12] have dominated hyperspectral data analysis, but recent studies advocate interval sampling [17] for superior group formation and enhanced interpolation outcomes. Our study proposes an explicit grouping method based on correlation coefficients to establish a standardized framework for hyperspectral super-resolution. By integrating correlation analysis into our grouping strategy, each band within a correlated group is rigorously evaluated for its significance in super-resolution. These explicit groups, guided by correlation analysis, feed into deep neural network architectures, optimizing interpolation learning efficiency. Our innovation lies in combining the Determinantal Point Process (DPP) with correlation analysis to form coherent band subsets, minimizing redundancy and maximizing diversity. This integration enriches interpolation learning by capturing intricate spectral relationships effectively. Challenges arise when grouped bands still overlap significantly despite spectral correlation.

Figure 2: Correlation matrices for the (a) NTIRE2022, (b) CAVE, (c) Chikusei, (d) Sentinel, and (e) Landsat datasets.
To mitigate this, our approach employs the Spectral Angle Mapper (SAM) to resolve overlaps based on the lowest SAM values, refining the grouping strategy. This modular integration enhances cohesion, learning robustness, and efficiency in handling complex hyperspectral data, promoting seamless interaction within the network and improving restoration outcomes.

```
Input: \\(X\\), \\(S\\)
Output: \\(Z\\)
1: \\(Z\\in\\mathbb{R}^{N\\times WH}\\)
2: \\(X\\in\\mathbb{R}^{N\\times wh}\\)
3: Initialize \\(B\\in\\mathbb{R}^{N\\times n}\\)
4: Initialize \\(M\\in\\mathbb{R}^{n\\times WH}\\)
5: \\(X=ZS\\)
6: for each band pair \\(i,j\\) do
7:    \\(b_{i}=f_{i}(t)\\)
8:    \\(b_{j}=f_{j}(t)\\)
9:    \\(R_{b,b}(i,j)=\\frac{E[b_{i},b_{j}]}{\\sigma_{i}\\sigma_{j}}\\)
10: end for
11: \\(Z=B\\cdot M\\)
12: Solve for \\(B\\) and \\(M\\)
```
**Algorithm 1** Spectral correlation estimation

## 3 Methodology

This section elaborates our proposed approach for band grouping. Our method comprises three primary components: extracting primary grouping information based on correlation analysis, extracting critical band information using DPP, and resolving overlapping bands using spectral angle mapper information. The technical specifics of these components are elucidated in the following discussion.

### Spectral Correlation in HSI

Correlation functions in hyperspectral imaging provide valuable insights into the relationships between spectral bands and spatial locations in hyperspectral data. These correlation functions can help understand how the spectral characteristics of different bands are related and can be used for various purposes, including feature selection, dimensionality reduction, and image interpretation. As illustrated in Algorithm 1, we denote the high-resolution hyperspectral image (HR-HSI) data as \\(Z\\in\\mathbb{R}^{W\\times H\\times N}\\) and the low-resolution counterpart (LR-HSI) as \\(X\\in\\mathbb{R}^{w\\times h\\times N}\\). The LR-HSI \\(X\\) is obtained from \\(Z\\) through spatial downsampling using matrix \\(S\\) such that \\(X=ZS\\), where \\(S\\in\\mathbb{R}^{WH\\times wh}\\). To recover \\(Z\\), which captures the spatial and spectral details of the original HR-HSI, we propose a method involving spectral basis \\(B\\in\\mathbb{R}^{N\\times n}\\) and a correlation matrix \\(M\\in\\mathbb{R}^{n\\times WH}\\): \\(Z=BM\\). Here, \\(B\\) represents spectral basis vectors, and \\(M\\) incorporates correlation coefficients. This formulation aims to find optimal values for \\(B\\) and \\(M\\) to reconstruct \\(Z\\). Understanding the spatial and spectral correlations inherent in hyperspectral images is crucial. Bands that are closer together often exhibit similar patterns due to underlying scene properties. This correlation structure is assessed through auto-correlation measures \\(R_{b,b}(i,j)\\) in Algorithm 1.
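As a concrete reference for Algorithm 1, the band correlation matrix can be computed with a few lines of numpy. The greedy threshold-based grouping shown afterwards is our own illustrative rule, since the exact grouping criterion is left open here.

```python
import numpy as np

def band_correlation_matrix(hsi):
    """Pearson correlation R[i, j] between all band pairs of an HSI cube.
    hsi: array of shape (H, W, N); each of the N bands is flattened so that
    R mirrors the auto-correlation measure R_{b,b}(i, j) of Algorithm 1."""
    H, W, N = hsi.shape
    bands = hsi.reshape(-1, N).T          # (N, H*W): one row per band
    return np.corrcoef(bands)             # (N, N) correlation matrix

def group_by_correlation(R, tau=0.9):
    """Greedy grouping: a band joins the first group whose seed band it
    correlates with above tau, otherwise it seeds a new group."""
    groups = []
    for b in range(R.shape[0]):
        for g in groups:
            if R[b, g[0]] >= tau:
                g.append(b)
                break
        else:
            groups.append([b])
    return groups

cube = np.random.rand(64, 64, 31)         # toy 31-band cube
R = band_correlation_matrix(cube)
print([len(g) for g in group_by_correlation(R, tau=0.2)])
```

The k-DPP sampling of Algorithm 2, presented next, then draws a diverse subset of bands from such a correlation structure.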
``` 1:Input: Hyperspectral data \\(L\\) 2:Output: Subset of bands \\(Y\\) of size \\(k\\) 3: Initialize \\(k\\)-DPP for sampling band subsets 4: Set cardinality constraint \\(k\\) 5: Compute the probability \\(P_{L}^{k}(Y)\\) of selecting a subset \\(Y\\) of size \\(k\\) from \\(L\\) using k-DPP: \\[P_{L}(\\mathbf{Y}=Y)=\\frac{\\det(L_{Y})}{\\sum_{\\mathbf{Y}\\subseteq\\gamma}\\det(L_ {Y})}=\\frac{\\det(L_{Y})}{\\det(L+I_{N})}\\] 6: Decompose \\(L_{Y}\\) into eigenvectors \\(S\\) and eigenvalues \\(\\lambda_{n}\\) 7: Select subset of eigenvectors based on eigenvalues: \\[P(n\\in S)=\\frac{\\lambda_{n}}{e_{k}^{n}}\\sum_{|S^{\\prime}|=k-1}\\prod_{n^{\\prime} \\in S^{\\prime}}\\lambda_{n^{\\prime}}=\\lambda_{n}\\frac{e_{k-1}^{n-1}}{e_{k}^{n}}\\] 8: Ensure selected subset represents the most relevant and informative components 9:returnSubset of bands \\(Y\\) of size \\(k\\) ``` **Algorithm 2** k-DPP Sampling As illustrated in Algorithm 1, each element in \\(R_{b,b}(i,j)\\) represents the correlation between band \\(i\\) and band \\(j\\). This analysis aids in identifying which bands carry similar information essential for feature selection. A higher value indicates that these bands change in a similar manner across pixels, suggesting they may capture similar spectral information. Conversely, a lower value or near-zero covariance indicates that the bands change independently. ### Determinantal Point Processes Our proposed method extends the original k-DPP approach [9, 10] for sampling diverse band subsets from hyperspectral data, represented by the correlation matrix \\(L\\) as illustrated in Algorithm 2. It ensures a subset of size \\(k\\) with diverse bands via the probability \\(P_{L}^{k}(Y)\\), which conditions on subsets' diversity captured by \\(L_{Y}\\) and the eigenvalues \\(\\lambda_{n}\\), ensuring relevance and diversity in the selected subsets. Eigenvectors \\(S\\) are selected based on eigenvalues \\(\\lambda_{n}\\) to capture dataset variability effectively. ### SAM for overlapping Bands In the context of grouping strategies in HSI, SAM is employed to handle overlapping bands within a group. Overlapping bands can complicate the analysis as they may contain redundant information or obscure the distinct spectral features. By applying SAM as illustrated in Algorithm 3, we can measure the precise similarity between bands within a group and disentangle those that are too similar. 
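Before turning to Algorithm 3 below, the k-DPP sampler of Algorithm 2 can be written as runnable code. The sketch follows the standard two-phase sampler of Kulesza and Taskar; reusing the band correlation matrix as the kernel \\(L\\) is an illustrative assumption.

```python
import numpy as np

def sample_k_dpp(L, k, rng=np.random.default_rng(0)):
    """Two-phase k-DPP sampler: pick k eigenvectors, then k items.
    L: (N, N) positive semidefinite kernel; returns k diverse band indices."""
    lam, V = np.linalg.eigh(L)
    lam = np.clip(lam, 0.0, None)              # guard numerical negatives
    N = lam.size
    # Elementary symmetric polynomials e_l over the first n eigenvalues
    E = np.zeros((k + 1, N + 1))
    E[0, :] = 1.0
    for l in range(1, k + 1):
        for n in range(1, N + 1):
            E[l, n] = E[l, n - 1] + lam[n - 1] * E[l - 1, n - 1]
    # Phase 1: keep eigenvector n with prob lam_n * e_{l-1}^{n-1} / e_l^n
    idx, l = [], k
    for n in range(N, 0, -1):
        if l == 0:
            break
        if rng.random() < lam[n - 1] * E[l - 1, n - 1] / E[l, n]:
            idx.append(n - 1)
            l -= 1
    W = V[:, idx]                              # basis of the elementary DPP
    # Phase 2: sample one item at a time, projecting the basis away from it
    Y = []
    while W.shape[1] > 0:
        p = (W ** 2).sum(axis=1) / W.shape[1]  # marginal of the next item
        i = rng.choice(N, p=p)
        Y.append(i)
        j = np.argmax(np.abs(W[i, :]))
        W = W - np.outer(W[:, j] / W[i, j], W[i, :])
        W = np.delete(W, j, axis=1)
        if W.shape[1] > 0:
            W, _ = np.linalg.qr(W)             # re-orthonormalize the basis
    return sorted(Y)

R = np.corrcoef(np.random.rand(31, 4096))      # toy 31-band kernel
print(sample_k_dpp(R, k=5))                    # five diverse band indices
```

Algorithm 3, shown next, then resolves residual overlaps within a group via SAM.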
``` 1:functionCalculateSAM(\\(\\vec{v}_{1},\\vec{v}_{2}\\)) 2:\\(numerator\\leftarrow\\vec{v}_{1}\\cdot\\vec{v}_{2}\\)\\(\\triangleright\\) Dot product of vectors 3:\\(denominator\\leftarrow\\|\\vec{v}_{1}\\|\\cdot\\|\\vec{v}_{2}\\|\\)\\(\\triangleright\\) Product of norms 4:\\(angle\\leftarrow\\arccos\\big{(}\\frac{numerator}{denominator}\\big{)}\\)\\(\\triangleright\\) Calculate angle 5:return\\(angle\\) 6:endfunction 7:\\(diverse\\_set\\leftarrow\\{k\\}\\)\\(\\triangleright\\) Diverse set elements 8:\\(sam\\_values\\leftarrow\\vec{0}_{|diverse\\_set|\\times 31}\\)\\(\\triangleright\\) Initialize SAM values matrix 9:for\\(i\\in\\{1,\\ldots,|diverse\\_set|\\}\\)do 10:for\\(j\\in\\{1,\\ldots,31\\}\\)do 11:if\\(diverse\\_set[i]\ eq j\\)then 12:\\(sam\\_values[i,j]\\leftarrow\\) CalculateSAM(\\(band\\_data_{1},band\\_data_{2}\\)) 13:else 14:\\(sam\\_values[i,j]\\leftarrow\\) NaN 15:endif 16:endfor 17:endfor ``` **Algorithm 3** Calculate Spectral Angle Mapper (SAM) ## 4 Conclusion In conclusion, our proposed approach, which utilizes Determinantal Point Processes (DPP) and spectral angle mapping (SAM), offers a promising direction for addressing the complexities of hyperspectral imaging (HSI). By applying these methodologies, we aim to enhance the effectiveness of spectral band selection and optimize hyperspectral image reconstruction. This approach seeks to mitigate redundancy while striving to improve the efficiency and accuracy of analysis techniques. Future research into advanced grouping strategies will be essential for overcoming remaining computational and interpretative hurdles, thereby unlocking the full potential of HSI in various earth observation applications. ## References * [1] Ayna, C.O., Mdrafi, R., Du, Q., Gurbuz, A.C.: Learning-based optimization of hyperspectral band selection for classification. Remote Sensing **15**(18), 4460 (2023) * [2] Bhardwaj, K., Patra, S.: An unsupervised technique for optimal feature selection in attribute profiles for spectral-spatial classification of hyperspectral images. ISPRS journal of photogrammetry and remote sensing **138**, 139-150 (2018) * [3] Chang, C.I., Du, Q., Sun, T.L., Althouse, M.L.: A joint band prioritization and band-decorrelation approach to band selection for hyperspectral image classification. IEEE transactions on geoscience and remote sensing **37**(6), 2631-2641 (1999) * [4] Datta, A., Ghosh, S., Ghosh, A.: Combination of clustering and ranking techniques for unsupervised band selection of hyperspectral images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing **8**(6), 2814-2823 (2015) * [5] Guo, B., Damper, R.I., Gunn, S.R., B. Nelson, J.D.: Improving hyperspectral band selection by constructing an estimated reference map. Journal of Applied Remote Sensing **8**(1), 083692-083692 (2014) * [6] Jia, S., Ji, Z., Qian, Y., Shen, L.: Unsupervised band selection for hyperspectral imagery classification without manual band removal. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing **5**(2), 531-543 (2012) * [7] Jia, S., Tang, G., Zhu, J., Li, Q.: A novel ranking-based clustering approach for hyperspectral band selection. IEEE Transactions on Geoscience and Remote Sensing **54**(1), 88-102 (2015) * [8] Keshava, N.: Distance metrics and band selection in hyperspectral processing with applications to material identification and spectral libraries. 
IEEE Transactions on Geoscience and Remote Sensing **42**(7), 1552-1565 (2004) * [9] Kulesza, A., Taskar, B.: k-dpps: Fixed-size determinantal point processes. In: Proceedings of the 28th International Conference on Machine Learning (ICML-11). pp. 1193-1200 (2011) * [10] Kulesza, A., Taskar, B., et al.: Determinantal point processes for machine learning. Foundations and Trends in Machine Learning **5**(2-3), 123-286 (2012) * [11] Li, S., Peng, B., Fang, L., Zhang, Q., Cheng, L., Li, Q.: Hyperspectral band selection via difference between intergroups. IEEE Transactions on Geoscience and Remote Sensing **61**, 1-10 (2023). [https://doi.org/10.1109/TGRS.2023.3242239](https://doi.org/10.1109/TGRS.2023.3242239) * [12] Manolakis, D., Lockwood, R., Cooley, T.: On the spectral correlation structure of hyperspectral imaging data. In: IGARSS 2008-2008 IEEE International Geoscience and Remote Sensing Symposium. vol. 2, pp. II-581. IEEE (2008) * [13] Sawant, S.S., Manoharan, P.: Unsupervised band selection based on weighted information entropy and 3d discrete cosine transform for hyperspectral image classification. International Journal of Remote Sensing **41**(10), 3948-3969 (2020) * [14] Sawant, S.S., Prabukumar, M.: A survey of band selection techniques for hyperspectral image classification. Journal of Spectral Imaging **9** (2020) * [15] Tschannerl, J., Ren, J., Yuen, P., Sun, G., Zhao, H., Yang, Z., Wang, Z., Marshall, S.: Mimr-dgsa: Unsupervised hyperspectral band selection based on information theory and a modified discrete gravitational search algorithm. Information Fusion **51**, 189-200 (2019) * [16] Wang, Q., Li, Q., Li, X.: A fast neighborhood grouping method for hyperspectral band selection. IEEE Transactions on Geoscience and Remote Sensing **59**(6), 5028-5039 (2021). [https://doi.org/10.1109/TGRS.2020.3011002](https://doi.org/10.1109/TGRS.2020.3011002) * [17] Wang, X., Cheng, Y., Mei, X., Jiang, J., Ma, J.: Group shuffle and spectral-spatial fusion for hyperspectral image super-resolution. IEEE Transactions on Computational Imaging **8**, 1223-1236 (2022) * [18] Yang, H., Du, Q., Su, H., Sheng, Y.: An efficient method for supervised hyperspectral band selection. IEEE Geoscience and Remote Sensing Letters **8**(1), 138-142 (2010) * [19] Zhang, W., Li, X., Zhao, L.: A fast hyperspectral feature selection method based on band correlation analysis. IEEE Geoscience and Remote Sensing Letters **15**(11), 1750-1754 (2018)
Hyperspectral imaging (HSI) is a powerful earth observation technology that captures and processes information across a wide spectrum of wavelengths. Hyperspectral imaging provides comprehensive and detailed spectral data that is invaluable for a wide range of reconstruction problems. However, due to the complexity of its analysis, this data often becomes difficult to handle. To address the challenge of handling the large number of bands when reconstructing high-quality HSI, we propose to form groups of bands. In this position paper we propose a method of selecting diverse bands using determinantal point processes within correlated bands. To address the issue of overlapping bands that may arise from grouping, we use spectral angle mapper analysis. The resulting groups can be fed to any machine learning model to enable detailed analysis and monitoring with high precision and accuracy. Keywords: Earth Observation, Hyperspectral Super-resolution, Machine Learning.
# Highlights

DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping

Zequan Chen, Jianping Li, Qusheng Li, Zhen Dong, Bisheng Yang

* Incorporating a spatial-spectral feature aggregation module boosts the network's ability to perceive the spatial distribution of cameras and enhances the global regression capability for camera poses. * Introducing an outlier rejection module according to global consistency, which effectively generates a reliability evaluation score for each feature correspondence. * DeepAAT can efficiently process hundreds of UAV images simultaneously, marking a significant breakthrough in enhancing the applicability of deep learning-based AAT algorithms.

# DeepAAT: Deep Automated Aerial Triangulation for Fast UAV-based Mapping

Zequan Chen, Jianping Li, Qusheng Li, Zhen Dong, and Bisheng Yang

## 1 Introduction

Automated Aerial Triangulation (AAT) is a basic task in photogrammetry and holds substantial research significance (Tanathong and Lee, 2014). It serves as the initial step in the 3D reconstruction pipeline of aerial images (Zhong et al., 2023). AAT's primary role involves simultaneously recovering the camera poses and reconstructing sparse 3D points in the scene. These foundational outputs facilitate subsequent dense image matching and 3D modeling procedures (Jiang et al., 2021). The derived camera poses and scene models find diverse applications in digital mapping (Hasheminasab et al., 2022), virtual reality (Jiang et al., 2020), and smart cities (Zhou et al., 2020). With a research history spanning decades (Schenk, 1997), classical AAT algorithms can be primarily categorized into two groups: incremental style and global style (Schonberger and Frahm, 2016). Furthermore, the evolution of deep learning has given rise to numerous supervised AAT algorithms (Xiao et al., 2022). The existing AAT methods are reviewed as follows.

### Classic Automated Aerial Triangulation

The first step of classic AAT is to perform feature extraction and matching for all input images. The following steps differ between the global style and the incremental style. Global AAT can predict all camera poses and scene structure at once (Govindu, 2004). In AAT algorithms, Bundle Adjustment (BA) (Triggs et al., 2000) is the most time-consuming part. Global AAT only requires executing BA once, resulting in higher efficiency. However, it can be difficult to eliminate outliers, resulting in poor robustness and scene integrity. Incremental AAT was first proposed by Snavely et al. (2006), with the key lying in selecting a good initial matching image pair (Beder and Steffen, 2006). Afterward, incremental AAT adds a new image to the system sequentially, followed by Perspective-n-Points (PnP) (Lepetit et al., 2009), Triangulation (Hartley and Sturm, 1997), and local BA. Incremental AAT requires multiple BA runs, resulting in low reconstruction efficiency in situations with a large number of images (Zhu et al., 2017). In addition, due to the accumulation of errors, the reconstructed scene is prone to drift issues. Compared to general scenes, UAV images exhibit distinctive characteristics, including large volumes, high resolutions, and significant overlap. Within the realm of classic AAT algorithms, incremental methods have emerged as the standard approach for UAV image AAT due to their superior robustness against outliers and ability to provide comprehensive results.
To address the challenges posed by large-scale UAV image sets, most state-of-the-art AAT methods employ a divide-and-conquer strategy. This strategy begins by segmenting the UAV image set into blocks based on GPS information, followed by the fusion of all blocks to yield globally consistent large-scale results. Noteworthy contributions in this field include the work by Chen et al. (2020), which employed the maximum spanning tree to expand images after dividing the scene map into smaller segments with a certain degree of overlap, thereby enhancing connectivity and scene map integrity. Similarly, Xu et al. (2021) introduced a hierarchical approach that constructed a binary tree using images as leaf nodes, subsequently fusing spatial triads and scenes from the bottom up. This method offers advantages in terms of robustness, accuracy, and efficiency. Likewise, Bhowmick et al. (2017) initially organized images into hierarchical trees using clustering methods, then addressed the AAT problem for large-scale images by reconstructing each small image set and merging them into a common reference framework. Snavely et al. (2008) partitioned extensive scenes by computing a small skeletal set of images and reconstructing this skeletal set. This approach reduces the number of parameters under consideration and enhances reconstruction efficiency. To sum up, global AAT offers high efficiency but suffers from poor robustness and scene integrity. On the other hand, incremental AAT exhibits high robustness and accuracy but tends to have relatively lower time efficiency.

### Supervised Automated Aerial Triangulation

Recognizing the limitations encountered by classical AAT algorithms, an increasing number of studies are exploring the application of deep learning methods to address these challenges. Many existing deep learning methods directly regress the depth map and pose of a monocular camera (Zhou et al., 2017), and usually rely heavily on prior information for prediction. In addition, because the correlation between depth and pose is not considered, the generalization ability of these networks is limited, making it difficult to obtain ideal prediction results. BA-Net (Tang and Tan, 2018) attempts to use feature-metric BA to solve the AAT problem. It makes end-to-end training possible by designing a differentiable LM (Levenberg-Marquardt) optimization algorithm, but the LM algorithm occupies a large amount of memory and has low computational efficiency. DeepSfM (Wei et al., 2020) can simultaneously regress the pose and depth maps corresponding to the image; however, it requires coarse poses and depth maps for initialization and has high GPU requirements, making it difficult to scale up for high-resolution images and large-scale environments. DeepMLE (Xiao et al., 2022) does not require initial values as input; it expresses the two-view AAT problem as maximum likelihood estimation, learning the relative pose of the two views by maximizing the likelihood function of the correlation. Similarly, for the problem of binocular vision estimation, Wang et al. (2021) proposed a dense optical flow estimation network for prediction between two frames, which includes a scale-invariant depth estimation module and can simultaneously calculate the relative camera pose from the 2D optical flow correspondences. DRO (Gu et al., 2021) is an optimization method based on recurrent neural networks that iteratively updates camera pose and image depth to minimize feature measurement errors.
Zhuang and Chandraker (2021) used a self-attention graph neural network to enhance interactions between different correspondences and potentially model complex relationships between points to drive learning. MOAC (Wu et al., 2022) introduces a grouped dual cost enhancement module, which enhances the spatial semantic information and channel relationships of costs, making the optimization more robust to noise. Moran et al. (2021) proposed a new approach to solve AAT problems using deep learning. They use matched feature points as input and, after permutation equivariant networks, predict the pose of each camera and the 3D points in the scene. Compared to many existing deep learning AAT methods, it can be applied to large-scale reconstruction tasks in an unsupervised manner. However, it has two main drawbacks. The first is that it cannot eliminate incorrectly matched point pairs, which means all input pairs must be correct, a condition that is usually difficult to achieve. The second is that its prediction results are still not satisfactory because of its limited generalizability. In summary, most of the existing supervised methods can only handle a small number of low-resolution images, and their regression performance is also poor, lacking usability and practicality. Hence, the proposed DeepAAT addresses the existing challenges encountered by both classic and learning-based AAT algorithms, and presents a meticulously designed deep network tailored for UAV imagery, emphasizing efficiency, scene completeness, and practical applicability. The main contributions of this study are threefold: (1) DeepAAT incorporates a spatial-spectral feature aggregation module, specifically combining both the spatial layout and spectral characteristics of an image set. This module boosts the network's ability to perceive the spatial arrangement of cameras and enhances the global regression capability for poses. (2) DeepAAT introduces an outlier rejection module according to global consistency, which effectively generates a reliability evaluation score for each feature correspondence. This approach facilitates the efficient and precise elimination of erroneous matching pairs, thereby ensuring accuracy and reliability throughout the entire 3D reconstruction process. (3) DeepAAT can efficiently process hundreds of UAV images simultaneously, marking a significant breakthrough in enhancing the applicability of deep learning-based AAT algorithms. Furthermore, through a block fusion strategy, DeepAAT can be effectively scaled up for large-scale scenarios. The rest of this paper is structured as follows. The preliminaries for our system are provided in Section 2. A brief system overview including the hardware and software structure is provided in Section 3. A detailed description of DeepAAT is presented in Section 4 and experiments are conducted on UAV image datasets in Section 5. Conclusions and future work are drawn in Section 6.

## 2 Preliminary

### Problem Definition of Automated Aerial Triangulation

The task of AAT refers to estimating the camera poses and 3D scene points corresponding to the 2D observations on the images. In classic photogrammetry, it is well studied and understood that the relative camera poses and 3D scene points can be solved with only the 2D observations (He and Habib, 2018). The absolute camera poses and 3D scene points, referenced to the geodetic framework, can then be obtained with Ground Control Points (GCPs) or the GPS mounted on the UAV (Li et al., 2019).
Assume that the stationary targeted survey area is viewed by \\(M\\) images, which are captured by the camera with known pre-calibrated intrinsic parameter \\(\\mathbf{K}\\) at different places along the UAV survey mission. The \\(M\\) unknown camera poses are represented by a set of projection matrices \\(\\mathcal{P}=\\{\\mathbf{P}_{m}|m=1,\\ldots,M\\}\\). Each projection matrix \\(\\mathbf{P}_{m}\\) with the size of \\(3\\times 4\\) is constructed by camera rotation \\(\\mathbf{R}_{m}\\in SO(3)\\) (corresponding quaternion is \\(\\mathbf{q}_{m}\\)) and position \\(\\mathbf{t}_{m}\\in\\mathbb{R}^{3}\\) according to \\(\\mathbf{P}_{m}=[\\mathbf{R}_{m}|\\mathbf{t}_{m}]\\). Given \\(N\\) 3D scene points in the targeted survey area \\(\\mathcal{F}=\\{\\mathbf{F}_{n}|n=1,\\ldots,N\\}\\), each 3D scene point is written as \\(\\mathbf{F}_{n}=[\\mathbf{F}_{n}^{1},\\mathbf{F}_{n}^{2},\\mathbf{F}_{n}^{3},1]^{\\top}\\) in homogeneous coordinates. If \\(\\mathbf{F}_{n}\\) can be observed by the \\(m^{th}\\) image, its projection on the \\(m^{th}\\) image is given by Eq.(1). As the depth information \\(\\lambda_{m,n}\\) is lost during the projection, \\(\\mathbf{f}_{m,n}\\) is an up-to-scale bearing vector. \\[\\mathbf{f}_{m,n}=[\\mathbf{f}_{m,n}^{1},\\mathbf{f}_{m,n}^{2},1]^{\\top}=\\frac{1}{\\lambda_{m,n}}\\mathbf{K}\\mathbf{P}_{m}\\mathbf{F}_{n}. \\tag{1}\\] In a typical AAT procedure, the initial step involves 2D feature detection and matching between image pairs, which is carried out using the widely used Scale-Invariant Feature Transform (SIFT) (Lowe, 2004) or other robust feature detectors and descriptors (Dusmanu et al., 2019). This step is not the main focus of this work. Subsequently, a set of 2D feature tracks denoted as \\(\\mathcal{T}=\\{\\mathbf{T}_{n}|n=1,\\ldots,N\\}\\) is used as the input for our algorithm. It should be noted that track \\(\\mathbf{T}_{n}\\) corresponds to the 3D feature \\(\\mathbf{F}_{n}\\) and is constructed by a set of 2D observations from different images using Eq.(2): \\[\\mathbf{T}_{n}=\\{\\mathbf{f}_{m,n}|m\\in\\mathcal{J}_{n}\\}, \\tag{2}\\] where \\(\\mathcal{J}_{n}\\) denotes the set of images that can observe the 3D feature \\(\\mathbf{F}_{n}\\). The tracks can then be used to recover the camera poses and 3D scene points via existing incremental (Schonberger and Frahm, 2016) or global (Moulon et al., 2013) AAT strategies.

### Projective Factorization

Despite the mainstream AAT methods listed above, projective factorization (Sturm and Triggs, 1996) is also a long-established method in AAT. We provide a brief introduction to projective factorization, as it forms the foundation for the operation of the proposed network.
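Before these projections are assembled into the factorization below, the projection of Eq.(1) can be made concrete with a small numpy sketch; the camera and point values are made up for illustration.

```python
import numpy as np

def project(K, R, t, F_xyz):
    """Eq.(1): project a 3D point to an up-to-scale bearing vector
    f = (1 / lambda) * K [R | t] F, returning f and the depth lambda."""
    P = np.hstack([R, t.reshape(3, 1)])    # 3x4 projection matrix [R | t]
    F = np.append(F_xyz, 1.0)              # homogeneous coordinates
    x = K @ P @ F
    lam = x[2]                             # projective depth lambda_{m,n}
    return x / lam, lam                    # f = [f1, f2, 1]^T

# Toy example: identity intrinsics and rotation, camera 5 m above the point
K = np.eye(3)
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
f, lam = project(K, R, t, np.array([1.0, 2.0, 0.0]))
print(f, lam)                              # [0.2 0.4 1.0], depth 5.0
```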
The complete image projections, namely the 2D feature tracks \\(\\mathcal{T}\\), can be gathered into a measurement matrix \\(\\mathbf{W}_{\\text{mes}}\\) in Eq.(3): \\[\\mathbf{W}_{\\text{mes}}\\equiv\\begin{bmatrix}\\lambda_{1,1}\\mathbf{f}_{1,1}&\\lambda_{1,2}\\mathbf{f}_{1,2}&\\cdots&\\lambda_{1,N}\\mathbf{f}_{1,N}\\\\ \\lambda_{2,1}\\mathbf{f}_{2,1}&\\lambda_{2,2}\\mathbf{f}_{2,2}&\\cdots&\\lambda_{2,N}\\mathbf{f}_{2,N}\\\\ \\vdots&\\vdots&\\ddots&\\vdots\\\\ \\lambda_{M,1}\\mathbf{f}_{M,1}&\\lambda_{M,2}\\mathbf{f}_{M,2}&\\cdots&\\lambda_{M,N}\\mathbf{f}_{M,N}\\end{bmatrix} \\tag{3}\\] \\[=\\begin{bmatrix}\\mathbf{K}\\mathbf{P}_{1}\\\\ \\mathbf{K}\\mathbf{P}_{2}\\\\ \\vdots\\\\ \\mathbf{K}\\mathbf{P}_{M}\\end{bmatrix}\\begin{bmatrix}\\mathbf{F}_{1}\\\\ \\mathbf{F}_{2}\\\\ \\vdots\\\\ \\mathbf{F}_{N}\\end{bmatrix}^{\\top}.\\] If the 3D scene points in the targeted survey area \\(\\mathcal{F}\\) are observed by all the images, the camera poses and 3D scene points can be recovered using Singular Value Decomposition (SVD) of \\(\\mathbf{W}_{\\text{mes}}\\) (Sturm and Triggs, 1996). For the common cases of missing observations, the SVD can be replaced with iterative methods (Magerand and Del Bue, 2017; Dai et al., 2013). However, these methods are usually too weak for AAT in the presence of outliers and noise (Iglesias et al., 2023), and cannot be directly applied to large-scale AAT. Nevertheless, the formulation of \\(\\mathbf{W}_{\\text{mes}}\\) provides an ideal way of keeping spatial correlation information for neural networks.

### Permutation Equivariant Layer

Let \\(\\mathbf{W}\\) be a tensor with the shape of \\(M\\times N\\times D\\), whose rows index the images, whose columns index the feature tracks, and whose third dimension indexes the feature channels. Taking the measurement matrix \\(\\mathbf{W}_{\\text{mes}}\\) in Eq.(3) as an example, \\(\\mathbf{W}_{\\text{mes}}\\) can be rearranged with the shape of \\(M\\times N\\times 2\\) (the third dimension records the feature coordinates on the image plane) to serve as input for the neural network for the sake of convenience. To recover the camera poses and 3D scene points using a deep neural network, we expect a particular layer to output the same results irrespective of the order of the camera poses or the feature tracks. This reordering problem can be solved using the Permutation Equivariant Layer (PEL) (Hartford et al., 2018), which was first introduced by Moran et al. (2021) to handle the SfM problem by exploiting the tensor's exchangeability.

**Definition 1.** Exchangeability of a tensor \\(\\mathbf{W}\\) gives rise to the following property: If we permute arbitrary rows and columns of \\(\\mathbf{W}\\), then feed the permuted \\(\\mathbf{W}\\) into a PEL, the output tensor \\(\\mathbf{W}^{\\prime}\\) should experience the same permutation of the rows and columns, as illustrated in Fig.1.

Figure 1: Exchangeability of tensor \\(\\mathbf{W}\\).

**Theorem 1.** (Hartford et al., 2018) Taking tensor \\(\\mathbf{W}\\) as input, the PEL with five unique parameters \\(h_{1}^{(d,o)}\\), \\(h_{2}^{(d,o)}\\), \\(h_{3}^{(d,o)}\\), \\(h_{4}^{(d,o)}\\), and \\(h_{5}^{(o)}\\) guarantees that the output tensor \\(\\mathbf{W}^{\\prime}\\) with size of \\(M\\times N\\times O\\) is exchangeable, following the fully connected layer calculation rule in Eq.(4): \\[\\mathbf{W}^{\\prime}_{m,n,o}=\\sum_{d}\\Big(h_{1}^{(d,o)}\\mathbf{W}_{m,n,d}+h_{2}^{(d,o)}\\frac{1}{M}\\sum_{m^{\\prime}}\\mathbf{W}_{m^{\\prime},n,d}+h_{3}^{(d,o)}\\frac{1}{N}\\sum_{n^{\\prime}}\\mathbf{W}_{m,n^{\\prime},d}+h_{4}^{(d,o)}\\frac{1}{MN}\\sum_{m^{\\prime},n^{\\prime}}\\mathbf{W}_{m^{\\prime},n^{\\prime},d}\\Big)+h_{5}^{(o)}, \\tag{4}\\] where \\(d\\) and \\(o\\) are the indexes for the input and output feature channels, respectively. Inspired by the initial work proposed by Moran et al.
(2021), we also utilize PEL to extract exchangeable high-level geometry correlations from the feature track matrix \\(\\mathbf{W}\\). However, different from Moran et al. (2021), our proposed method takes into account not only the geometric features but also the spectral features of the feature tracks. Furthermore, the outliers in the feature track matrix are also automatically rejected to enhance the robustness of the results.

## 3 System Overview

The proposed efficient UAV-based mapping system illustrated in Fig.2 is briefly introduced in this section.

Figure 2: System overview of the efficient UAV-based mapping system.

As most UAV controllers, such as PixHawk (Meier et al., 2012), depend on GPS for trajectory planning and tracking in survey applications, it is assumed that each UAV image is geotagged with GPS information provided by the UAV controller. Although the Single Point Positioning (SPP) error of GPS can reach 10 meters on the UAV, it can still serve as a useful guide for the image matching process, focusing on matching nearby images only, as demonstrated by Schonberger and Frahm (2016). To be compatible with distributed parallel processing and limit the GPU memory usage on one computing unit for large-scale UAV-based mapping, the proposed system exploits the hierarchical SfM scheme (Chen et al., 2020; Xu et al., 2021) and contains three components, namely, (1) **image clustering**, (2) **DeepAAT**, and (3) **cluster merging**.

(1) **Image clustering** divides the entire image set into multiple subsets considering the 2D feature correspondences between images. By treating the complete image set as a scene graph \\(\\mathbf{G(V,E)}\\) (Zhu et al., 2018), each image represents a vertex in \\(\\mathbf{V}\\), and an edge between two image vertices exists in \\(\\mathbf{E}\\) if the two images share feature correspondences. Setting the number of correspondences between images as the edge weight, \\(\\mathbf{G(V,E)}\\) is segmented using normalized cut (Shi and Malik, 2000) iteratively until the number of images in each subset is within a desired number \\(N_{subset}\\). \\(N_{subset}\\) can be set according to the GPU memory on each computing unit.

**Remark 1.** The number of 2D feature correspondences between pairs of images typically serves as a crucial metric for evaluating the reliability of relative matching. In essence, the greater the number of 2D feature correspondences between image pairs, the higher their matching reliability; conversely, the fewer the matches, the lower the reliability. Our goal is to achieve a strong level of mutual matching within each subset. Consequently, the objective of the normalized cut operation on \\(\\mathbf{G(V,E)}\\) is to minimize the sum of edge weights within the cut, while also ensuring a balanced distribution of elements in each subset to enhance the computational efficiency of the following DeepAAT.

(2) **DeepAAT** efficiently and robustly recovers camera poses and structural information within each cluster. The network structure and implementation details of DeepAAT will be described in Section 4.

(3) **Cluster merging** conducts a global bundle adjustment over all images, taking the camera poses from each subset as initial values, hence recovering the complete set of camera poses and structure in the targeted survey area. More specifically, after rejecting the outlier tracks identified by DeepAAT, the remaining feature tracks are re-triangulated using the initial camera poses resulting from DeepAAT. Then the global bundle adjustment is performed once to get the final result. Readers can refer to Triggs et al. (2000) for additional details on re-triangulation and global bundle adjustment.
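Because the network described in the next section is assembled from PELs, a compact PyTorch sketch of the layer in Eq.(4) may be helpful. It is a dense-tensor simplification: DeepAAT operates on sparse measurement matrices, where the means would run over observed entries only.

```python
import torch
import torch.nn as nn

class PermEqLayer(nn.Module):
    """Permutation-equivariant layer of Eq.(4) (Hartford et al., 2018): five
    parameter sets mix the entry itself, its mean over cameras, its mean
    over tracks, the global mean, and a bias."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.h1 = nn.Linear(d_in, d_out, bias=False)   # per-entry term
        self.h2 = nn.Linear(d_in, d_out, bias=False)   # mean over cameras
        self.h3 = nn.Linear(d_in, d_out, bias=False)   # mean over tracks
        self.h4 = nn.Linear(d_in, d_out, bias=False)   # global mean
        self.h5 = nn.Parameter(torch.zeros(d_out))     # bias

    def forward(self, W):                               # W: (M, N, D)
        cam_mean = W.mean(dim=0, keepdim=True)          # (1, N, D)
        trk_mean = W.mean(dim=1, keepdim=True)          # (M, 1, D)
        glob = W.mean(dim=(0, 1), keepdim=True)         # (1, 1, D)
        return (self.h1(W) + self.h2(cam_mean) + self.h3(trk_mean)
                + self.h4(glob) + self.h5)

# Exchangeability check: permuting cameras/tracks commutes with the layer
layer = PermEqLayer(2, 8)
W = torch.randn(5, 7, 2)
pr, pc = torch.randperm(5), torch.randperm(7)
assert torch.allclose(layer(W[pr][:, pc]), layer(W)[pr][:, pc], atol=1e-5)
```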
## 4 Network Architecture of DeepAAT

The network architecture of DeepAAT mainly consists of three parts: the spatial-spectral feature aggregation module (Section 4.1), the global consistency-based outlier rejecting module (Section 4.2), and the pose decode module (Section 4.3), which are illustrated in Fig.3.

Figure 3: Network architecture of DeepAAT.

The input of DeepAAT includes the feature measurement matrix \\(\\mathbf{W}_{mes}\\) constructed by reordering Eq.(3) with the shape of \\(M\\times N\\times 2\\), the SIFT feature descriptor matrix \\(\\mathbf{W}_{des}\\) with the shape of \\(M\\times N\\times 128\\), and the geotag matrix \\(\\mathbf{W}_{gps}\\) with the shape of \\(M\\times 3\\). It is important to note that the GPS-derived latitude, longitude, and altitude values are initially transformed into the East-North-Up (ENU) local coordinate system (Shi and El-Sheimy, 2002). Following this transformation, they are further normalized to enhance network generalizability. By feeding the input into the spatial-spectral feature aggregation module, a high-level embedded feature \\(\\mathbf{W}_{em}\\) is extracted for the downstream tasks. Then, the global consistency-based outlier rejecting module detects the outliers in the measurement matrix \\(\\mathbf{W}_{mes}\\) and gives the confidence for each 2D observation using \\(\\mathbf{W}_{outlier}\\). Meanwhile, the pose decode module recovers the camera poses. The loss functions will be detailed in Section 4.4.

### Spatial-Spectral Feature Aggregation Module

**Position Encoding:** The coordinates within the feature measurement matrix \\(\\mathbf{W}_{mes}\\) and the GPS matrix \\(\\mathbf{W}_{gps}\\) are both essential for the network to comprehend the spatial distribution. Position encoding (Mildenhall et al., 2021) has been employed for both of these location-related pieces of information to enhance the network's ability to distinguish relative positional relationships among input data, and is written as follows: \\[\\varepsilon(x)=[\\sin(2^{0}\\pi x),\\cos(2^{0}\\pi x),\\ldots,\\sin(2^{L-1}\\pi x),\\cos(2^{L-1}\\pi x)]^{\\top}, \\tag{5}\\] where \\(x\\) is the coordinate value in each dimension and \\(L\\) is the coding level. Position encoding solely influences the last dimension of \\(\\mathbf{W}_{mes}\\) and \\(\\mathbf{W}_{gps}\\). As for the GPS matrix \\(\\mathbf{W}_{gps}\\), to ensure its dimensional consistency with \\(\\mathbf{W}_{mes}\\) and \\(\\mathbf{W}_{des}\\), the proposed method conducts a dimension expansion operation, replicating each image's geotag along the feature track dimension and transforming it from a two-dimensional matrix of \\(M\\times 3\\) into a three-dimensional one of \\(M\\times N\\times 3\\). Position encoding does not have learnable parameters, but it enhances the network's ability to distinguish location information.

**Residual Permutation Equivariant Layer:** The residual PEL employed in this paper consists of a consecutive pair of PEL, Instance Normalization (IN) (Ulyanov et al., 2016), and Rectified Linear Unit (ReLU) combinations.
**Residual Permutation Equivariant Layer:** The residual PEL employed in this paper consists of a consecutive pair of PEL, Instance Normalization (IN) (Ulyanov et al., 2016), and Rectified Linear Unit (ReLU) combinations. Within the residual PEL, input and output are summed through skip connections (He et al., 2016), serving two purposes: (1) ensuring stable gradient propagation within the network; and (2) facilitating the fusion of shallow layers, which retain more of the actual positional information, with deeper layers, which offer more distinctive and discriminative features. It is worth noting that IN and ReLU do not alter the permutation equivariance of the PEL.

### 4.2 Global Consistency-based Outlier Rejecting Module

Even with pair-wise epipolar geometry verification, \(\mathbf{W}_{mes}\) still contains a substantial number of outlier matches, which can significantly impede correct global triangulation and the proper convergence of the BA. Therefore, the proposed method employs a global consistency-based outlier rejecting module which, by leveraging the global information in the embedded feature \(\mathbf{W}_{em}\), ensures that the subsequent BA operates successfully.

The module primarily comprises three consecutive projection layers, i.e. fully connected (FC) layers that change the number of channels of the non-empty entries of the sparse matrices, \(Proj:\mathbb{R}^{d}\rightarrow\mathbb{R}^{d^{\prime}}\). The first two projection layers are each followed by Context Normalization (CN) and a ReLU activation. The CN integrates the data, allowing the previous layer's output to acquire context information across both camera poses and feature tracks, which aids the network in identifying outliers. The last projection layer is followed by a sigmoid activation, so the network's output is a score matrix of dimensions \(M\times N\) with values ranging from 0 to 1. For each non-empty entry, the score represents the probability that the corresponding 2D feature is an inlier: the closer the score is to 1, the higher the confidence that it is an inlier. In the outlier detection process, given a threshold \(\tau_{outlier}\), predicted scores greater than \(\tau_{outlier}\) are considered inliers, while lower scores are considered outliers.
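A minimal sketch of this outlier rejecting head follows, assuming particular layer widths and a per-sample context normalization computed over the non-empty observations; these details are illustrative and not taken from the paper.

```python
# Minimal sketch of the global consistency-based outlier rejecting head.
import torch
import torch.nn as nn

def context_norm(x, mask, eps=1e-6):
    """Normalize features over all non-empty observations of a sample.
    x: (M, N, C) features; mask: (M, N) with 1 for observed entries."""
    m = mask[..., None]
    n = m.sum()
    mean = (x * m).sum(dim=(0, 1), keepdim=True) / n
    var = (((x - mean) ** 2) * m).sum(dim=(0, 1), keepdim=True) / n
    return (x - mean) / torch.sqrt(var + eps) * m

class OutlierHead(nn.Module):
    def __init__(self, d_in=256, d_mid=128):   # widths are placeholders
        super().__init__()
        self.proj1 = nn.Linear(d_in, d_mid)
        self.proj2 = nn.Linear(d_mid, d_mid)
        self.proj3 = nn.Linear(d_mid, 1)

    def forward(self, w_em, mask):
        x = torch.relu(context_norm(self.proj1(w_em), mask))
        x = torch.relu(context_norm(self.proj2(x), mask))
        return torch.sigmoid(self.proj3(x)).squeeze(-1)  # (M, N) inlier scores

scores = OutlierHead()(torch.randn(8, 100, 256), torch.ones(8, 100))
inliers = scores > 0.5   # tau_outlier = 0.5, as in Section 5.2.4
```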
### 4.3 Pose Decode Module

The pose decode module utilizes the global information encoded by the spatial-spectral feature aggregation module to decode each camera's position and orientation. The decoder first performs mean pooling on the input feature along the column dimension (feature tracks), which maps the embedded feature \(\mathbf{W}_{em}\) of shape \(M\times N\times O_{em}\) to \(M\times O_{em}\). Mean pooling is chosen because each camera observes a different number of 3D points in the input scene; by averaging the features observed by each camera, the network can fairly represent the general characteristics of the scene observed by the \(M\) cameras, regardless of the number of 3D points observed. This provides the decoder with global context information for each camera within the scene.

**Camera position decoder:** In the camera position decoding branch, the GPS location information is reintroduced to improve the decoder's spatial positioning awareness. Additionally, the network regresses each camera's position offset, which reflects the error of the GPS tag, rather than the camera's position directly. This is done because the magnitudes of the GPS errors in each direction consistently fall within a specific range.

**Camera rotation decoder:** In the camera rotation decoding branch, the quaternion of each camera is regressed with two perceptron layers.

### 4.4 Loss Function

The loss function of DeepAAT comprises three components, \(\mathcal{L}_{outlier}\), \(\mathcal{L}_{position}\), and \(\mathcal{L}_{rotation}\):

\[\mathcal{L}=\mathcal{L}_{outlier}+\alpha\,\mathcal{L}_{rotation}+\beta\,\mathcal{L}_{position}, \tag{6}\]

where \(\alpha\) and \(\beta\) are balance factors. \(\mathcal{L}_{outlier}\) is a Binary Cross-Entropy (BCE) like loss, evaluated over the non-empty entries of the score matrix, used to supervise the global consistency-based outlier rejecting module:

\[\mathcal{L}_{outlier}=-\frac{1}{|\Omega|}\sum_{(m,n)\in\Omega}\left[\mathbf{W}_{outlier}^{m,n}\log\hat{\mathbf{W}}_{outlier}^{m,n}+\left(1-\mathbf{W}_{outlier}^{m,n}\right)\log\left(1-\hat{\mathbf{W}}_{outlier}^{m,n}\right)\right], \tag{7}\]

where \(\Omega\) is the set of non-empty entries, \(\mathbf{W}_{outlier}^{m,n}\) is the ground-truth inlier label, and \(\hat{\mathbf{W}}_{outlier}^{m,n}\) is the predicted outlier score ranging over \([0,1]\). Both the rotation loss \(\mathcal{L}_{rotation}\) and the position loss \(\mathcal{L}_{position}\) are implemented as mean \(\ell_{2}\) losses:

\[\begin{split}\mathcal{L}_{rotation}&=\frac{1}{M}\sum_{m=0}^{M-1}||\mathbf{q}_{m}-\hat{\mathbf{q}}_{m}||_{2},\\ \mathcal{L}_{position}&=\frac{1}{M}\sum_{m=0}^{M-1}||\mathbf{t}_{m}-\hat{\mathbf{t}}_{m}||_{2},\end{split} \tag{8}\]

where \(\hat{\mathbf{q}}_{m}\) and \(\hat{\mathbf{t}}_{m}\) are the predicted rotation and translation of the \(m^{th}\) camera.
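The combined loss can be sketched as follows, assuming a masked binary cross-entropy over the observed entries for Eq. (7); following Section 5.2.4, the larger weight is assigned to the rotation term. Variable names are illustrative.

```python
# Minimal sketch of the combined loss of Eqs. (6)-(8).
import torch
import torch.nn.functional as F

def deepaat_loss(score, inlier_gt, mask, q_pred, q_gt, t_pred, t_gt,
                 alpha=0.9, beta=0.1):
    """score, inlier_gt, mask: (M, N); q_*: (M, 4) quaternions; t_*: (M, 3)."""
    l_outlier = F.binary_cross_entropy(score[mask], inlier_gt[mask])  # Eq. (7)
    l_rot = (q_pred - q_gt).norm(dim=-1).mean()                       # Eq. (8)
    l_pos = (t_pred - t_gt).norm(dim=-1).mean()
    return l_outlier + alpha * l_rot + beta * l_pos                   # Eq. (6)

loss = deepaat_loss(torch.rand(8, 100), torch.randint(0, 2, (8, 100)).float(),
                    torch.ones(8, 100, dtype=torch.bool),
                    torch.randn(8, 4), torch.randn(8, 4),
                    torch.randn(8, 3), torch.randn(8, 3))
```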
## 5 Experiments

### 5.1 Dataset

The experimental data, depicted in Fig.4, were collected over an urban area containing complex road networks, hills, construction sites, and buildings. A total of 4,992 UAV images were employed, subdivided into eight uniform blocks labeled A to H. These images were preprocessed through SfM with high-precision Ground Control Points (GCPs) to establish the reference data. Throughout the experiments, blocks A to G were used for training, while block H was used for testing. During dataset preparation, feature points that can be successfully matched but do not appear in the final AAT result are labeled as outliers.

Figure 4: UAV-based image dataset used for the experiments. The dataset is divided into eight blocks.

### 5.2 Implementation Details

#### 5.2.1 Training sample generation

Training samples were generated by randomly sampling images within each block. In UAV AAT, images that are closer to each other typically share more feature correspondences and exhibit more stable connectivity. Specifically, for a given image set, one image is randomly selected as the central image, denoted \(\mathbf{I}_{c}\). Then, based on the GPS positions, the Euclidean distances of all other images to \(\mathbf{I}_{c}\) are computed and sorted in ascending order. Finally, given the minimum and maximum sampling limits \(N_{min}\) and \(N_{max}\), a random number of sampled images \(N_{c}\) is drawn, and the \(N_{c}\) images closest to \(\mathbf{I}_{c}\) are selected to generate the sample. Using this sampling strategy, a vast number of distinct samples can be generated: with \(N_{min}\) and \(N_{max}\) set to 100 and 130, a single block of 624 images can generate \(624\times(N_{max}-N_{min})=18{,}720\) samples.

#### 5.2.2 Data augmentation

To enhance the network's generalizability, data augmentation is applied to the training data in two respects. First, Gaussian noise with zero mean and a standard deviation of 0.01 is added to the input \(\mathbf{W}_{mes}\). Second, given the limited amount of training data, and to exploit the network's permutation equivariance, random row and column permutations are applied to the training samples in advance.

#### 5.2.3 Sparse matrix

Because the number of scene points observable in each image is limited, the measurement matrix \(\mathbf{W}_{mes}\) and the descriptor matrix \(\mathbf{W}_{des}\) contain a large number of zero elements. These matrices are therefore implemented as sparse matrices in the code to improve the processing efficiency of the network.

#### 5.2.4 Parameter settings

The experiments involve the configuration of several hyperparameters. For position encoding, the coding level is set to \(L=4\). In the spatial-spectral feature aggregation module, the embedded feature dimension is set to \(O_{em}=256\). In the outlier rejecting module, the outlier detection threshold is set to \(\tau_{outlier}=0.5\). For the weights in the loss function, the parameters \(\alpha\) and \(\beta\), which govern the weights of rotation and translation, are assigned values of 0.9 and 0.1, respectively. This allocation stems from the observation that, in contrast to the directly predicted rotation, the predicted translation is effectively an adjustment relative to the initial GPS estimate; consequently, it is appropriate to assign a lesser weight to translation than to rotation.

#### 5.2.5 Evaluation criteria

This paper evaluates the experimental results from several perspectives. For the outlier removal results, evaluation metrics commonly used in binary classification tasks are employed, namely \(Accuracy\), \(Precision\), \(Recall\), and the \(F_{1}\) score:

\[\begin{split} Accuracy&=\frac{TP+TN}{TP+FP+TN+FN},\\ Precision&=\frac{TP}{TP+FP},\\ Recall&=\frac{TP}{TP+FN},\\ F_{1}&=\frac{2\times Precision\times Recall}{Precision+Recall},\end{split} \tag{9}\]

where \(TP\) is True Positive, \(FP\) is False Positive, \(TN\) is True Negative, and \(FN\) is False Negative. For the reconstruction results, we use the reprojection error \(E_{repo}\), the position error \(E_{t}\), and the angle error \(E_{R}\):

\[\left\{\begin{aligned} E_{repo}&=\frac{1}{n_{2d}}\sum_{i=1}^{m}\sum_{j=1}^{n}\left\|\left(x_{ij}^{1}-\frac{P_{i}^{1}X_{j}}{P_{i}^{3}X_{j}},\;x_{ij}^{2}-\frac{P_{i}^{2}X_{j}}{P_{i}^{3}X_{j}}\right)\right\|_{2},\\ E_{t}&=\frac{1}{m}\sum_{i=1}^{m}\left\|\hat{t}_{i}-\tilde{t}_{i}\right\|_{2},\\ E_{R}&=\frac{1}{m}\sum_{i=1}^{m}\cos^{-1}\left(\frac{1}{2}\left(tr\left(\hat{R}_{i}^{\top}\widetilde{R}_{i}\right)-1\right)\right),\end{aligned}\right. \tag{10}\]

where \(n_{2d}\) is the number of 2D observations in the scene, \(m\) is the number of cameras, \(n\) is the number of 3D points, \(x_{ij}^{k}\) is the \(k\)th coordinate of the observation of the \(j\)th 3D point in the \(i\)th image, \(P_{i}^{k}\) denotes the \(k\)th row of the \(i\)th camera matrix, \(X_{j}\) denotes the coordinates of the \(j\)th 3D point, \(\hat{t}_{i}\) and \(\hat{R}_{i}\) are the reference camera position and rotation, \(\tilde{t}_{i}\) and \(\widetilde{R}_{i}\) are the predicted camera position and rotation, and \(tr()\) denotes the trace of a matrix (i.e. the sum of its main diagonal elements). In addition, we record the time used for network prediction and compare it with the baseline methods as an important evaluation indicator.
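A minimal sketch of the position and rotation error metrics \(E_{t}\) and \(E_{R}\) of Eq. (10), with rotations assumed to be given as 3x3 matrices:

```python
# Minimal sketch of the pose error metrics of Eq. (10).
import numpy as np

def position_error(t_ref, t_pred):
    """E_t: mean Euclidean distance between reference and predicted positions."""
    return np.linalg.norm(t_ref - t_pred, axis=-1).mean()

def rotation_error_deg(R_ref, R_pred):
    """E_R: mean geodesic angle (in degrees) between rotation matrices."""
    rel = np.transpose(R_ref, (0, 2, 1)) @ R_pred   # R_ref^T R_pred per camera
    tr = np.trace(rel, axis1=1, axis2=2)            # trace = 1 + 2 cos(theta)
    cos = np.clip((tr - 1.0) / 2.0, -1.0, 1.0)      # guard against rounding
    return np.degrees(np.arccos(cos)).mean()

# Identity check: identical rotations and positions give zero error.
R = np.tile(np.eye(3), (5, 1, 1))
print(rotation_error_deg(R, R), position_error(np.zeros((5, 3)), np.zeros((5, 3))))
```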
\\tag{10}\\] where \\(n_{2d}\\) represents the number of 2D pixels in the scene, \\(m\\) is the number of cameras, \\(n\\) is the number of 3D points, and \\(x_{ij}^{k}\\) represents the \\(k\\)th dimension of the coordinate of the \\(j\\)th 3D point observed by the \\(i\\)th image, \\(P_{i}^{k}\\) denotes the \\(k\\)th row of the \\(i\\)th camera matrix, \\(X_{j}\\) denotes the coordinate of the \\(j\\)th 3D point, \\(\\hat{t}_{i}\\) is the reference value of camera position, \\(\\tilde{t}_{i}\\) is the predicted camera position, \\(\\hat{R}_{i}\\) is the reference value of camera rotation, \\(\\widetilde{R}_{i}\\) is the predicted camera rotation value, and \\(tr()\\) denotes the trace of the matrix (i.e. the sum of the main diagonal elements of the matrix). In addition, we also record the time used in network prediction and compare it with baseline methods as an important evaluation indicator. #### 5.2.6 Computation resources The configuration of the machine used in our experiments is as follows. CPU: Intel (R) Xeon (R) Silver 4210R CPU @ 2.40GHz, GPU: NVIDIA A100 SXM4 80GB. To control the memory size and computational complexity, all network training and prediction tasks involved in this article can be run on a single Tesla V100 with 32GB memory. ### Results of scene segmenting and merging To facilitate large-scale reconstruction tasks, we employed a strategy of image clustering and merging. During the image clustering phase, we set a maximum limit of 100 cameras for each subset. Consequently, block H was ultimately divided into 8 distinct blocks, with each subset containing between 72 to 95 cameras. The clustering outcome is illustrated in Fig.5, where each circle denotes a camera, and different colors represent individual subsets. Notably, the gray lines in the figure are the severed edges that connect cameras across different subsets. Table 1 shows that the average performance of the outlier rejection in the scene surpasses 0.95 across all four metrics, with a notable recall rate of 0.973. This indicates that about 97.3% of the pixels identified by the network as positive are indeed true positives. Such high accuracy is advantageous for the subsequent steps of global triangulation and BA. This also demonstrates that the global consistency-based outlier rejection module, as designed in this paper, is highly effective and applicable throughout the entire algorithmic process. Moreover, for clustered scenes, the network's prediction time is under one second, highlighting the proposed network's high operational efficiency in AAT tasks. Here, we predict all 8 scenarios through a single model loading, and except for the first scenario, the remaining 7 scenarios do not require reloading the model, resulting in a nearly tenfold reduction in time consumption. Table 2 demonstrates that across all eight clustered scenes, the predicted camera position error is consistently lower than the initial position error. The results show greater precision in predicting rotation, with an average error of less than 2\\({}^{\\circ}\\), which is highly beneficial for accurate subsequent adjustments. 
After BA, the average reprojection error across all scenes is less than 0.5 pixels, and the rotation error is under 0.1\({}^{\circ}\). The visualized results, depicted in Fig.6, further affirm the effectiveness of the algorithm.

\begin{table}
\begin{tabular}{c c c c c c}
\hline
Scene & \(\uparrow\)Acc & \(\uparrow\)Pre & \(\uparrow\)Rec & \(\uparrow\)F1 & Time/s \\
\hline
H1\_95 & 0.959 & 0.966 & 0.980 & 0.973 & 0.820 \\
H2\_73 & 0.933 & 0.960 & 0.952 & 0.956 & 0.087 \\
H3\_79 & 0.947 & 0.966 & 0.968 & 0.967 & 0.094 \\
H4\_76 & 0.951 & 0.960 & 0.974 & 0.967 & 0.085 \\
H5\_72 & 0.967 & 0.978 & 0.982 & 0.980 & 0.117 \\
H6\_82 & 0.965 & 0.980 & 0.979 & 0.980 & 0.129 \\
H7\_75 & 0.952 & 0.975 & 0.966 & 0.970 & 0.092 \\
H8\_72 & 0.967 & 0.981 & 0.980 & 0.980 & 0.080 \\
Mean & 0.955 & 0.971 & 0.973 & 0.972 & 0.188 \\
\hline
\multicolumn{6}{l}{*Acc (Accuracy), Pre (Precision), Rec (Recall), F1 (F1 Score)} \\
\end{tabular}
\end{table}
Table 1: Outlier rejection results.

Figure 5: Image clustering result, where the number after "\_" represents the number of cameras included in the subset.

In the following, the cluster merging algorithm described in Section 3 is used to fuse the above 8 segmented scenes; the results are shown in Fig.7 and Tab.3. From Fig.7 it can be seen that when the segmented scenes are globally aligned using GPS information alone, significant offsets remain between them. After cluster merging, these inconsistencies are effectively eliminated, yielding globally consistent fusion results similar in appearance to the reference. From Tab.3 it can be seen that the reprojection error of the merged scene slightly decreases compared to the average reprojection error of the segmented scenes, whereas the position and rotation errors slightly increase compared to the averages of the segmented scenes. In addition, the number of reconstructed scene points slightly exceeds that of the reference scene. These results indicate that the proposed hierarchical AAT scheme can effectively complete large-scale AAT tasks.

### 5.4 Comparison

We compare the proposed algorithm with the following state-of-the-art methods:

ESFM (Moran et al., 2021): A neural network architecture that takes the track points as input in matrix form. It simultaneously predicts camera poses and scene points and uses the reprojection error as its loss function.

Colmap (Schonberger and Frahm, 2016): A state-of-the-art open-source incremental SfM pipeline, widely used for pose estimation and scene reconstruction tasks. Colmap provides both a UI and a command-line mode, is easy to operate, and delivers good reconstruction results.

OpenMVG (Moulon et al., 2013): Provides both incremental and global SfM implementations, its global SfM being the current state of the art among open-source libraries. Step-by-step SfM can easily be run from the command line.

Because both ESFM and the proposed method output results before BA, we directly compare the predictions of the two networks in terms of reprojection error, position error, and rotation error. The results are shown in Tab.4, and some of the predicted scenes are shown in Fig.8. To ensure consistency in the learning space of ESFM, we standardized all input scenes, allowing the network to learn the relative positions of all cameras with respect to the first camera in the scene.
When comparing the proposed DeepAAT with Colmap and OpenMVG, we directly compare the final reconstruction results after BA, including the reprojection error, the final scene points, and the time consumption. The results are shown in Tab.5, and the reconstructed scenes are shown in Fig.9.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
\multirow{2}{*}{Scene} & \multirow{2}{*}{IPE/m} & \multicolumn{3}{c}{DeepAAT} & \multicolumn{3}{c}{Results after BA} \\
\cline{3-8}
 & & RPE/pix & PE/m & RE/\({}^{\circ}\) & RPE/pix & PE/m & RE/\({}^{\circ}\) \\
\hline
H1\_95 & 5.153 & 64.955 & 4.170 & 1.961 & 0.490 & 2.732 & 0.032 \\
H2\_73 & 5.159 & 52.222 & 4.495 & 1.322 & 0.459 & 1.865 & 0.048 \\
H3\_79 & 5.192 & 60.994 & 4.941 & 1.845 & 0.475 & 1.994 & 0.032 \\
H4\_76 & 4.955 & 52.681 & 3.868 & 1.903 & 0.467 & 2.570 & 0.023 \\
H5\_72 & 5.259 & 45.699 & 4.184 & 1.155 & 0.487 & 2.583 & 0.027 \\
H6\_82 & 5.417 & 59.702 & 4.376 & 2.042 & 0.464 & 1.938 & 0.027 \\
H7\_75 & 5.262 & 53.557 & 4.530 & 1.897 & 0.477 & 1.801 & 0.034 \\
H8\_72 & 5.091 & 45.062 & 4.200 & 1.813 & 0.466 & 2.375 & 0.028 \\
Mean & 5.186 & 54.302 & 4.346 & 1.742 & 0.476 & 2.232 & 0.031 \\
\hline \hline
\multicolumn{8}{l}{*IPE (Initial Position Error), RPE (Reprojection Error), PE (Position Error), RE (Rotation Error)} \\
\end{tabular}
\end{table}
Table 2: Pose prediction results.

Figure 6: Network prediction results (upper) and results after BA (lower).

Figure 7: (a) GPS alignment result, (b) cluster merging result, and (c) reference result.

Tab.4 indicates that the prediction errors of the proposed DeepAAT are much smaller than those of ESFM in terms of reprojection, position, and rotation error. The four comparative scenes in Fig.8 show that the scenes predicted by DeepAAT are all correct, whereas the predictions of ESFM are relatively chaotic, so that correct results cannot be obtained through the subsequent global BA. The main reasons are as follows: (1) By integrating GPS information as prior knowledge, DeepAAT significantly enhances its spatial awareness and perception of location. Crucially, DeepAAT predicts the offset of each camera position, a measure relative to the GPS coordinates, rather than attempting to directly ascertain the precise location of each camera. (2) In contrast to ESFM, DeepAAT incorporates a global consistency-based outlier rejection module, which effectively eliminates erroneous matches that persist even after geometric verification. As a result, the predictions produced by DeepAAT are considerably more refined and cleaner. ESFM, in contrast, lacks a denoising capability, and the noise points present in its framework adversely affect the network's ability to accurately learn and represent the correct scene.

Table 5 reveals that DeepAAT surpasses Colmap and OpenMVG in both average reprojection error and average time consumption. Its most striking advantage lies in time efficiency, as DeepAAT significantly outpaces the comparison methods across all test scenes: its average reconstruction efficiency is 453 times that of Colmap, 580 times that of OpenMVG Incremental, and 28 times that of OpenMVG Global. This suggests that the proposed network substantially enhances the efficiency of AAT reconstruction while maintaining scene integrity, and concurrently improves reconstruction accuracy to a certain extent.
As indicated in Table 5, OpenMVG Incremental failed to reconstruct scene 8_127, likely due to the stringent requirements of incremental SfM on the initial image pair selection and the relative instability of the OpenMVG Incremental algorithm.

### 5.5 Ablation Study

To test the impact of the core modules of DeepAAT, the following ablation experiments were conducted:

① The encoding levels of the GPS matrix, \(L_{G}\), and of the measurement matrix, \(L_{mes}\), are set to 2;

② \(L_{G}\) and \(L_{mes}\) are set to 4, which is the setting used in this article;

③ \(L_{G}\) and \(L_{mes}\) are set to 6;

④ The spatial-spectral feature aggregation module is removed; similar to ESFM, only the measurement matrix \(\mathbf{W}_{mes}\) is used as the network input;

⑤ The global consistency-based outlier rejecting module is removed; the predicted poses are used directly to triangulate all matched track points during global triangulation.

The reported results are averages over 10 test scenes and are shown in Tab.6. From Tab.6 it can be seen that: (1) The encoding levels \(L\) of the GPS information and the measurement matrix in the spatial-spectral feature aggregation module have little impact on the results, indicating that good performance can be achieved as long as the encoding level is set within a reasonable range. (2) Upon removal of the spatial-spectral feature aggregation module, there is a marked decline in the network's overall performance, particularly in the pose prediction task: the reprojection error more than doubles compared to the optimal outcome, accompanied by substantial deviations in both position and rotation errors. These findings highlight that the integration of the GPS positioning data and the feature point descriptors into the network input plays a critical role in the network's performance.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Scene} & \multicolumn{3}{c}{ESFM} & \multicolumn{3}{c}{DeepAAT (ours)} \\
\cline{2-7}
 & \(\downarrow\)RPE/pix & \(\downarrow\)PE/m & \(\downarrow\)RE/\({}^{\circ}\) & \(\downarrow\)RPE/pix & \(\downarrow\)PE/m & \(\downarrow\)RE/\({}^{\circ}\) \\
\hline
1\_128 & 410.580 & 229.447 & 62.265 & **46.891** & **4.340** & **1.610** \\
2\_107 & 460.225 & 207.427 & 103.297 & **58.965** & **3.807** & **2.409** \\
3\_112 & 356.813 & 225.512 & 55.177 & **45.562** & **3.905** & **1.422** \\
4\_104 & 442.035 & 209.172 & 101.788 & **58.436** & **4.301** & **2.363** \\
5\_104 & 440.210 & 197.034 & 92.420 & **41.844** & **3.962** & **2.091** \\
6\_106 & 457.345 & 204.852 & 69.944 & **49.994** & **4.087** & **1.713** \\
7\_118 & 397.622 & 205.955 & 60.104 & **43.848** & **3.957** & 1.631 \\
8\_127 & 433.504 & 215.331 & 68.668 & **45.161** & **4.177** & **1.417** \\
9\_127 & 362.216 & 220.011 & 95.567 & **40.667** & **4.455** & **1.676** \\
10\_129 & 479.165 & 223.690 & 66.945 & **44.770** & **4.123** & **1.781** \\
\hline \hline
\end{tabular}
\end{table}
Table 4: Comparison of ESFM and the proposed DeepAAT.

Figure 8: Comparison of partial scenes predicted by ESFM (upper) and DeepAAT (lower).
(3) Upon removal of the global consistency-based outlier rejecting module, the reprojection error increases. As shown in Fig.9, the number of noisy feature points generated by triangulation increases significantly. The outliers included in the scene can negatively impact the subsequent BA and easily lead to a local optimum. For example, the final result after BA for scene 8_127 is shown in Fig.10: one camera (marked with a green dashed box) has had its pose optimized erroneously due to the influence of outliers.

### 5.6 Generalization experiments on different numbers of input images

The proposed network does not require a fixed number of images as input. Therefore, to test the generalization of the network to scenes whose camera numbers differ from the training samples, we apply the model trained on scenes with 100-130 images to predict scenes with 30-50 and 400-430 cameras. The prediction results are shown in Tab.7 and Tab.8.

The results demonstrate that the proposed DeepAAT is versatile, excelling not only in scenes with a similar number of images but also in scenarios with significantly more or fewer images. As indicated in Tab.7, the network trained on scenes with 100-130 images shows a decline in accuracy, precision, recall, and F1 score when applied to 30-50 image scenes. Conversely, its metrics improve on 400-430 image scenes. This improvement could be attributed to the longer average track length of the matching points in scenes with more cameras, which increases the likelihood of points being classified as inliers and reduces false negatives; this explains the high recall of 0.993 in scenes with 400-430 cameras.

Regarding prediction time, the network demonstrates notable efficiency: despite a rapid increase in the number of cameras, the time required for network prediction increases only marginally. This is a significant advantage in practical applications. In traditional AAT algorithms, both incremental and global, time consumption escalates quickly with an increasing number of scene images, a trend particularly pronounced in incremental SfM. Thus, the time-saving benefits of our network become more pronounced with larger sets of scene images.

From Tab.8 we observe that the network trained on scenes with 100-130 images exhibits a slight increase in the average reprojection, position, and rotation errors when predicting scenes with 30-50 and 400-430 images. In scenes such as 1_420 and 5_404, the predicted position errors even surpass the initial position errors.
Nevertheless, following global BA, all scenes achieve accurate reconstruction results, as illustrated in Fig.11.

\begin{table}
\begin{tabular}{c c c c c c c c}
\hline \hline
\multirow{2}{*}{Setting} & \multicolumn{4}{c}{Outlier rejection} & \multicolumn{3}{c}{Pose estimation} \\
\cline{2-8}
 & \(\uparrow\)Acc & \(\uparrow\)Pre & \(\uparrow\)Rec & \(\uparrow\)F1 & \(\downarrow\)RPE/pix & \(\downarrow\)PE/m & \(\downarrow\)RE/\({}^{\circ}\) \\
\hline
① & **0.967** & 0.973 & 0.987 & **0.980** & 49.460 & **3.974** & 2.411 \\
② & 0.966 & **0.974** & 0.985 & 0.979 & **48.556** & 4.123 & **1.781** \\
③ & **0.967** & 0.971 & **0.989** & **0.980** & 57.947 & 4.953 & 2.036 \\
④ & 0.964 & 0.970 & 0.986 & 0.978 & 98.414 & 5.104 & 2.990 \\
⑤ & / & / & / & / & 48.599 & 4.123 & 1.781 \\
\hline \hline
\end{tabular}
\end{table}
Table 6: Results of the ablation study.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
\multirow{2}{*}{Scene} & \multicolumn{2}{c}{Colmap} & \multicolumn{2}{c}{OpenMVG Incremental} & \multicolumn{2}{c}{OpenMVG Global} & \multicolumn{2}{c}{DeepAAT (Ours)} \\
\cline{2-9}
 & \(\downarrow\)RPE/pix & \(\downarrow\)Time/s & \(\downarrow\)RPE/pix & \(\downarrow\)Time/s & \(\downarrow\)RPE/pix & \(\downarrow\)Time/s & \(\downarrow\)RPE/pix & \(\downarrow\)Time/s \\
\hline
1\_128 & 0.500 & 465.966 & **0.478** & 548.536 & 0.548 & 29.651 & 0.489 & **0.845** \\
2\_107 & 0.566 & 296.569 & 0.518 & 390.071 & 0.611 & 15.830 & **0.482** & **0.832** \\
3\_112 & 0.501 & 377.211 & 0.479 & 601.795 & 0.539 & 28.098 & **0.472** & **0.861** \\
4\_104 & 0.554 & 276.811 & 0.522 & 418.626 & 0.611 & 14.442 & **0.491** & **0.798** \\
5\_104 & 0.528 & 285.427 & 0.473 & 367.106 & 0.590 & 13.879 & **0.447** & **0.780** \\
6\_106 & 0.506 & 365.869 & **0.450** & 417.672 & 0.516 & 19.331 & 0.484 & **0.815** \\
7\_118 & 0.507 & 375.640 & 0.478 & 564.906 & 0.550 & 25.276 & **0.467** & **0.831** \\
8\_127 & 0.521 & 437.120 & \multicolumn{2}{c}{Reconstruction failure} & 0.548 & 34.464 & **0.473** & **0.862** \\
9\_127 & **0.478** & 439.329 & 0.481 & 423.064 & 0.551 & 22.412 & 0.483 & **0.859** \\
10\_129 & 0.503 & 465.138 & **0.458** & 615.676 & 0.522 & 28.488 & 0.487 & **0.868** \\
Mean & 0.516 & 378.508 & 0.482 & 484.050 & 0.559 & 23.187 & **0.478** & **0.835** \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Comparison results between DeepAAT and traditional algorithms.

\begin{table}
\begin{tabular}{c c c c c c c}
\hline \hline
\multirow{2}{*}{Number of images} & Scene & \(\uparrow\)Acc & \(\uparrow\)Pre & \(\uparrow\)Rec & \(\uparrow\)F1 & Time/s \\
\hline
\multirow{6}{*}{30-50} & 1\_45 & 0.952 & 0.966 & 0.973 & 0.971 & 0.720 \\
 & 2\_42 & 0.964 & 0.976 & 0.984 & 0.979 & 0.713 \\
 & 3\_50 & 0.950 & 0.961 & 0.976 & 0.969 & 0.749 \\
 & 4\_34 & 0.920 & 0.953 & 0.947 & 0.950 & 0.698 \\
 & 5\_35 & 0.942 & 0.970 & 0.955 & 0.962 & 0.699 \\
 & Mean & 0.946 & 0.966 & 0.967 & 0.966 & 0.716 \\
\hline
\multirow{6}{*}{400-430} & 1\_420 & 0.971 & 0.973 & 0.992 & 0.982 & 1.186 \\
 & 2\_428 & 0.976 & 0.977 & 0.993 & 0.986 & 1.148 \\
 & 3\_406 & 0.977 & 0.979 & 0.993 & 0.986 & 1.148 \\
 & 4\_401 & 0.975 & 0.976 & 0.993 & 0.985 & 1.152 \\
 & 5\_404 & 0.977 & 0.978 & 0.994 & 0.986 & 1.139 \\
 & Mean & 0.975 & 0.977 & 0.993 & 0.985 & 1.163 \\
\hline \hline
\end{tabular}
\end{table}
Table 7: Outlier rejection results of the 100-130 image trained model in prediction tasks with different numbers of images.
This outcome not only underscores the network's effective scene initialization capabilities but also highlights its robust reconstruction power. DeepAAT can thus handle scene prediction tasks several times larger than the scenes it was trained on. Since GPU memory consumption is typically higher during training than during testing for most deep learning tasks, this characteristic significantly enhances the practicality of DeepAAT, making it a robust solution for large-scale applications.

Figure 9: Predicted scenes without outlier rejection (left) and with outlier rejection (right).

Figure 10: BA results without outlier filtering (left) and with outlier filtering (right).

Figure 11: Pose prediction results of the 100-130 image trained model in prediction tasks with different numbers of cameras.

\begin{table}
\begin{tabular}{c c c c c c c c c}
\hline \hline
\multirow{2}{*}{Cam num} & \multirow{2}{*}{Scene} & \multirow{2}{*}{IPE/m} & \multicolumn{3}{c}{Results of AAT} & \multicolumn{3}{c}{Results after BA} \\
\cline{4-9}
 & & & \(\downarrow\)RPE/pix & \(\downarrow\)PE/m & \(\downarrow\)RE/\({}^{\circ}\) & \(\downarrow\)RPE/pix & \(\downarrow\)PE/m & \(\downarrow\)RE/\({}^{\circ}\) \\
\hline
\multirow{6}{*}{30-50} & 1\_45 & 5.309 & 56.545 & 4.719 & 2.055 & 0.466 & 2.144 & 0.032 \\
 & 2\_42 & 4.951 & 48.010 & 4.071 & 1.221 & 0.473 & 2.291 & 0.029 \\
 & 3\_50 & 5.245 & 50.903 & 3.983 & 1.701 & 0.475 & 2.208 & 0.038 \\
 & 4\_34 & 4.859 & 52.996 & 4.562 & 2.386 & 0.447 & 1.172 & 0.035 \\
 & 5\_35 & 5.448 & 73.172 & 4.393 & 2.586 & 0.415 & 1.697 & 0.030 \\
 & Mean & 5.162 & 56.325 & 4.346 & 1.990 & 0.455 & 1.902 & 0.033 \\
\hline
\multirow{6}{*}{400-430} & 1\_420 & 5.009 & 63.770 & 5.512 & 1.922 & 0.475 & 3.132 & 0.046 \\
 & 2\_428 & 5.111 & 56.347 & 4.928 & 1.884 & 0.482 & 3.023 & 0.029 \\
 & 3\_406 & 5.046 & 54.624 & 4.774 & 1.878 & 0.483 & 3.164 & 0.028 \\
 & 4\_401 & 4.919 & 59.247 & 4.905 & 1.872 & 0.486 & 3.270 & 0.041 \\
 & 5\_404 & 4.935 & 59.912 & 5.254 & 1.862 & 0.486 & 3.148 & 0.034 \\
 & Mean & 5.004 & 58.780 & 5.075 & 1.884 & 0.482 & 3.147 & 0.036 \\
\hline \hline
\end{tabular}
\end{table}
Table 8: Pose prediction results of the 100-130 image trained model in prediction tasks with different numbers of cameras.

## 6 Conclusion

AAT of UAV images has gained widespread adoption in 3D reconstruction, favored for its flexibility and cost-effectiveness. However, challenges persist: incremental AAT methods struggle with low reconstruction efficiency, global AAT methods grapple with subpar robustness and scene integrity, and deep learning-based algorithms often falter when processing a vast number of images. To overcome these challenges, we introduce DeepAAT, a novel approach designed to enhance the efficiency of UAV AAT while maintaining the accuracy and completeness of the reconstructed scenes. Our experiments demonstrate that DeepAAT's time efficiency outstrips incremental algorithms by hundreds of times and global algorithms by tens of times. In the near future, we will extend DeepAAT to image sets without GPS information.

## 7 Acknowledgment

This study was jointly supported by the National Natural Science Foundation Project (No. 42201477, No. 42130105).

## References

* Beder, C., Steffen, R., 2006. Determining an initial image pair for fixing the scale of a 3D reconstruction from an image sequence, in: Joint Pattern Recognition Symposium, Springer. pp. 657-666.
* Bhowmick, B., Patra, S., Chatterjee, A., Govindu, V.M., Banerjee, S., 2017. Divide and conquer: A hierarchical approach to large-scale structure-from-motion. Computer Vision and Image Understanding 157, 190-205.
* Chen, Y., Shen, S., Chen, Y., Wang, G., 2020. Graph-based parallel large scale structure from motion. Pattern Recognition 107, 107537.
* Dai, Y., Li, H., He, M., 2013. Projective multiview structure and motion from element-wise factorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 35, 2238-2251.
* Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., Sattler, T., 2019. D2-Net: A trainable CNN for joint detection and description of local features. arXiv preprint arXiv:1905.03561.
* Govindu, V.M., 2004. Lie-algebraic averaging for globally consistent motion estimation, in: Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004), IEEE, pp. I-I.
* Gu, X., Yuan, W., Dai, Z., Tang, C., Zhu, S., Tan, P., 2021. DRO: Deep recurrent optimizer for structure-from-motion. arXiv preprint arXiv:2103.13201.
* Hartford, J., Graham, D., Leyton-Brown, K., Ravanbakhsh, S., 2018. Deep models of interactions across sets, in: International Conference on Machine Learning, PMLR. pp. 1909-1918.
* Hartley, R.I., Sturm, P., 1997. Triangulation. Computer Vision and Image Understanding 68, 146-157.
* Hasheminasab, S.M., Zhou, T., Lin, Y.C., Habib, A., 2022. Linear feature-based triangulation for large-scale orthophoto generation over mechanized agricultural fields. IEEE Transactions on Geoscience and Remote Sensing 60, 1-18.
* He, F., Habib, A., 2018. Three-point-based solution for automated motion parameter estimation of a multi-camera indoor mapping system with planar motion constraint. ISPRS Journal of Photogrammetry and Remote Sensing 142, 278-291.
* He, K., Zhang, X., Ren, S., Sun, J., 2016. Deep residual learning for image recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778.
* Iglesias, J.P., Nilsson, A., Olsson, C., 2023. expOSE: Accurate initialization-free projective factorization using exponential regularization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8959-8968.
* Jiang, S., Jiang, C., Jiang, W., 2020. Efficient structure from motion for large-scale UAV images: A review and a comparison of SfM tools. ISPRS Journal of Photogrammetry and Remote Sensing 167, 230-251.
* Jiang, S., Jiang, W., Wang, L., 2021. Unmanned aerial vehicle-based photogrammetric 3D mapping: A survey of techniques, applications, and challenges. IEEE Geoscience and Remote Sensing Magazine 10, 135-171.
* Lepetit, V., Moreno-Noguer, F., Fua, P., 2009. EPnP: An accurate O(n) solution to the PnP problem. International Journal of Computer Vision 81, 155-166.
* Li, J., Yang, B., Chen, C., Habib, A., 2019. NRLI-UAV: Non-rigid registration of sequential raw laser scans and images for low-cost UAV LiDAR point cloud quality improvement.
ISPRS Journal of Photogrammetry and Remote Sensing 158, 123-145.
* Lowe, D.G., 2004. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision 60, 91-110.
* Magerand, L., Del Bue, A., 2017. Practical projective structure from motion (P2SfM), in: Proceedings of the IEEE International Conference on Computer Vision, pp. 39-47.
* Meier, L., Tanskanen, P., Heng, L., Lee, G.H., Fraundorfer, F., Pollefeys, M., 2012. PIXHAWK: A micro aerial vehicle design for autonomous flight using onboard computer vision. Autonomous Robots 33, 21-39.
* Mildenhall, B., Srinivasan, P.P., Tancik, M., Barron, J.T., Ramamoorthi, R., Ng, R., 2021. NeRF: Representing scenes as neural radiance fields for view synthesis. Communications of the ACM 65, 99-106.
* Moran, D., Koslowsky, H., Kasten, Y., Maron, H., Galun, M., Basri, R., 2021. Deep permutation equivariant structure from motion, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 5976-5986.
* Moulon, P., Monasse, P., Marlet, R., 2013a. Adaptive structure from motion with a contrario model estimation, in: Computer Vision-ACCV 2012: 11th Asian Conference on Computer Vision, Daejeon, Korea, November 5-9, 2012, Revised Selected Papers, Part IV 11, Springer. pp. 257-270.
* Moulon, P., Monasse, P., Marlet, R., 2013b. Global fusion of relative motions for robust, accurate and scalable structure from motion, in: Proceedings of the IEEE International Conference on Computer Vision, pp. 3248-3255.
* Schenk, T., 1997. Towards automatic aerial triangulation. ISPRS Journal of Photogrammetry and Remote Sensing 52, 110-121.
* Schonberger, J.L., Frahm, J.M., 2016. Structure-from-motion revisited, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4104-4113.
* Shi, J., Malik, J., 2000. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 22, 888-905.
* Shin, E.H., El-Sheimy, N., 2002. Accuracy improvement of low cost INS/GPS for land applications, in: Proceedings of the 2002 National Technical Meeting of the Institute of Navigation, pp. 146-157.
* Snavely, N., Seitz, S.M., Szeliski, R., 2006. Photo tourism: Exploring photo collections in 3D, in: ACM SIGGRAPH 2006 Papers, pp. 835-846.
* Snavely, N., Seitz, S.M., Szeliski, R., 2008. Skeletal graphs for efficient structure from motion, in: 2008 IEEE Conference on Computer Vision and Pattern Recognition, IEEE. pp. 1-8.
* Sturm, P., Triggs, B., 1996. A factorization based algorithm for multi-image projective structure and motion, in: Computer Vision-ECCV'96: 4th European Conference on Computer Vision, Cambridge, UK, April 15-18, 1996, Proceedings Volume II 4, Springer. pp. 709-720.
* Tanathong, S., Lee, I., 2014. Using GPS/INS data to enhance image matching for real-time aerial triangulation. Computers & Geosciences 72, 244-254.
* Tang, C., Tan, P., 2018. BA-Net: Dense bundle adjustment network. arXiv preprint arXiv:1806.04807.
* Triggs, B., McLauchlan, P.F., Hartley, R.I., Fitzgibbon, A.W., 2000.
Bundle adjustment - a modern synthesis, in: Vision Algorithms: Theory and Practice: International Workshop on Vision Algorithms, Corfu, Greece, September 21-22, 1999, Proceedings, Springer. pp. 298-372.
* Ulyanov, D., Vedaldi, A., Lempitsky, V., 2016. Instance normalization: The missing ingredient for fast stylization. arXiv preprint arXiv:1607.08022.
* Wang, J., Zhong, Y., Dai, Y., Birchfield, S., Zhang, K., Smolyanskiy, N., Li, H., 2021. Deep two-view structure-from-motion revisited, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 8953-8962.
* Wei, X., Zhang, Y., Li, Z., Fu, Y., Xue, X., 2020. DeepSFM: Structure from motion via deep bundle adjustment, in: Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part I 16, Springer. pp. 230-247.
* Wu, P., Li, G., Li, T.H., 2022. MOAC: Multi-level perception optimizer based on dual augmented cost for structure-from-motion, in: 2022 IEEE 5th International Conference on Multimedia Information Processing and Retrieval (MIPR), IEEE. pp. 139-145.
* Xiao, Y., Li, L., Li, X., Yao, J., 2022. DeepMLE: A robust deep maximum likelihood estimator for two-view structure from motion, in: 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE. pp. 10643-10650.
* Xu, B., Zhang, L., Liu, Y., Ai, H., Wang, B., Sun, Y., Fan, Z., 2021. Robust hierarchical structure from motion for large-scale unstructured image sets. ISPRS Journal of Photogrammetry and Remote Sensing 181, 367-384.
* Zhong, J., Yan, J., Li, M., Barriot, J.P., 2023. A deep learning-based local feature extraction method for improved image matching and surface reconstruction from Yutu-2 PCAM images on the Moon. ISPRS Journal of Photogrammetry and Remote Sensing 206, 16-29.
* Zhou, G., Bao, X., Ye, S., Wang, H., Yan, H., 2020. Selection of optimal building facade texture images from UAV-based multiple oblique image flows. IEEE Transactions on Geoscience and Remote Sensing 59, 1534-1552.
* Zhou, T., Brown, M., Snavely, N., Lowe, D.G., 2017. Unsupervised learning of depth and ego-motion from video, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1851-1858.
* Zhu, S., Shen, T., Zhou, L., Zhang, R., Fang, T., Quan, L., 2017. Accurate, scalable and parallel structure from motion. Ph.D. thesis, Hong Kong University of Science and Technology.
* Zhu, S., Zhang, R., Zhou, L., Shen, T., Fang, T., Tan, P., Quan, L., 2018. Very large-scale global SfM by distributed motion averaging, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4568-4577.
* Zhuang, B., Chandraker, M., 2021. Fusing the old with the new: Learning relative camera pose with geometry-guided uncertainty, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 32-42.
Automated Aerial Triangulation (AAT), aiming to restore image poses and reconstruct sparse points simultaneously, plays a pivotal role in earth observation. AAT has evolved into a fundamental process widely applied in large-scale Unmanned Aerial Vehicle (UAV) based mapping. However, classic AAT methods still face challenges such as low efficiency and limited robustness. This paper introduces DeepAAT, a deep learning network designed specifically for AAT of UAV imagery. DeepAAT considers both the spatial and spectral characteristics of the imagery, enhancing its capability to resolve erroneous matching pairs and accurately predict image poses. DeepAAT marks a significant leap in AAT efficiency, ensuring thorough scene coverage and precision: its processing speed outpaces incremental AAT methods by hundreds of times and global AAT methods by tens of times, while maintaining a comparable level of reconstruction accuracy. Additionally, DeepAAT's scene clustering and merging strategy facilitates rapid localization and pose determination for large-scale UAV images, even under constrained computing resources. The experimental results demonstrate that DeepAAT substantially improves over conventional AAT methods, highlighting its potential for increased efficiency and accuracy in UAV-based 3D reconstruction tasks. To benefit the photogrammetry community, the code of DeepAAT will be released at: [https://github.com/WdW-US110/DeepAAT](https://github.com/WdW-US110/DeepAAT).

Keywords: Automated Aerial Triangulation (AAT); Unmanned Aerial Vehicle (UAV); Structure from Motion (SfM); Orientation
# The miniSLR: A low-budget, high-performance satellite laser ranging ground station

Daniel Hampf\({}^{1,2}\), Felicitas Niebler\({}^{1}\), Tristan Meyer\({}^{1}\), Wolfgang Riede\({}^{1}\)

\({}^{1}\)Institute of Technical Physics, German Aerospace Center, Pfaffenwaldring 38-40, Stuttgart, 70569, BW, Germany.

\({}^{2}\)DiGOS Potsdam GmbH, Telegrafenberg, Potsdam, 14473, BB, Germany.

*Corresponding author(s). E-mail(s): [email protected];

## 1 Introduction

Satellite laser ranging (SLR) is a powerful tool for geodesy, mission support and fundamental science (Pearlman et al. (2019)). In the future, it may also be used for space situational awareness, precise orbit determination and conjunction assessment (Hampf et al. (2021)). However, the effort to construct and operate an SLR ground station is considerable and poses an entry barrier for new users and applications. Thus, the existing SLR network (see the International Laser Ranging Service, ILRS (2023); Pearlman et al. (2019)) still suffers from significant gaps in global coverage, especially in the Global South (Otsubo et al. (2016)). But also elsewhere, the demand for new SLR stations is growing: An increasing number of SLR missions causes a high load on existing stations, which can only be met with a tight schedule and in favourable weather conditions. More stations may be needed in the future to satisfy all ranging requirements. Furthermore, some older stations are reaching the end of their lifetime and will need replacement in the coming years (Wilkinson et al. (2019)).

On the other hand, technical advances of the last twenty years have opened the possibility to construct much smaller, simpler and cheaper SLR ground stations. Compact and powerful lasers, better detectors, faster readout electronics and PCs, and inexpensive but precise direct drive telescope mounts are key technologies for this development. The goal of the miniSLR project has been to combine these novel technologies for the first time into a working prototype of a new generation of SLR ground station. At a fraction of the cost of a conventional SLR system, it is designed to reach the same performance in terms of precision, stability and tracking capabilities.

A first version of the miniSLR has been constructed and set up at the DLR (German Aerospace Center) in Stuttgart. In its current configuration, it commenced experimental operation in November 2022. It has been accepted into the ILRS as an engineering station, and data from measured passes have been uploaded to the European Data Center (EDC (2023)). Table 1 lists the station's ILRS IDs and coordinates.

This paper describes the technical design (Section 2) and results from the first six months of test operation (Section 3). The final Section 4 gives an outlook on the next steps in development and on the potential impact of this new development for the SLR and space geodesy community.

\begin{table}
\begin{tabular}{l l}
\hline \hline
System name & miniSLR \\
4-Character Code & SMIL \\
CDP System Number & 52 \\
CDP Occupation Number & 02 \\
IERS DOMES Number & 10916S001 \\
CDP Pad ID & 7816 \\
Location & Stuttgart, Germany \\
Latitude & 48.748893981739\({}^{\circ}\) N \\
Longitude & 9.102599520211\({}^{\circ}\) E \\
Elevation & 533.240 m \\
\hline \hline
\end{tabular}
\end{table}
Table 1: ILRS station parameters. The coordinates are approximate, based on a GNSS survey.
## 2 System set-up

### 2.1 Design innovations

This section highlights the main design features of the miniSLR that enable the reduction of size and complexity of the system. A full system overview is given in the subsequent Section 2.2.

#### 2.1.1 Transportability and small overall size

The main design goal of the miniSLR has been a significant reduction in the size of the system. Using a receive telescope of only 20 cm aperture and a small direct drive astronomy mount, the whole system could be integrated within and on top of an aluminium enclosure with a footprint of 130 x 230 cm (see Figure 1). At a weight of about 600 kg, the enclosure can be moved around on wheels and installed for operation on any flat and stable surface. The main advantages of this integrated design are:

* Lower production cost
* Lower maintenance cost
* The system can be integrated and validated at the factory before it is moved to its operation site, thus lowering the effort for installation and decreasing delays in the commissioning process.
* No need for civil engineering, building permits and construction works, thus significantly lowering the cost, effort and time needed for installation.
* Re-location can be achieved easily, if necessary.

Some of these advantages have already been demonstrated by the French Transportable Laser Ranging Station (FTLRS, Nicolas et al. (2001)).

Figure 1: The miniSLR prototype on the roof of the DLR institute building. The enclosure in the bottom contains most of the electronics and IT. The top compartments house the receive and transmit telescopes, the laser head, cameras, detectors and beam control optics.

#### 2.1.2 Fully sealed, no dome

The miniSLR system is fully sealed, thus avoiding the need for a dome with movable parts. This offers two main advantages: First, in case of a catastrophic failure (e.g. complete power loss or mechanical blockage), the system remains in an inherently safe state (i.e. protected from rain). Recovery and repair can be planned and conducted with much less urgency than in the case of a dome that can no longer be closed. Second, the whole system is air-conditioned and retains a constant temperature in all parts. Combined with the relatively short cable lengths, this increases the stability of the timing measurement.

#### 2.1.3 High repetition rate Q-switch laser

Due to the small aperture of the receive telescope, a rather high power laser is needed to achieve sufficient returns. On the other hand, relatively stringent size and weight limitations apply, since the laser head must be mounted in the top compartment. This was resolved by using high-repetition rate laser ranging (Hampf et al. (2019)) with a small Q-switched diode laser. In this context, it offers three advantages:

* Sufficient average power at a very small footprint: At a size of 12 cm x 8 cm x 4 cm, the laser offers a power of 5 W (100 µJ at 50 kHz).
* Due to the low pulse energy, single photon operation is inherently ensured without the need for additional attenuation components, thus further simplifying the design.
* The high repetition rate results in a high number of returns for most targets, which decreases the statistical error of the averaged data points. This enables sufficiently precise measurements despite the relatively long pulse duration of 500 ps (FWHM).

#### 2.1.4 Laser ranging at 1064 nm (near-infrared)

Using the Nd:YAG fundamental wavelength of 1064 nm for laser ranging has been discussed for many years (e.g. Volker et al.
(2013)), and has recently been implemented in a number of SLR systems (Courde et al. (2017), Xue et al. (2016), Eckl et al. (2017), IZN-1 at Tenerife). Nevertheless, most systems still use frequency doubling to obtain green laser light at 532 nm. This choice is primarily due to the available receive detectors: Up until a few years ago, single photon detectors with picosecond timing precision had only been realized for the visible light spectrum (either photomultiplier tubes or silicon based Geiger mode avalanche photo diodes). Today, however, InGaAs SPADs (single photon avalanche diodes based on indium gallium arsenide) achieve sufficient timing precision and good sensitivity at 1064 nm. With these, ranging at the fundamental Nd:YAG wavelength becomes more favourable for a number of reasons:

* Avoiding conversion losses of about a factor of four, thus generating more usable power with the same laser (important to keep laser size and weight small)
* Avoiding the complexity of additional frequency doubling optics
* Slightly better atmospheric transmission
* Less noise from sky brightness, especially in daylight (blue sky)

### 2.2 Hardware set-up

This section gives a brief overview of the hardware set-up of the miniSLR. References to "item NN" relate to the indicators in Figures 2 and 3.

#### 2.2.1 Tracking and mechanics

Tracking is realized using an Astelco NTM-600 direct drive mount. It allows programming of custom trajectories, which are followed with high timing precision owing to an internal GNSS clock. Thus, sufficiently accurate tracking of satellites with good predictions can be achieved. For simplicity, the pointing model is applied by the main control software (see Section 2.3) rather than the mount's own firmware. For rain protection, the mount is wrapped in a Telegizmos telescope cover.

The optical set-up, including receive and transmit telescopes, is mounted on three optical breadboards installed on top of the mount. This enables a high degree of flexibility in the optical configuration, which is of paramount importance for an experimental prototype.

Figure 2: The central compartment of the miniSLR head: (1) Laser head; (2) Thermo-electric elements (TEC) for laser temperature control; (3) start photodiode; (4) dichroic beam splitter; (5) tracking camera; (6) fibre coupling for single photon detector; (7) programmable USB hubs; (8) 12 V power distribution; (9) counter weights to balance the mount elevation axis.

#### 2.2.2 Transmit path

Laser pulses are produced by a Standa MOPA-4 diode laser (item 1). Its specifications are ideally suited for a small SLR system: The tiny laser head can easily be installed on the moving platform. A pulse duration of 500 ps FWHM is sufficiently short to achieve a high precision in averaged normal point data (see Sections 2.4.1 and 2.4.2 for a more detailed discussion of ranging precision). A pulse energy of 85 µJ is enough to achieve returns from all relevant satellites, and a repetition rate of 50 kHz provides a high number of data points for effective averaging (see also Section 2.4.3).

Temperature control of the laser head is realized by two Thorlabs PTC1/M thermo-electric elements, which keep the laser head at a constant 22 °C and can dissipate up to 35 W of heat (item 2). Pulse emission times are recorded with a standard photodiode (Thorlabs DET08C/M) using a fraction of the light leaking through the first mirror (item 3). From the mirror, the light is guided towards the transmitter compartment (item 10).
For calibration, the laser power needs to be strongly attenuated in order not to saturate the detector. This is achieved by a flip mirror carrying a reflective neutral density or laser line filter (item 11). It is closed by default and only opens when the system is tracking a satellite. While closed, the laser power is directed into a power meter (item 12). The laser average power is thus monitored and recorded each time a calibration run is performed. The subsequent safety shutter (item 13) is also closed by default and opens only for ranging measurements. It is spring loaded and requires a regular "open" signal from the software to open and remain open (for more information on the safety system, see Section 2.2.5).

Figure 3: The transmitter compartment of the miniSLR head: (10) incoming beam from central compartment; (11) attenuator flip mirror; (12) laser power meter; (13) safety shutter; (14) motorized beam steering mirror; (15) dichroic mirror guiding 1064 nm towards exit aperture; (16) transmitter camera; (17) beam expander; (18) backscatter camera; (19) aircraft camera; (20) power distribution; (21) thermometer / hygrometer.

The motorized beam steering mirror (item 14) is needed to fine-control the laser beam direction in relation to the main system pointing. It is moved by two Thorlabs Z806 motors, controlled by the main software. The dichroic mirror (item 15) guides the infrared laser light towards the exit aperture, while incoming light passes through to the transmitter camera (item 16). This camera records the field of view seen by the beam expander and can be used during the initial alignment of the system. The beam expander itself (item 17) is a simple Galileo type telescope with a one-inch negative and a three-inch positive lens. It increases the beam diameter by about a factor of five, thus decreasing the beam divergence and improving the beam pointing resolution. The backscatter camera (item 18) is used to monitor the laser beam in the atmosphere and to fine-align it towards the target.

#### 2.2.3 Receive path

The receive path starts with a 20 cm aperture Newton telescope (ASA H8). A dichroic mirror at its exit port (item 4) guides visible light towards the main tracking camera (item 5), while transmitting the returning infrared laser light towards the single photon receiver. Two bandpass filters block light from 900 nm to 1700 nm, with a 1 nm wide window at 1064 nm. While not strictly required at night, these filters are essential for daylight ranging. For simplicity, they are permanently installed and not removed for night time ranging. The light is coupled into a 105 µm multi-mode optical fibre connected to an Aurea SPD-OEM-NIR single photon detector, which generates the stop signal for the ranging measurement.

Table 2 summarizes the specifications of the optical system (transmit and receive). These values are used in Section 2.4.3 to calculate the expected photon return rates.

#### 2.2.4 Timing measurement and control

System-wide frequency and time synchronisation is based on a Meinberg GPS-180 GNSS-disciplined atomic clock, which provides a 10 MHz sine wave, a 1 PPS signal, and the date and time over a serial interface.
\begin{table}
\begin{tabular}{r r l}
\hline \hline
Transmit aperture & 7.5 cm & \\
Beam diameter & 5 cm & \\
Receive aperture (nominal) & 20 cm & \\
Obscuration & 25\% & due to secondary mirror in telescope \\
Laser pulse energy & 85 µJ & (measured) \\
Laser repetition rate & 50 kHz & (measured) \\
Operating wavelength & 1064 nm & \\
Beam divergence & 50 µrad & (half angle, estimated) \\
Beam stability & 50 µrad & (half angle, estimated) \\
Transmitter efficiency & 0.6 & (measured) \\
Receiver efficiency & 0.1 & (estimated, losses e.g. in band-pass filter) \\
Detector efficiency & 30\% & (given by manufacturer) \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Main specifications of the miniSLR optical system. These values are also used for the calculation of the expected return rates (Section 2.4.3).

The timestamps are recorded using a Swabian Instruments Time Tagger Ultra with a nominal timing precision of 9 ps. In addition to the start, stop and PPS signals, the laser trigger and detector gate are recorded on separate channels for debugging and monitoring. Laser triggers and detector gate signals are generated by two Swabian Instruments Pulse Streamers. While one pulse generator runs a steady PPS-aligned 50 kHz trigger sequence for the laser, the other one produces a dynamically calculated gating sequence for the single photon detector, based on the expected time of flight to the target. Figure 4 shows the electronics installed in the cabinet.

Timing calibration is done roughly once per hour during regular operation. For this, the attenuated laser beam is guided directly towards the receiver aperture via a 45\({}^{\circ}\) mirror and a 45\({}^{\circ}\) diffuse surface. The nominal range to the calibration target is given by the distance from the intersection of the mount axes to the virtual intersection of the two 45\({}^{\circ}\) surfaces, and has been measured to be 1.504 m.

#### 2.2.5 Aircraft and laser safety

The output beam power is significantly above the maximum permissible exposure (MPE) defined in the laser safety norm EN 60825-1:2014. The miniSLR is thus classified as a class 4 laser system. While the beam expander reduces the power and energy density enough to avoid skin burns outside of the device, the limit for eye injuries is exceeded by about a factor of 200 at the exit aperture. Assuming a diffraction-limited beam divergence of about 50 µrad, the laser beam becomes eye-safe at a distance of about 10 km.

Figure 4: The inside of the cabinet houses the complete electronics for the system. Left rack, top to bottom: Laser controller, PC, mount controller. Right rack: Internet router, trigger and gate controller (Swabian Instruments), atomic clock (Meinberg), event timer (Swabian Instruments), network switch, two 12 V power supplies.

Laser emission is automatically shut off by the attenuator and safety shutter, unless a "clear" signal is given from the following checks (conducted by a special module in the control software):

* Pointing must be above the azimuth-dependent minimum elevation, which traces the adjacent buildings and obstacles. Below the elevation mask, the safety shutter can be opened, but only if the eye-safe attenuation is activated.
This is mainly used for time calibration.
* The telescope must be tracking a target, not slewing or idling. The target must be whitelisted for laser ranging.
* No aircraft may be within 20\({}^{\circ}\) of the beam, within a distance of up to 20 km. Aircraft positions are received via a data stream from the German Air Traffic Authorities. For cross-checking, a local ADS-B receiver (Jetvision Radarcape) and a thermal infrared camera (FLIR Tau-2, item 19) are installed.
* The operation status code must be nominal for a number of critical components, such as the shutter and attenuator. Telescope pointing information must be up to date.

To ensure workplace safety, warning lamps, an emergency stop button, access control and laser hazard signs are installed.

#### 2.2.6 Slow control

To facilitate remote operation and simple troubleshooting, most power lines can be switched independently by software. This can be used e.g. to remotely restart components that are not working nominally. Programmable USB hubs (Acroname 3P) are used to connect devices to the PC; they allow detailed monitoring of each USB port (data, power) and power-cycling of USB-powered devices by software. Temperature, humidity and air pressure are continuously recorded inside and outside of the device for monitoring, safety and SLR data evaluation.

### 2.3 Software

To operate the miniSLR, the control software OOOS (orbital objects observation software), developed at the institute for the previous laser ranging station "Uhlandshöhe", has been refined and improved. It is written almost entirely in Python to facilitate rapid development and easy debugging. Exploiting multiprocessing and fast computing libraries (e.g. numpy), the software can handle all control tasks in real time on a standard Linux or Windows PC. Special focus has been put on designing a clear and tidy graphical user interface (GUI), thus enabling fast training of observers and efficient work. A number of automation functions take over most of the standard tasks; however, completely "hands-off" operation has not yet been achieved for laser ranging.

A range of processing nodes (also called daemons) take over different blocks of control. The nodes are loosely coupled to each other via TCP/IP. The GUI acts as a central node, orchestrating the work of the daemon nodes and channelling all user interaction (input and output). The daemon nodes connect to an abstract hardware layer, which in turn implements the actual device interfaces based on the current configuration settings. Thus, changes to the system's hardware can easily be incorporated into the software. Figure 5 shows a screenshot of OOOS running during an actual SLR observation.

### 2.4 Expected performance

#### 2.4.1 Single shot precision

The timing uncertainty of a single range measurement ("single shot precision") is given by the timing uncertainties of all contributing components:

\[\sigma_{\mathrm{single}}=\sqrt{\sigma_{\mathrm{L}}^{2}+\sigma_{\mathrm{D1}}^{2}+\sigma_{\mathrm{D2}}^{2}+\sigma_{\mathrm{ET}}^{2}+\sigma_{\mathrm{S}}^{2}} \tag{1}\]

With:

* \(\sigma_{\mathrm{L}}\), timing uncertainty due to the laser pulse duration
* \(\sigma_{\mathrm{D1}}\), time jitter of the start detector (photodiode)
* \(\sigma_{\mathrm{D2}}\), time jitter of the receive detector (SPAD)
* \(\sigma_{\mathrm{ET}}\), time uncertainty of the event timer
* \(\sigma_{\mathrm{S}}\), uncertainties caused by the design of the satellite retroreflectors.
These are not subject to the miniSLR design and are therefore not considered here. For most satellites, the satellite signature is very small compared to the other contributing factors.

When evaluating equation 1, one has to observe that timing uncertainties are given in different metrics, the most common being FWHM (full width half maximum), RMS (root mean square), or the standard deviation \(\sigma\) of a normal distribution. Assuming that all uncertainties are normally distributed (which is usually a reasonable approximation), it is possible to relate these quantities to each other via:

\[\sigma = 0.42\times\text{FWHM} \tag{2}\]
\[\sigma \approx \text{RMS} \tag{3}\]

Figure 5: Screenshot of OOOS during a ranging observation of satellite TOPEX.

Using the values specified for the components used, an expected single shot precision of 39 mm is derived (see Table 3).

#### 2.4.2 Normal point precision

In post-processing, individual range measurements are averaged into normal points (NPTs). Recommended normal point durations are given by the ILRS and range from 5 s to 300 s, depending on satellite altitude and expected return strength. Assuming a purely statistical error distribution, the precision of a normal point \(\sigma_{\text{NPT}}\) with \(N\) individual data points is given by

\[\sigma_{\text{NPT}}=\frac{\sigma_{\text{single}}}{\sqrt{N}} \tag{4}\]

Given the single shot precision of 39 mm, averaging about 1,500 data points in a normal point should yield a normal point precision of 1 mm. It must be pointed out that in reality the error distribution will not be purely statistical; systematic errors also contribute. The main issue is drifts of timing delays (e.g. from photon detection to electrical signal) if they occur on timescales of minutes. Drifts on longer timescales are eliminated by the regular timing calibration. The size of these contributions cannot be inferred from device specifications, but they are included in the experimental system validation (see Section 3.5).

\begin{table}
\begin{tabular}{r r l}
\hline \hline
Laser & 210 ps & (given as 500 ps FWHM) \\
Start detector & 40 ps & \\
Receive detector & 150 ps & (worst case, probably better) \\
Event timer & 9 ps & \\
\hline
Single shot precision \(\sigma_{\text{single}}\) & 261 ps & (equivalent to 39 mm) \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Timing uncertainty budget for the miniSLR components. The single shot precision is calculated using equation 1.

#### 2.4.3 Return rates

The system is designed to always operate in single photon mode, i.e., for each outgoing laser pulse the detector should see no more than one photon. This avoids systematic time shifts in the receive detector due to multi-photon signals. As the number of photons actually returning follows a Poisson distribution, this can be achieved by ensuring a mean return quota (received photons per outgoing pulse) of much less than one, ideally below 0.1. On the other hand, the mean number of photons must not be too small, in order to still produce a discernible signal. The actual limit is hard to estimate and depends on the background brightness and a number of system specifications. For the described miniSLR set-up, the night-time limit is around \(5\times 10^{-5}\) (equivalent to a return rate of roughly \(2.5\,\mathrm{Hz}\)).
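For orientation, the precision budget of Sections 2.4.1 and 2.4.2 and the single photon criterion just described can be reproduced in a few lines; a minimal sketch (component values from Table 3; unrelated to the actual OOOS code):

```python
import math

# Timing uncertainty budget, equation (1); FWHM -> sigma via equation (2).
contributions_ps = [0.42 * 500, 40, 150, 9]  # laser, start det., SPAD, event timer
sigma_single_ps = math.sqrt(sum(s**2 for s in contributions_ps))
c = 299_792_458.0
sigma_single_mm = sigma_single_ps * 1e-12 * c / 2 * 1e3  # two-way time -> one-way range
print(f"single shot: {sigma_single_ps:.0f} ps = {sigma_single_mm:.1f} mm")  # 261 ps / 39 mm

# Normal point averaging, equation (4).
for n in (300, 1500):
    print(f"NPT with {n} points: {sigma_single_mm / math.sqrt(n):.2f} mm")

# Single photon criterion: fraction of detections caused by more than one
# photon when the mean return quota is mu (Poisson statistics).
for mu in (0.1, 5e-5):
    p_ge1 = 1 - math.exp(-mu)
    p_ge2 = p_ge1 - mu * math.exp(-mu)
    print(f"quota {mu:g}: multi-photon fraction {p_ge2 / p_ge1:.2%}")
```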
The expected return quotas for different targets, atmospheric conditions and measurement geometries can be calculated using the modified radar link equation from Degnan (1993). It gives the mean number of expected photoelectrons per laser pulse as

\[n_{\mathrm{pe}}=\left(E_{\mathrm{T}}\frac{\lambda}{hc}\right)G_{\mathrm{t}}\,\sigma_{\mathrm{ocs}}\,\left(\frac{1}{4\pi R^{2}}\right)^{2}A_{\mathrm{r}}\,T_{a}^{2}\,\eta_{\mathrm{t}}\,\eta_{\mathrm{r}}\,\eta_{\mathrm{d}} \tag{5}\]

with:

* \(E_{\mathrm{T}}\), Laser pulse energy
* \(\lambda\), Laser wavelength
* \(G_{\mathrm{t}}\), Gain, a function of beam divergence and pointing stability
* \(\sigma_{\mathrm{ocs}}\), Optical cross section of the satellite reflector
* \(\left(\frac{1}{4\pi R^{2}}\right)^{2}\), Attenuation at distance \(R\)
* \(A_{\mathrm{r}}\), Aperture of the receive telescope
* \(T_{a}^{2}\), Atmospheric transmission
* \(\eta_{\mathrm{t}}\), Efficiency of the transmitter optics
* \(\eta_{\mathrm{r}}\), Efficiency of the receiver optics
* \(\eta_{\mathrm{d}}\), Efficiency of the detector

As some of the factors in equation 5 can only be estimated (e.g. beam pointing stability) or are subject to frequent changes (e.g. atmospheric conditions), the resulting numbers are indicative only. A model based on this equation, but including elevation-dependent atmospheric effects, has been developed and experimentally verified by Meyer (2022). Using this model and the miniSLR specifications from Table 2, expected return quotas for a few important satellites have been estimated (see Table 4). The numbers decrease rapidly with satellite altitude, owing to the \(1/R^{4}\) dependence in the link budget. While a very strong signal is expected from most LEO satellites, high satellites (especially Galileo) appear to be quite challenging. For low satellites, the return quota may even exceed the desired single photon maximum of 10%, if these theoretical values can indeed be achieved. In this case, the beam steering could be used to slightly misalign the beam in order to reduce the return quota.

It should be pointed out that the optical cross sections used here are lower than the theoretical values from Arnold (2003). Also, some of the miniSLR specifications are rather cautious, e.g. the receiver efficiency of 10%. It is thus conceivable that higher return rates than calculated here are possible in reality, especially under favourable atmospheric conditions. In Section 3.3, some experimentally measured return rates are compared with the values estimated here.

## 3 Validation results and discussion

### 3.1 Data census and processing

The results presented here are based on observations made between November 2022 and April 2023. Figures 6 to 8 show some typical examples of the measurements taken. Post-processing of the data is done manually using OOOS. Data filtering and normal point generation are implemented according to the algorithm developed by the ILRS (Sinclair (2012)). A rejection interval of 2.5 times the RMS is used, as recommended for single photon systems. The normal points are indicated by red crosses in the plots. Summary measurement reports are generated in CRD (consolidated ranging data) format and used for further analysis.

### 3.2 Tracking accuracy

Accurate satellite tracking is fundamental to productive SLR operation, but at the same time challenging for a relatively small mount.
To allow for blind tracking (without visual acquisition), the tracking accuracy should be not much worse than the laser beam divergence of 10 arcsec (\(\approx 50\) µrad). For the generation of pointing models, about 50-70 stars are recorded automatically. The process is slightly complicated by the fact that the mount cannot move the full 360\({}^{\circ}\) in azimuth and has to use both pier sides (i.e. elevations above 90\({}^{\circ}\)) to cover the full sky. For each star, the offset from the camera target point is recorded. All offsets are subsequently fitted by an analytical model adapted from Wallace (2016). With this, a pointing accuracy of better than 10 arcsec is achieved on stars as well as on satellites with accurate predictions, including fast LEO satellites. Ranging with blind tracking has been demonstrated successfully in a number of cases. Unfortunately, the pointing accuracy quickly deteriorates, and blind tracking usually becomes impossible after a few weeks. The reason for this has not yet been determined; it may be due either to instability of the mounting (on a gravel bed on the roof of a six-storey building) or to thermal effects in the mechanical mountings of the optical system.

For the current study, most passes have been recorded at night, with visual guidance. Closed-loop tracking is performed automatically by the software if the target is visible. A few passes have been recorded with blind tracking, or with partial blind tracking (visual acquisition before entering the Earth's shadow).

\begin{table}
\begin{tabular}{r r r r r}
\hline
Satellite & NPT duration & Optical cross section & Return quota & Returns / NPT \\
\hline
Grace-FO & 5 s & \(0.6\times 10^{6}\,\mathrm{m}^{2}\) & 70\% & 175,000 \\
Ajisai & 30 s & \(6.1\times 10^{6}\,\mathrm{m}^{2}\) & 18\% & 270,000 \\
Stella & 30 s & \(0.1\times 10^{6}\,\mathrm{m}^{2}\) & 3.7\% & 55,000 \\
Lares & 30 s & \(0.28\times 10^{6}\,\mathrm{m}^{2}\) & 1\% & 15,000 \\
Lageos & 120 s & \(1.24\times 10^{6}\,\mathrm{m}^{2}\) & 0.02\% & 1,200 \\
Etalon & 300 s & \(23\times 10^{6}\,\mathrm{m}^{2}\) & 0.004\% & 600 \\
Galileo & 300 s & \(3.1\times 10^{6}\,\mathrm{m}^{2}\) & 0.0003\% & 45 \\
\hline
\end{tabular}
\end{table}
Table 4: Satellite optical cross sections, return quotas and number of returns per normal point (NPT) expected for different satellites, based on the model referenced in Section 2.4.3.

Figure 6: Ranging plot of Ajisai. For clarity, only every 100th data point is shown. Normal points are marked in red. They contain between 40,000 and 200,000 individual ranges.

Figure 7: Ranging plot of Lageos 1. For clarity, only every 3rd data point is shown. Normal points contain about 2,000 ranges each.

Daylight tracking has been attempted once, but without success. Detector rates, however, seemed to be at a manageable level, and calibration measurements could be recorded without issues. The failure to see satellite returns is believed to be due to insufficient pointing accuracy. Improving the pointing stability will be an important task in the further development of the miniSLR, to enable blind tracking by day and night.

### 3.3 Return rates

Figure 9 shows the measured return numbers per normal point for some satellites. By and large, the measured numbers correspond roughly to the theoretical expectations from Section 2.4.3 (blue crosses). As expected from the modelling, large spreads between high and low data yields exist.
These can be attributed to differences in tracking geometry, elevation angle, atmospheric transmission (local thin clouds), and changes in tracking accuracy. Except for Lageos, the experimental values are somewhat lower than the calculated values, which are already at the low end of the theoretical expectations. This may indicate that system losses are higher than assumed, and higher return rates may be achievable e.g. by better alignment of the optics.

### 3.4 Precision

During post-processing, the single shot RMS is calculated for each normal point. In calibration runs, the values are typically between 210 ps and 230 ps. In satellite tracks, the RMS depends slightly on the strength of the signal, probably due to imperfections in the data filtering. Figure 10 shows the normal point RMS values for a selection of satellites. Median values range from 220 ps to 330 ps. This is well compatible with the theoretical expectation of 261 ps / 39 mm (see Section 2.4.1).

Figure 8: Ranging plot of Etalon 2. Normal points contain around 800 ranges each.

Figure 9: Measured return rates, given in data points per normal point (NPT). Normal point durations are 5 s for Grace-FO, 30 s for Ajisai, Lares and Stella, and 120 s for the Lageos satellites. The boxes show the range from the first to the third quartile of the return numbers; the horizontal line denotes the median. Outliers beyond twice the inter-quartile range are shown as individual circles. Blue crosses mark the theoretical expectations from Table 4.

Figure 10: Measured RMS of the data within a normal point (NPT). The theoretical expectation for this value is 261 ps (blue dashed line, see Table 3). See the caption of Figure 9 for further explanation.

As can be seen from Figure 9, the expected minimum of 1,500 data points per normal point is often achieved. For some Lageos points, and usually for high satellites (such as Etalon 2, shown in Figure 8), it can be lower. A quality cut has been applied at 300, i.e. NPTs with fewer than 300 data points are discarded. Such low-count normal points may occur e.g. at the beginning or end of measurements, during interruptions due to aircraft warnings, from imperfect tracking, or under scattered clouds. Ideally, one would like to raise the quality cut to 1,500 points to achieve the envisaged averaging effect (see Section 2.4.2); however, this would have eliminated too much data in the present study. With a higher quality cut, a slightly improved normal point precision may be achieved.

### 3.5 Accuracy

To obtain a realistic estimate of the system performance, the data has kindly been analysed by Toshimichi Otsubo from Hitotsubashi University, using his rapid quality control software (Otsubo et al. (2019)). It generates global fits for the orbits of all considered satellites, based on data from all SLR stations that have uploaded measurements for the time period in question. Thus, the analysis provides a measure of one station's accuracy relative to all other stations in the network. The most relevant results of this analysis are:

* Station coordinates
* Station range bias (adjusted once for each station)
* Pass range bias (adjusted for every pass)
* Normal point precision, estimated from the scatter of the normal points around the fit.

For the analysis of the miniSLR accuracy, only data taken during 5 nights from February 7 to 13, 2023, is considered. Five satellites are included in the analysis: Lageos 1 and 2, Ajisai, Stella and Lares. The data comprises 15 passes with a total of 163 normal points of these satellites.
Based on this limited dataset, a first estimate of the experimental performance of the system is derived. The coordinates of the station invariant point (the intersection of the two mount axes) have been measured by a surveyor in a local datum and transformed to ITRF2014 Cartesian coordinates. They agree with the coordinates determined from the SLR data to within 20 cm (see Table 5). The reason for this rather large deviation may be inaccuracies in the conversion of the local datum into ITRF2014, or insufficient SLR data for a very accurate position estimate.

\begin{table}
\begin{tabular}{r r r}
\hline \hline
Coordinate & Local survey & SLR analysis \\
\hline
x & 4,160,755.242 m & 4,160,755.135 m \\
y & 666,638.631 m & 666,638.658 m \\
z & 4,772,593.195 m & 4,772,593.327 m \\
\hline \hline
\end{tabular}
\end{table}
Table 5: Station coordinates in ITRF2014, as measured by a local surveyor and as determined from SLR data from February 2023.

The station range bias is fitted as \(3.4\,\mathrm{cm}\). While already encouragingly small, this number is still larger than expected from the system specifications. Possible reasons could be a wrongly measured distance to the local calibration target, or systematic biases between calibration and actual satellite measurements (e.g. caused by the attenuator). The pass-by-pass variation of the range bias is found to have an RMS of \(7.4\,\mathrm{mm}\). This can be compared to the pass range bias RMS of the other stations that have supplied data in the same timespan for the same satellites (see Figure 11). While the best stations in the network, like Graz, Yarragadee or Wettzell, achieve values below \(3\,\mathrm{mm}\), other stations are at the same order of magnitude as the Stuttgart miniSLR, or worse.

The normal point precision, i.e. the scatter of the normal points around the fitted (and bias-corrected) orbits, averages to \(4\,\mathrm{mm}\). Again, the comparison with other stations shows that the miniSLR achieves a satisfactory performance (Figure 12). While the low number of data points limits the statistical significance of these results, they are a good indication of the possible performance of the system. It seems fair to claim that the miniSLR can indeed reach a similar accuracy as conventional stations, and can thus be a valuable tool for geodesy and other SLR applications. Long-term stability and more statistical significance of the results will require more data, which will be collected and supplied to the ILRS in the future.

Figure 11: RMS of the pass range biases according to the data analysis described in Section 3.5. It displays the changes in the pass-to-pass range bias offsets applied to match the global orbital fits.

## 4 Conclusion and Outlook

In the scope of the work described here, a fully functional prototype of a minimal SLR system has been constructed, commissioned and tested. The validation was done on existing ILRS-supported satellites at all relevant altitudes and with different retroreflector configurations. The results indicate that ranging to most relevant targets can be performed with an accuracy comparable to existing, conventional SLR systems. Compared to state-of-the-art SLR systems, the miniSLR currently still lacks the ability to consistently range to Galileo satellites and to reliably perform blind ranging (without visual acquisition). Also, daylight ranging has not yet been demonstrated.
The issues with blind tracking can probably be solved with a more rigid mechanical construction of the optical bench, or even just with a more suitable operating location. Presumably, this would also enable daylight ranging. While the effect of such improvements remains to be shown, the problems do not seem to be inherent to the minimal SLR concept. The issue of ranging to Galileo targets, on the other hand, is indeed connected to the small size of the receiver aperture. Both theoretical estimates and experimental results seem to indicate that, despite the rather high laser power of \(5\,\mathrm{W}\), not enough photons are received to reliably detect returns from these satellites. It may be possible to achieve a better sensitivity by improving the alignment of the optics, but this is uncertain. Other options, like a more powerful laser, can be considered as well, but would require a significant engineering effort.

Figure 12: Mean precision according to the data analysis described in Section 3.5. The mean precision is given as the scatter of the normal points around the fitted orbit, after application of a constant range bias and a pass-dependent range and time bias.

Thus, the current version of the miniSLR seems mainly suitable for the following applications:

* Geodetic measurements, especially at remote locations currently not well covered by the existing SLR network
* Supporting high-performance stations, relieving them of some of the daily tracking load
* Mission support for LEO satellites
* Conjunction assessment, if at least one object is equipped with a retroreflector
* Studies and experiments requiring a flexible SLR ground station

The Stuttgart miniSLR will continue to deliver data to the ILRS for further validation. As part of a research project, it will also be equipped with polarization optics to test the feasibility of satellite identification through polarizing retroreflectors (Bartels et al. (2022)). Additionally, it will be used to participate in laser ranging research projects. In parallel, a commercial version of the miniSLR will be designed and constructed by DiGOS Potsdam GmbH. Using a different mount, an improved mechanical set-up and a number of smaller modifications, it is expected to achieve an even better performance than the prototype described in this paper.

## Declarations

### Acknowledgements

The authors would like to acknowledge contributions to this project by

* former team member Ewan Schafer,
* former team member Paul Wagner,
* Robin Neumann and Luis Gentner, who supported the construction and data taking,
* the institute's mechanical, electronic and IT departments for their continued effort in supporting this project,
* Prof. Toshimichi Otsubo from Hitotsubashi University, Tokyo, for his continued support and encouragement.

### Supplementary information

The miniSLR design is partly patented. The name "miniSLR" is a registered trademark.

### Funding

The work described here was mainly funded by DLR as part of a technology transfer project.

### Conflict of interest

DH declares that he is involved in the commercial distribution of the miniSLR at DiGOS Potsdam GmbH and may profit from sales of the system.

### Availability of data

The normal point data generated and analysed for this paper is available on the EDC website (EDC (2023)). Raw data is available from the corresponding author on reasonable request.

### Code availability

The code used to process and analyse the data is part of the software package OOOS, which is licensed under GPLv3.
It is available from the corresponding author on reasonable request.

### Author contributions

DH co-developed the concept of the miniSLR, led the technical development and wrote the paper. FN worked on the technical implementation of the miniSLR and conducted measurements. TM conducted measurements, calculated theoretical return rates, and supported the software development. WR co-developed the concept and vision of the miniSLR and had the administrative lead of the project.

## References

* Pearlman et al. (2019) Pearlman, M., Arnold, D., Davis, M., _et al._: Laser geodetic satellites: a high-accuracy scientific tool. J Geod **93**, 2181-2194 (2019) [https://doi.org/10.1007/s00190-019-01228-y](https://doi.org/10.1007/s00190-019-01228-y)
* Hampf et al. (2021) Hampf, D., Riede, W., Bartels, N., et al.: A path towards low-cost, high-accuracy orbital object monitoring. 8th European Conference on Space Debris (2021)
* ILRS (2019) ILRS: Website. [https://ilrs.gsfc.nasa.gov/](https://ilrs.gsfc.nasa.gov/)
* Pearlman et al. (2019) Pearlman, M.R., Noll, C.E., Pavlis, E.C., _et al._: The ILRS: approaching 20 years and planning for the future. J Geod **93**, 2161-2180 (2019) [https://doi.org/10.1007/s00190-019-01241-1](https://doi.org/10.1007/s00190-019-01241-1)
* Otsubo et al. (2016) Otsubo, T., Matsuo, K., Aoyama, Y., et al.: Effective expansion of satellite laser ranging network to improve global geodetic parameters. Earth Planet Sp **68** (2016) [https://doi.org/10.1186/s40623-016-0447-8](https://doi.org/10.1186/s40623-016-0447-8)
* Wilkinson et al. (2019) Wilkinson, M., Schreiber, U., Prochazka, I., _et al._: The next generation of satellite laser ranging systems. J Geod **93**, 2227-2247 (2019) [https://doi.org/10.1007/s00190-018-1196-1](https://doi.org/10.1007/s00190-018-1196-1)
* EDC (2023) EDC: Website. [https://edc.dgfi.tum.de/en/](https://edc.dgfi.tum.de/en/)
* Nicolas et al. (2001) Nicolas, J., Pierron, F., Samain, E., _et al._: Centimeter Accuracy for the French Transportable Laser Ranging Station (FTLRS) through Sub-System Controls. Surveys in Geophysics **22**, 449-464 (2001) [https://doi.org/10.1023/A:1015612032752](https://doi.org/10.1023/A:1015612032752)
* Hampf et al. (2019) Hampf, D., Schafer, E., Sproll, F., _et al._: Satellite laser ranging at 100 kHz pulse repetition rate. CEAS Space J **11**, 363-370 (2019) [https://doi.org/10.1007/s12567-019-00247-x](https://doi.org/10.1007/s12567-019-00247-x)
* Völker et al. (2013) Völker, U., Friedrich, F., Buske, I., et al.: Laser based observation of space debris: Taking benefits from the fundamental wave. Proc. 6th European Conference on Space Debris (2013)
* Courde et al. (2017) Courde, C., Torre, J.-M., Samain, E., et al.: Satellite and lunar laser ranging in infrared. Proc. SPIE 10229, Photon Counting Applications **102290K** (2017) [https://doi.org/10.1117/12.2270573](https://doi.org/10.1117/12.2270573)
* Xue et al. (2016) Xue, L., Li, Z., Zhang, L., et al.: Satellite laser ranging using superconducting nanowire single-photon detectors at 1064 nm wavelength. Opt Lett **41** (2016) [https://doi.org/10.1364/OL.41.003848](https://doi.org/10.1364/OL.41.003848)
* Eckl et al. (2017) Eckl, J.J., Schreiber, U., Schuler, T.: Satellite laser ranging in the near-infrared regime. Proc. SPIE 10229, Photon Counting Applications **102290J** (2017) [https://doi.org/10.1117/12.2270519](https://doi.org/10.1117/12.2270519)
* Degnan (1993) Degnan, J.J.: Millimeter accuracy satellite laser ranging: a review.
Contributions of space geodesy to geodynamics: technology **25**, 133-162 (1993)
* Meyer (2022) Meyer, T.: Analysis of the performance parameters of Satellite Laser Ranging (SLR) Systems based on the link budget under exemplary inclusion of the miniSLR System. Master thesis, University of Stuttgart (2022)
* Arnold (2003) Arnold, D.A.: Cross Section of ILRS Satellites. [https://ilrs.gsfc.nasa.gov/docs/CrossSectionReport.pdf](https://ilrs.gsfc.nasa.gov/docs/CrossSectionReport.pdf)
* Sinclair (2012) Sinclair, A.T.: ILRS Normal Point Algorithm. [https://ilrs.gsfc.nasa.gov/data_and_products/data/npt/npt_algorithm.html](https://ilrs.gsfc.nasa.gov/data_and_products/data/npt/npt_algorithm.html)
* Wallace (2016) Wallace, P.T.: TPOINT - a Telescope Pointing Analysis System (2016)
* Otsubo et al. (2019) Otsubo, T., Müller, H., Pavlis, E.C., _et al._: Rapid response quality control service for the laser ranging tracking network. J Geod **93**, 2335-2344 (2019) [https://doi.org/10.1007/s00190-018-1197-0](https://doi.org/10.1007/s00190-018-1197-0)
* Bartels et al. (2022) Bartels, N., Allenspacher, P., Hampf, D., et al.: Space object identification via polarimetric satellite laser ranging. Commun Eng **1**, 5 (2022) [https://doi.org/10.1038/s44172-022-00003-w](https://doi.org/10.1038/s44172-022-00003-w)
Satellite Laser Ranging (SLR) is an established technique providing very accurate position measurements of satellites in Earth orbit. However, despite decades of development, it remains a complex and expensive technology, which impedes its expansion to new applications and users. The miniSLR implements a complete SLR system within a small, transportable enclosure. Through this design, the costs of ownership can be reduced significantly, and the process of establishing a new SLR site is greatly simplified. A number of novel technical solutions have been implemented to achieve a good laser ranging performance despite the small size and simplified design. Data from the initial six months of test operation has been used to generate a first estimate of the system performance. The data includes measurements to many of the important SLR satellites, such as Lageos and Etalon, and most of the geodetic and Earth observation missions in LEO. It is shown that the miniSLR achieves sub-centimetre accuracy, comparable with conventional SLR systems. The miniSLR is an engineering station in the International Laser Ranging Service (ILRS), and supplies data to the community. Continuous efforts are undertaken to further improve the system operation and stability.

**Keywords:** satellite laser ranging, satellite geodesy, ground stations, new space
Preprint INRNE-TH-93/4 (May 1993), e-print quant-ph/0001028

# Generalized Intelligent States and \(SU(1,1)\) and \(SU(2)\) Squeezing†

Footnote †: This preprint was sent to Phys. Rev. Lett. in May 1993 (LF5064/03 Jun 93) and declined from PRL in August 1993; the original submission contained the mis-spellings Heizenberg and studed, and the false degeneracy of the eigenvalue of \(L(\lambda)\). An extended version of it appeared later in J. Math. Phys. **35**, 2297 (1994). Meanwhile similar (but not all) results were published by other authors in PRL and Phys. Rev. A.

D.A. Trifonov

Institute for Nuclear Research and Nuclear Energy

Blv. Tzarigradsko chaussee 72, 1784 Sofia, Bulgaria

## 1 Introduction

The squeezed states of the electromagnetic field, in which the fluctuations in one of the quadrature components \(Q\) and \(P\) of the photon annihilation operator \(a=(Q+iP)/\sqrt{2}\) are smaller than those in the ground state \(|0\rangle\), have attracted due attention in the last decade (see for example the review papers[1, 2] and references therein). In recent years interest has been devoted to squeezed states for other observables[3]-[11]. One looks for non-Gaussian states which exhibit \(Q\)-\(P\) squeezing[3]-[7] and/or for states in which the fluctuations of other physical observables are squeezed[7]-[11]. The aim of the present paper is to construct \(SU(1,1)\) and \(SU(2)\) squeezed intelligent states and to consider some general properties of squeezing for an arbitrary pair of quantum observables \(A\) and \(B\) in states which minimize the Robertson-Schrödinger uncertainty relation (R-S UR)[12]. We call such states generalized intelligent states (GIS), or squeezed intelligent states when the accent is on their squeezing properties. The \(Q\)-\(P\) GIS are well studied and known as squeezed states, two photon coherent states (CS) (see references in[1, 2]), correlated states[13] or Schrödinger minimum uncertainty states[14]. The term intelligent states (IS)[11] refers to states that provide the equality in the Heisenberg UR for \(A\) and \(B\). The \(Q\)-\(P\) IS are also known as Heisenberg minimum uncertainty states. The spin IS were introduced and studied in[11].

## 2 Generalized intelligent states

For any two quantum observables \(A\) and \(B\) the corresponding second moments in a given state obey the R-S UR[12, 13],

\[\sigma_{A}^{2}\,\sigma_{B}^{2}\geq\frac{1}{4}(\langle C\rangle^{2}+4\sigma_{AB}^{2}),\quad C\equiv-i[A,B], \tag{1}\]

where \(\sigma_{A},\sigma_{B}\) and \(\sigma_{AB}\) are the dispersions and the covariance of \(A\) and \(B\),

\[\sigma_{A}^{2}\,=\,\langle A^{2}\rangle-\langle A\rangle^{2},\qquad\sigma_{AB}=\frac{1}{2}\langle AB+BA\rangle-\langle A\rangle\langle B\rangle. \tag{2}\]

The states that provide the equality in the R-S UR (1) will be called here generalized intelligent states (GIS). When the covariance \(\sigma_{AB}=0\), the R-S UR coincides with the Heisenberg one. In paper[13] it was proved that if a pure state \(|\psi\rangle\) with nonvanishing dispersion of the operator \(A\) minimizes the R-S UR, then it is an eigenstate of the operator \(\lambda A+iB\), where \(\lambda\) is a complex number related to \(\langle C\rangle\) and to \(\sigma_{i}(\psi),\ i=A,B,AB\). Here we prove that this is a sufficient condition for any state \(|\psi\rangle\).
**Proposition 1**: _A state \(|\psi\rangle\) minimizes the R-S UR (1) if it is an eigenstate of the operator \(L(\lambda)=\lambda A+iB\),_

\[L(\lambda)|z,\lambda\rangle=z|z,\lambda\rangle, \tag{3}\]

_where the eigenvalue \(z\) is a complex number._

_Proof._ Let us first restrict the parameter \(\lambda\) in the eigenvalue eqn. (3) to \({\rm Re}\,\lambda\neq 0\). Then we express \(A\) and \(B\) in terms of \(L(\lambda)\) and \(L^{\dagger}(\lambda)\) and obtain

\[\sigma_{A}^{2}(z,\lambda)=\frac{\langle C\rangle}{2{\rm Re}\,\lambda}\,,\qquad\sigma_{B}^{2}(z,\lambda)=|\lambda|^{2}\frac{\langle C\rangle}{2{\rm Re}\,\lambda}\,,\qquad\sigma_{AB}(z,\lambda)=-\langle C\rangle\frac{{\rm Im}\,\lambda}{2{\rm Re}\,\lambda}\,, \tag{4}\]

where \(\langle C\rangle=\langle\lambda,z|C|z,\lambda\rangle\). The second moments (4) obey the equality in the R-S UR (1). Let now the eigenvalue equation (3) hold for \({\rm Re}\,\lambda=0\). This means that the state \(|z,\lambda\rangle\) is an eigenstate of the Hermitian operator \(rA+B\), where \(r={\rm Im}\,\lambda\). We now consider the mean value of the non-negative operator \(F^{\dagger}(r)F(r)\), where \(F(r)=rA+B-(r\langle A\rangle+\langle B\rangle)\) and \(r\) is any real number. From this we get the uncertainty relation

\[\sigma_{A}^{2}\,\sigma_{B}^{2}\geq\sigma_{AB}^{2}\,, \tag{5}\]

the equality holding in the eigenstates of \(F(r)\) only. One can consider the equality in (5) as the desired equality in the Robertson-Schrödinger UR if in these states the mean value of the operator \(C\) vanishes. And this is the case. Indeed, consider in \(|z,ir\rangle\) the mean values of the operators \(A(rA+B)\) and \((rA+B)A\). One easily finds that the two mean values coincide, from which we obtain \(\langle ir,z|C|z,ir\rangle=0\). Thus all eigenstates \(|z,\lambda\rangle\) are GIS.

One can prove that when the operator \(A\) has no discrete spectrum, then \(\sigma_{A}(\psi)\neq 0\) for any \(|\psi\rangle\); thereby the condition (3) is also necessary, and all \(A\)-\(B\) GIS (for any \(B\)) are of the form \(|z,\lambda\rangle\). Such are, for example, the cases of the canonical \(Q\)-\(P\) GIS[14] and the \(SU(1,1)\) GIS considered below. The above result stems from the following property of the dispersion of quantum observables:

\[\sigma_{A}(\psi)=0\Longleftrightarrow A|\psi\rangle=a|\psi\rangle. \tag{6}\]

As a consequence of the second part of the proof of Proposition 1 we have the following

**Proposition 2**: _If the commutator \(C=-i[A,B]\) is a positive operator, then the operator \(rA+B\) with real \(r\) has no eigenstates in the Hilbert space._

In terms of the GIS \(|z,\lambda\rangle\), Proposition 2 gives the restriction \({\rm Re}\,\lambda\neq 0\) in cases of positive \(C\). Before going to examples, let us point out that the \(A\)-\(B\) IS \(|z,\lambda=1\rangle\equiv|z\rangle\) are uncorrelated and have equal variances,

\[L|z\rangle=z|z\rangle,\qquad L=L(\lambda=1)=A+iB, \tag{7}\]
\[\sigma_{A}^{2}(z)=\frac{1}{2}\langle z|C|z\rangle=\sigma_{B}^{2}(z). \tag{8}\]

We shall call such states equal-variance IS or non-squeezed IS, adopting the Eberly and Wodkiewicz[7] definition of \(A\)-\(B\) squeezed states.
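That the moments (4) indeed saturate the R-S UR (1) for every \(\lambda\) with \({\rm Re}\,\lambda\neq 0\) reduces to a simple algebraic identity; the following symbolic check confirms it (a minimal sketch using Python's sympy, with \(\langle C\rangle\) kept as a free positive symbol):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)   # lambda = x + i*y, with x = Re(lambda) != 0
C = sp.symbols('C', positive=True)    # <C> in the state |z, lambda>
lam = x + sp.I * y

var_A = C / (2 * x)                   # second moments, eq. (4)
var_B = sp.Abs(lam)**2 * C / (2 * x)
cov_AB = -C * y / (2 * x)

# Equality case of the Robertson-Schrodinger relation (1):
lhs = var_A * var_B
rhs = sp.Rational(1, 4) * (C**2 + 4 * cov_AB**2)
print(sp.simplify(lhs - rhs))         # prints 0
```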
It is convenient to describe this squeezing by means of the dimensionless parameter \(q_{A}\)[8],

\[q_{A}=\frac{\langle C\rangle/2\,-\,\sigma_{A}^{2}}{\langle C\rangle/2}, \tag{9}\]

in terms of which 100% squeezing corresponds to \(q_{A}=1\). In the equal-variance IS \(|z\rangle\), \(q_{A}=0=q_{B}\). Let us now consider the cases when the commutator \(C=-i[A,B]\) is a positive operator: \(\langle\psi|C|\psi\rangle>0\). In such cases \({\rm Re}\,\lambda\neq 0\) and we can safely divide by \(\langle\psi|C|\psi\rangle\). Then from eqns (4) we get the quite general result for squeezing in the GIS \(|z,\lambda\rangle\) with positive \(C\),

\[q_{A}(z,\lambda)=1-\frac{1}{{\rm Re}\,\lambda},\qquad q_{B}(z,\lambda)=1-\frac{|\lambda|^{2}}{{\rm Re}\,\lambda}. \tag{10}\]

(Note that these expressions follow directly from (4) and (9) and correctly give \(q_{A}=0=q_{B}\) at \(\lambda=1\).) We see that the squeezing parameter \(q\) depends on \(\lambda\) only, and 100% squeezing of \(A\) is obtained at \({\rm Re}\,\lambda\to\infty\) (and of \(B\) at \(\lambda=0\)). In many cases the IS \(|z\rangle\) have been constructed. Besides the canonical \(Q\)-\(P\) case, we point out also the cases of the lowering and raising operators of some semisimple Lie groups (the \(SU(2)\) and the \(SU(1,1)\)[15], for example) and of the quantum group \(SU(1,1)_{q}\), constructed recently[10]. The GIS \(|z,\lambda\rangle\) are eigenstates of the linearly transformed operator

\[L\longrightarrow L(\lambda)=uL+vL^{\dagger}, \tag{11}\]

where \(u=(\lambda+1)/2\), \(v=(\lambda-1)/2\), \(L^{\dagger}=A-iB\). If this is a similarity transformation, then the GIS can be obtained by acting on \(|z\rangle\) with the transforming operator \(S(\lambda)\) (the generalized squeezing operator), as was done by Stoler (see the references in[1, 2]) in the canonical case. In the examples below we construct the GIS by solving the eigenvalue equations of \(L(\lambda)\).

## 3 \(SU(1,1)\) squeezed intelligent states

In this section we construct and discuss the \(K_{1}\)-\(K_{2}\) GIS, where \(K_{1}\) and \(K_{2}\) are the generators of the discrete series \(D^{+}(k)\) of representations of \(SU(1,1)\) with Casimir operator \(C_{2}:=k(k-1)\). From the commutation relation \([K_{1},K_{2}]=-iK_{3}\) we see that one can apply the corresponding formulas of the previous section with \(A=K_{1}\), \(B=-K_{2}\) and \(C=K_{3}\). The operator \(K_{3}\) is positive, with eigenvalues \(k+m\), where \(m=0,1,2,\ldots\). Then, as a consequence of Proposition 2, the GIS \(|z,\lambda;k\rangle\) exist only if \({\rm Re}\,\lambda\neq 0\), and one can safely use formulas (4) for the second moments of \(K_{1,2}\) in the \(SU(1,1)\) GIS \(|z,\lambda;k\rangle\). Since the operator \(K_{1}\) has no discrete spectrum, the condition (3) is also necessary for GIS. The \(SU(1,1)\) equal-variance IS \(|z;k\rangle\) (the eigenstates of \(K_{1}-iK_{2}\equiv K_{-}\)) have been constructed and studied by Barut and Girardello as 'new "coherent" states associated with noncompact groups'[15]. These states form an overcomplete family of states and provide a representation of any state \(|\psi\rangle\) in terms of an entire analytic function \(\langle\psi|z;k\rangle\) of \(z\) of order 1 and type 1 (exponential type).
In the Hilbert space of such entire analytic functions, the generators of \(SU(1,1)\) act as the following differential operators[15] (we shall call this the BG representation):

\[K_{3}=k+z\frac{d}{dz}\,,\quad K_{+}=K_{-}^{\dagger}=z\,,\quad K_{-}=2k\frac{d}{dz}+z\frac{d^{2}}{dz^{2}}\,. \tag{12}\]

We use the BG representation to construct the \(SU(1,1)\) GIS \(|z^{\prime},\lambda;k\rangle\) (we denote for a while the eigenvalue by \(z^{\prime}\)). The eigenvalue equation (3) now reads

\[\left[u\left(2k\frac{d}{dz}+z\frac{d^{2}}{dz^{2}}\right)+vz\right]\Phi_{z^{\prime}}(z)=z^{\prime}\Phi_{z^{\prime}}(z)\,, \tag{13}\]

where the parameters \(u,v\) have been defined in formula (11). By means of simple substitutions the above equation is reduced to the Kummer equation for the confluent hypergeometric function \({}_{1}F_{1}(a,b;z)\)[16], so that we have the following solution of eqn. (13):

\[\Phi_{z^{\prime}}(z)=\exp\left(cz\right){}_{1}F_{1}(a,b;-2cz)\,, \tag{14}\]
\[a=k-\frac{z^{\prime}}{2uc}\,,\quad b=2k;\quad c^{2}=-\frac{v}{u}\,. \tag{15}\]

This solution obeys the requirements of the BG representation iff

\[|c|=\sqrt{|v/u|}<1\Longleftrightarrow{\rm Re}\,\lambda>0\,, \tag{16}\]

which is exactly the restriction on \(\lambda\) imposed by the positivity of the commutator \(C\equiv K_{3}\), according to Proposition 2. No other constraints on \(z^{\prime}\) and \(\lambda\) are needed. Thus we obtain the \(SU(1,1)\) GIS \(|z^{\prime},\lambda;k\rangle\) in the BG representation in the form

\[\langle k;\lambda,z^{\prime}|z;k\rangle=\exp\left(c^{*}z\right){}_{1}F_{1}(a^{*},b;-2c^{*}z)\,, \tag{17}\]

where the parameters \(a,b\) and \(c\) are given by formulas (15). Using the power series of \({}_{1}F_{1}(a,b;z)\)[16], we find that our solution (17) at \(\lambda=1\) (\(u=1\), \(v=0\)) coincides with the solution of Barut and Girardello[15],

\[\langle k;\lambda=1,z^{\prime}|z;k\rangle={}_{0}F_{1}(2k;zz^{\prime*})\,=\,\langle k;z^{\prime}|z;k\rangle. \tag{18}\]

We note the twofold degeneracy of the eigenvalues of the operator \(L(\lambda\neq 1)\), as seen from eqn. (15). We denote the two solutions as \(\langle\pm;k;\lambda,z^{\prime}|z;k\rangle\). The degeneracy is removed at \(\lambda=1\), as is known from the BG solution. Thus this point is a branching point for the operator \(L(\lambda)\). It is worth noting that the degeneracy is also removed by the following constraint on the two complex parameters \(z^{\prime}\) and \(\lambda\) in eqn. (17):

\[z^{\prime}\,=\,2k\sqrt{-uv}\,=\,k\sqrt{1-\lambda^{2}}\,. \tag{19}\]

Using the properties of the function \({}_{1}F_{1}(a,b;z)\)[16], we get from (17) in both (\(\pm\)) cases the same expression \(\exp\left(z\sqrt{-v^{*}/u^{*}}\right)\), which can be seen to be nothing but the BG representation of the Perelomov \(SU(1,1)\) CS \(|\zeta;k\rangle\)[17] with \(\zeta=\sqrt{-v/u}\),

\[|\zeta;k\rangle=(1-|\zeta|^{2})^{k}\,\exp\left(\zeta K_{+}\right)|k;k\rangle\,. \tag{20}\]

If we impose \(z^{\prime}=-2k\sqrt{-uv}\), we get the CS \(|-\zeta;k\rangle\). One can directly check (using the \(SU(1,1)\) commutation relations only) that the CS (20) are indeed eigenstates of \(L(\lambda)\), eqn. (11), with eigenvalue (19), provided \(\zeta^{2}=-v/u\).
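The same check can be done numerically in a truncated \(D^{+}(k)\) basis \(|k,m\rangle\), \(m=0,\ldots,M\); the sketch below (numpy/scipy, with arbitrarily chosen \(k\), \(\lambda\) and cutoff) builds \(L(\lambda)=uK_{-}+vK_{+}\) from the standard matrix elements and confirms that the Perelomov CS is an eigenstate:

```python
import numpy as np
from scipy.special import gammaln

def perelomov_check(k=0.5, lam=2.0 + 0.5j, cutoff=200):
    u, v = (lam + 1) / 2, (lam - 1) / 2
    zeta = np.sqrt(-v / u + 0j)                 # |zeta| < 1 since Re(lambda) > 0
    m = np.arange(cutoff + 1)
    Km = np.diag(np.sqrt(m[1:] * (2*k + m[1:] - 1)), 1)  # K_-|k,m> = sqrt(m(2k+m-1))|k,m-1>
    Kp = Km.T                                            # K_+|k,m> = sqrt((m+1)(2k+m))|k,m+1>
    L = u * Km + v * Kp                         # = lambda*K_1 - i*K_2, cf. eq. (11)
    # Coefficients of exp(zeta K_+)|k,k> in the number basis:
    logw = 0.5 * (gammaln(2*k + m) - gammaln(m + 1) - gammaln(2*k))
    c = zeta**m * np.exp(logw)
    c = c / np.linalg.norm(c)
    z = 2 * k * u * zeta                        # = 2k*sqrt(-uv), eq. (19), branch-consistent
    return np.linalg.norm(L @ c - z * c)

print(perelomov_check())                        # ~1e-13: an eigenstate, up to truncation
```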
We calculate explicitly the first and second moments of the generators \(K_{i}\) in the CS \(|\zeta;k\rangle\) (for \(\sigma_{K_{i}}\) see also[8]),

\[\sigma_{K_{1}K_{2}}=-2k\,\frac{{\rm Re}\,\zeta\,{\rm Im}\,\zeta}{(1-|\zeta|^{2})^{2}}\,,\qquad\sigma_{K_{1}}^{2}=\frac{k}{2}\,\frac{|1+\zeta^{2}|^{2}}{(1-|\zeta|^{2})^{2}},\qquad\sigma_{K_{2}}^{2}=\frac{k}{2}\,\frac{|1-\zeta^{2}|^{2}}{(1-|\zeta|^{2})^{2}}, \tag{21}\]

and verify that the equality in the R-S UR (1) is satisfied. Thus all the Perelomov \(SU(1,1)\) CS are GIS. They are represented by the points of the two-dimensional surface (19) in the four-dimensional space of points \((z,\lambda)\). The BG CS[15] form another subset of the \(SU(1,1)\) GIS, isomorphic to the plane \(\lambda=1\). We note that the above formulas for the first and second moments of \(K_{i}\) in the CS \(|\zeta;k\rangle\) hold also for the (non-square-integrable) Lipkin-Cohen representation with Bargmann index \(k=1/4\) (but not for \(k=3/4\)),

\[K_{1}\,=\,\frac{1}{4}\,(Q^{2}-P^{2}),\quad K_{2}\,=\,-\frac{1}{4}(QP+PQ),\quad K_{3}\,=\,\frac{1}{4}(Q^{2}+P^{2}). \tag{22}\]

Due to the expressions of \(K_{i}\) in terms of the canonical pair \(Q,P\), the CS \(|\zeta;k=1/2,1/4,3/4\rangle\) (\(|\zeta;k=1/4,3/4\rangle\) are eigenstates of the squared boson operator \(a^{2}\)) are of interest for \(Q\)-\(P\) squeezing[4, 14, 18]. One can also calculate the fluctuations of \(Q\) and \(P\)[18] and show that the CS \(|\zeta;k=1/4\rangle\) exhibit about 56% ordinary squeezing (Buzek[4]). The squeezing of \(K_{1,2}\) in the CS \(|\zeta;k\rangle\) has been studied in[8]: the 100% squeezing (in the sense of the parameter \(q\), eqn. (9)) of \(K_{1}\) is obtained at \(\zeta=i\). We note however that

\[\sigma_{i}^{2}(\zeta;k)\,\geq\,\frac{k}{2}\,=\,\sigma_{i}^{2}(0;k),\quad i=K_{1},K_{2}\,,\]

i.e. there is no squeezing of \(\sigma_{i}\) in \(|\zeta;k\rangle\) in comparison with the ground state \(|0;k\rangle\).

In conclusion to this section we note that for the \(SU(1,1)\) GIS the squeezing operator \(S(\lambda)\) exists and can be defined by means of the relation \(|z,\lambda;k\rangle=S(\lambda)|z;k\rangle\), since the spectra of \(L\) and \(L(\lambda)\) coincide. It belongs again to \(SU(1,1)\) (but not to the series \(D^{+}(k)\), since one can show that it is not unitary), and its matrix elements \(\langle k;z|S|z;k\rangle\) are explicitly given by the functions (17) with \(z^{\prime}=z\). These diagonal matrix elements determine \(S\) uniquely, due to the analyticity property of the BG representation[15]. We recall that the same property of the diagonal matrix elements holds in the canonical (Glauber) CS representation (see for example[2] and references therein).

## 4 \(SU(2)\) squeezed intelligent states

Let now \(A\), \(B\) and \(C\) be the generators \(J_{1}\), \(-J_{2}\) and \(-J_{3}\) of the \(SU(2)\) group, i.e. the spin operators of spin \(j=1/2,1,\ldots\). In this example the commutator \(C=-J_{3}\) is not positive (the limit \({\rm Re}\,\lambda=0\) can be taken) and the operator \(A=J_{1}\) has a discrete spectrum (some of its eigenstates are examples of exceptional GIS which are not eigenstates of \(L(\lambda)\)).
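For finite \(j\) everything is a small matrix computation, so the defining property can be verified directly; the sketch below (numpy, arbitrary \(j\) and \(\lambda\), where \({\rm Re}\,\lambda\) may be anything since \(C=-J_{3}\) is not positive) checks that every eigenstate of \(L(\lambda)=\lambda J_{1}-iJ_{2}\) saturates (1):

```python
import numpy as np

def su2_gis_check(j=1, lam=0.7 - 0.4j):
    m = np.arange(-j, j + 1)
    cp = np.sqrt(j*(j + 1) - m[:-1]*(m[:-1] + 1))
    Jp = np.diag(cp, -1)                    # J_+|j,m> = cp(m) |j,m+1>
    J1 = (Jp + Jp.T) / 2
    J2 = (Jp - Jp.T) / 2j
    J3 = np.diag(m.astype(float))
    A, B, C = J1, -J2, -J3                  # the choice made in the text

    ev = lambda op, psi: np.vdot(psi, op @ psi).real
    _, vecs = np.linalg.eig(lam * A + 1j * B)   # L(lam) = lam*J1 - i*J2
    for psi in vecs.T:
        psi = psi / np.linalg.norm(psi)
        sA2 = ev(A @ A, psi) - ev(A, psi)**2
        sB2 = ev(B @ B, psi) - ev(B, psi)**2
        cov = ev(A @ B + B @ A, psi) / 2 - ev(A, psi) * ev(B, psi)
        print(sA2 * sB2 - (ev(C, psi)**2 + 4 * cov**2) / 4)   # each ~0

su2_gis_check()
```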
In paper[11] there were constructed the eigenstates (in their notation) \(|w_{N}(\tau)\rangle\) of the operator \(J(\alpha)=J_{1}-i\alpha J_{2}\), where \(N=0,1,2,\ldots,2j\), \(\tau^{2}=(1-\alpha)/(1+\alpha)\), and \(\alpha\) is an arbitrary complex number. These states are also eigenstates of \(L(\lambda)=\lambda J_{1}-iJ_{2}\), and thereby they all are \(J_{1}\)-\(J_{2}\) GIS, minimizing the R-S UR (1). They can be represented in the general form \(|z_{N},\lambda;j\rangle\) with the eigenvalues \(z_{N}=(j-N)\sqrt{\lambda^{2}-1}\). Among them (for \(N=0\) and \(N=2j\)) are the Bloch (the spin, or the \(SU(2)\)) CS \(|\tau;-j\rangle\) and \(|-\tau;-j\rangle\) (\(\tau\) is any complex number),

\[|\tau;-j\rangle=(1+|\tau|^{2})^{-j}\exp{(\tau J_{+})}|-j\rangle. \tag{23}\]

The mean values of \(J_{i},\ i=1,2,3\), and \(J_{i}^{2}\) (and the dispersions \(\sigma_{J_{1}}\) and \(\sigma_{J_{2}}\)) in the Bloch CS are known[11, 19]. Calculating also the covariance,

\[\sigma_{J_{1},J_{2}}(\tau)=2j\frac{{\rm Re}\,\tau\,{\rm Im}\,\tau}{(1+|\tau|^{2})^{2}}, \tag{24}\]

we can directly check that in the CS \(|\tau\rangle\) the equality in the R-S UR (1) holds for the spin operators \(J_{1,2}\). Thus the Bloch CS are a subset of the \(SU(2)\) GIS.

Let us briefly discuss the properties of the \(SU(2)\) GIS. First of all, for a given parameter \(\lambda\) there are \(2j+1\) independent GIS \(|z_{N},\lambda;j\rangle\). There is only one equal-variance IS, namely \(|-j\rangle\), the point \(\lambda=1\) being again the branching point of \(L(\lambda)\). From this fact it follows that a squeezing operator does not exist. Since the commutator \(C=i[J_{1},J_{2}]=-J_{3}\), the limit \({\rm Re}\,\lambda=0\) is allowed in the GIS, and in the fluctuation formulas (4) as well, since in this limit \(\langle C\rangle=-\langle J_{3}\rangle=0\). The operator \(A=J_{1}\) has a discrete spectrum, therefore \(\sigma_{A}\) may vanish. From the explicit formula

\[\sigma_{J_{1}}^{2}(\tau)=\frac{j}{2}\frac{|1-\tau^{2}|^{2}}{(1+|\tau|^{2})^{2}} \tag{25}\]

we see that this fluctuation vanishes at \(\tau^{2}=1\). Therefore, by virtue of the property (6), the Bloch CS \(|\tau=\pm 1;-j\rangle\) are eigenstates of \(J_{1}\), which can also be checked directly, the eigenvalues being \(\pm j\). The other eigenstates of \(J_{1}\) are exactly those exceptional states which minimize the R-S UR (1) but are not of the form \(|z,\lambda\rangle\) (i.e. do not obey eqn. (3)). The final note we make about the \(SU(2)\) GIS is that, except for the eigenvalue \(z_{N}=0\) (when \(N=j\)), all the others are not degenerate (unlike the \(SU(1,1)\) case).

## 5 Concluding remarks

We have presented a method for the construction of squeezed intelligent states (called here generalized intelligent states (GIS)) for any two quantum observables \(A\) and \(B\), in which 100% squeezing (after Eberly) can be obtained. The GIS minimize the Robertson-Schrödinger uncertainty relation and can be considered as a generalization of the canonical \(Q\)-\(P\) squeezed states[13]. When the operators \(A\) and/or \(B\) are expressed in terms of the canonical pair \(Q,P\), one can look in the \(A\)-\(B\) GIS for the squeezing of \(Q\) and/or \(P\) as well. Such are, for example, the cases of the \(SU(1,1)\) GIS for the representations with Bargmann indices \(k=1/4,1/2,3/4\).
The \\(SU(1,1)\\) GIS form a larger set of states which contains as two different subsets the Perelomov CS and the Barut and Girrardello CS. The method is based on the minimization of the Robertson-Schrodinger UR (1) for which the eigenvalue equation (3) for the operator \\(L(\\lambda)=\\lambda A+iB\\) is a sufficient condition. In case of \\(A\\) with continuous spectrum this is also a necessary conditon independently on \\(B\\). In view of this the method provides the possibility (when one is interested in squeezing of the fluctuations of \\(A\\)) to look for the best squeezing partner of \\(A\\). Thus for example if \\(A=P\\) then one can show that the eigenstates of \\(L(\\lambda)\\) exist for a series \\(B=Q^{n}\\), \\(n=1,5,9,\\ldots\\),. When the \\(A\\)-\\(B\\) GIS can be obtained from the equal variances IS \\(|z\\rangle\\) by means of the invertable squeezing operator \\(S(\\lambda)\\) the latter belongs to \\(SU(1,1)\\) as it can be derived from (11). This fact shows that \\(SU(1,1)\\) plays important role in a wide class of squeezing phenomina (not only in \\(Q\\)-\\(P\\) case). ### Acknowledgments This work is partialy supported by Bulgarian Science Foundation research grant # F-116. ## References * [1] R. Loudon and P. Knight, J. Mod. Opt. **34**, 709 (1987). * [2] W. Zhang, D. Feng and R. Gilmore. Rev. Mod. Phys. **62**, 867 (1990). * [3] G. D'Ariano, M. Rasetti and M. Vadacchino, Phys. Rev. D **32**, 1034 (1985). * [4] V. Buzek. J. Mod. Opt. **37**, 159 (1990); J. Sun et all. Phys. Rev. A**44**, 3369 (1991); C. Gerry and E. Hach III, Phys. Lett. A**174**, 185 (1993). * [5] J. Katriel et all, Phys. Rev. D**34**, 2332 (1986). * [6] P. Kral, J. Mod. Opt. **37**, 889 (1990). * [7] K. Wodkiewicz and J. Eberly, J. Opt. Soc. Am. B**2**, 458 (1985); K. Wodkiewicz, J. Mod. Opt. **34**, 941 (1987). * [8] V. Buzek, J. Mod. Opt. **37**, 303 (1990). * [9] J. Vaccaro and D. Pegg, J. Mod. Opt. **37**, 17 (1990). * [10] L. Kuang and F. Wang, Phys. Lett. A**173**, 221 (1993). * [11] C. Aragone, E Chalband and S. Salamo, J. Math. Phys. **17**, 1963 (1976). * [12] H. Robertson, Phys. Rev. **35**, 667 (1930); S. Schrodinger, Ber. Kil. Acad. Wiss., s. 296, Berlin (1930). * [13] V. Dodonov, E. Kurmyshev and V. Man'ko, Phys. Lett. A**76**, 150 (1980). * [14] D. A. Trifonov, J. Math. Phys. **34**, 100 (1993). * [15] A. O. Barut and L. Girardello, Commun. Math. Phys. **21**, 41 (1971). * [16]_Handbook on Mathematical Functions_, edited by M. Abramowitz and I. A. Stegun (National Bureau of Standarts, 1964; Russian translation, Nauka, 1979). * [17] A. M Perelomov, Commun. Math. Phys. **26**, 222 (1972). * [18] B. A. Nikolov and D. A. Trifonov, Commun. JINR. E2-81-798 (Dubna, 1981). * [19] E. H. Lieb, Commun. Math. Phys. **31**, 327 (1973).
A sufficient condition for a state \(|\psi\rangle\) to minimize the Robertson-Schrodinger uncertainty relation for two observables \(A\) and \(B\) is obtained, which for \(A\) with no discrete spectrum is also a necessary one. Such states, called generalized intelligent states (GIS), exhibit arbitrarily strong squeezing (after Eberly) of \(A\) and \(B\). Systems of GIS for the \(SU(1,1)\) and \(SU(2)\) groups are constructed and discussed. It is shown that the \(SU(1,1)\) GIS contain all the Perelomov coherent states (CS) and the Barut and Girardello CS, while the Bloch CS are a subset of the \(SU(2)\) GIS. PACS numbers: 03.65.Ca; 03.65.Fd; 42.50.Dv.
# The dynamics of iterated transportation simulations

Kai Nagel,\({}^{a,b,1}\) Marcus Rickert,\({}^{a,b,2}\) Patrice M. Simon,\({}^{a,b,3}\) and Martin Pieck\({}^{a,4}\)

\({}^{a}\) Los Alamos National Laboratory, Los Alamos NM, U.S.A. \({}^{b}\) Santa Fe Institute, Santa Fe NM, U.S.A.

Footnote 1: Corresponding author; current affiliation: Swiss Federal Institute of Technology (ETH) Zurich, Department of Computer Science, ETH Zentrum, CH-8092 Zurich, Switzerland; Email: [email protected]; Fax +1-810-815-1674

Footnote 2: Current affiliation: sd&m AG, Troisdorf, Germany; Email: [email protected]

Footnote 3: Current affiliation: Carlson Wagonlit IT, Minneapolis MN, U.S.A.; Email: [email protected]

Footnote 4: Email: [email protected]

This version: October 23, 2018

**Keywords:** dynamic traffic assignment (DTA); traffic micro-simulation; TRANSIMS; large-scale simulations; urban planning

## 1 Introduction

Transportation-related decisions of people often depend on what everybody else is doing. For example, decisions about mode choice, route choice, activity scheduling, etc., can depend on congestion, caused by the aggregated behavior of others. From a conceptual viewpoint, this consistency problem causes a deadlock, since nobody can start planning because she does not know what everybody else is doing. In fact, this problem is well known not only in transportation, but in socio-economic systems in general. The traditional answer is to assume that everybody has complete information and is fully "rational", i.e. that, for some given utility function, each individual agent picks the solution that is best for herself. This means that each individual agent's decision-making process is now globally known, and so each individual agent can (in principle) compute everybody else's decision-making process conditioned on her own, and so she can arrive at a solution. As a side effect, since everybody arrives at the same solution for everybody, one can replace the individual decision-making process by a global computation. Since this is a fictional process, one is traditionally not interested in the computation process itself, but just in the end result, which is a Nash equilibrium: Nobody can be better off by unilaterally changing strategies. Since only the end result is of interest, _any_ algorithm finding that end result (assuming it is unique) is equally valid. In transportation, a typical example is the user equilibrium solution of the static assignment problem (e.g. Sheffi (1985)): No driver (or traffic stream) can be better off by switching routes. Thus, we assume that being at the equilibrium point is behaviorally justified, and everything we do in order to get there is just a mathematical or computational trick.

It is instructive to look at biological ecosystems for a minute. Here also, the behavior of everybody depends on everybody else. For example, an animal should not go to an area where predators catch it. Yet, since we assume that animals are less capable than humans of organized planning and reasoning, nobody ever assumed that animals would pre-compute an optimal solution based on some utility function. Instead, one formulates the problem as one of _co-evolution_, where everybody's (mostly instinctive) behavior evolves in reaction to what is going on in the environment, constrained by the rules of genetics. It is indeed this "eco-system" (= agent-based) approach that more and more groups are also taking in the simulation of socio-economic problems.
The advantage is that one does not have to make assumptions about properties of the system that are necessary in order to make the mathematics work. For example, one can just define rules on how agents decide on switching routes, both over night and on-line, and let the simulation run. The disadvantage is that currently much less is known about the dynamical properties of such systems. The general question is how valid such approaches are for real-world problems. This includes the validity of the dynamics, the uniqueness of the solution, and the robustness of the solution towards changes in implementation. Non-uniqueness of solutions would be annoying, although it may well be possible that this is a property of the real-world system. Robustness, i.e. that similar simulation methods yield similar results, is something we need to hope for because it will be hard to use such simulations in practice without it.

The work behind this paper is agent-based, since it simulates all individual entities of the traffic system, such as travelers, vehicles, signals, etc., as separate objects, all following their own rules and rules of interaction. For example, there is a plan for each individual traveler instead of origin-destination streams. The simulations use, however, also concepts from the traditional equilibrium approach, notably the idea that no traveler should be able to (significantly) improve by switching routes. It is in general not necessary to do this with a micro-simulation approach; we felt however that it would be better to start out this way before moving on to more uncertain terrain, such as truly behaviorally based decision rules.

This paper will, after a section about the problem formulation, first review static assignment (Sec. 3) and some theoretical results about simulation-based assignment (Sec. 4). In this, we will argue why we consider computational work a necessary complement to analytical progress. We will then proceed (Sec. 5) with a description of the real-world scenario within which our computational studies were undertaken; we will also give a short description of the software modules that we used. The following sections (6-8) then describe results, in particular about uniqueness, variability, robustness and validation, and about alternative comparison measures. The paper is concluded by a summary.

## 2 Problem formulation: Dynamic traffic assignment

The problem treated in this paper is commonly referred to as dynamic traffic assignment, or DTA. In general, one is given information about the traffic network, plus a time-dependent origin-destination (OD) matrix which represents demand. The problem is to assign routes to each OD stream such that the result is "realistic", which is often assumed to be the same as "in equilibrium". The main difference between most other simulation-based work (e.g. DYNAMIT (1999); Mahmassani et al. (1995)) and ours is that we are interested in an extremely disaggregated version of the problem: The transportation network comes with information such as number of lanes, speed limits, turn pockets, and signal phasing plans. And the demand is given in terms of individual trip plans, with a starting time, a starting location on the network, and a destination. Thus, writing this as a time-dependent OD-matrix is not really useful: First, since each link of the network is a potential origin or destination, one can easily obtain a matrix with 200 000 \(\times\) 200 000 entries (TRANSIMS Portland case-study, in preparation).
Second, our starting times come with second-by-second resolution. Translating this into second-by-second OD matrices would result in matrices which are mostly empty, while aggregating it into longer time intervals means giving up information.

## 3 Static equilibrium assignment

This section contains a very short review of static equilibrium assignment, which is the traditional approach to our problem. The purpose of the section is not to discuss the newest developments in the field (for a relatively recent review see, e.g., Patriksson (1994)), but to lay the groundwork to point out certain similarities between traditional equilibrium assignment and current implementations of simulation-based assignment.

### Deterministic User Equilibrium (UE) assignment

The traditional solution to the problem of assigning traffic demand to routes in an urban planning scenario is static deterministic equilibrium assignment. For our case, this would be equivalent to a steady-state _rate_ of travelers for each OD pair - anything in our plan-set that does not correspond to a steady-state rate cannot be represented by static assignment. Equilibrium assignment problems are usually posed in a way such that they have a unique solution (in terms of the link flows), and algorithms are known that come arbitrarily close to that solution. One important assumption is that the travel time on a link is a monotonically increasing function of the link flow - it is this assumption which is violated in practice since for all flow levels below capacity there are two corresponding travel time values. One iterative algorithm that comes arbitrarily close to the solution is the Frank-Wolfe algorithm. A possible interpretation of the Frank-Wolfe algorithm is as follows:

1. Use the current set of link travel times and compute fastest paths for each OD stream (also called all-or-nothing assignment). If this is the first iteration, use free speed travel times.
2. Find a certain optimal convex combination between the set of path assignments that have been computed so far and this new set of assignments. In other words, for a certain fraction of the travelers, replace their paths by new ones.
3. Declare this combination the current set of paths. Calculate link travel times for it and start over.

This is repeated until some stopping criterion is met. This Frank-Wolfe algorithm is not the most efficient, but it is interesting because it resembles the iterated micro-simulation technique that we want to describe later in this text. For more information, see textbooks on the subject, e.g. Sheffi (1985).

### Stochastic User Equilibrium (SUE) assignment

For stochastic assignment, one assumes that the route choice behavior of individual travelers has a random component - for example, because the information is noisy, or because there is a part of the cost function that cannot be explained by travel time, or because people's perception is imprecise (Ben-Akiva and Lerman, 1985). The standard approach to such problems is Discrete Choice Modeling (Ben-Akiva and Lerman, 1985). The outcome of this theory is that, for a given OD pair \(rs\), each alternative route \(k\) is chosen with probability \(P_{k}\). Faster routes are still preferred over slower routes, but a certain fraction of the OD stream chooses the slower routes. Note that at this point the solution of the problem has been made deterministic, that is, all noise is moved into the distribution of the routes.
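The iterative skeleton shared by these assignment schemes - repeated all-or-nothing assignment combined with an averaging rule - can be sketched in a few lines. The toy example below is purely illustrative: the network, the demand, the BPR-type volume-delay function, and the simple \(1/n\) averaging rule (the method of successive averages, described just below) are all assumptions made for the sketch, not data or choices from the study.

```python
import heapq

# Toy network: link -> (free-flow time t0 [min], capacity [veh/h]).
links = {("A", "B"): (10.0, 300.0), ("A", "C"): (15.0, 500.0),
         ("B", "D"): (12.0, 300.0), ("C", "D"): (10.0, 500.0)}
demand = {("A", "D"): 600.0}          # steady-state OD rate [veh/h]

def travel_time(t0, cap, flow):
    # BPR-type volume-delay function: monotonically increasing in flow.
    return t0 * (1.0 + 0.15 * (flow / cap) ** 4)

def fastest_path(costs, orig, dest):
    """Dijkstra on the current link costs; returns the links of a fastest path."""
    adj = {}
    for (u, v), c in costs.items():
        adj.setdefault(u, []).append((v, c))
    heap, prev = [(0.0, orig, None)], {}
    while heap:
        d, u, p = heapq.heappop(heap)
        if u in prev:
            continue
        prev[u] = p
        for v, c in adj.get(u, []):
            if v not in prev:
                heapq.heappush(heap, (d + c, v, u))
    path, node = [], dest
    while prev[node] is not None:
        path.append((prev[node], node))
        node = prev[node]
    return path[::-1]

flows = {a: 0.0 for a in links}
for n in range(1, 51):                # method of successive averages
    costs = {a: travel_time(*links[a], flows[a]) for a in links}
    aon = {a: 0.0 for a in links}     # all-or-nothing assignment
    for (o, d), q in demand.items():
        for a in fastest_path(costs, o, d):
            aon[a] += q
    flows = {a: (1 - 1 / n) * flows[a] + aon[a] / n for a in links}

print({a: round(f, 1) for a, f in flows.items()})
```

A stochastic (Probit) variant would only add random disturbances to `costs` before each call to `fastest_path`, exactly as in the Monte Carlo step described next.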
The solution to this is again unique under the usually assumed problem formulation - especially, again, that link travel time is a monotonically increasing function of link flow. An algorithm similar to the Frank-Wolfe algorithm can be shown to be applicable. A possible implementation of this (for a Probit choice model) is:

* Given (in Step 1 of the deterministic assignment algorithm in Sec. 3.1) the current set of link travel times, compute a Monte Carlo version of an all-or-nothing assignment. Monte Carlo here means that, for each OD pair, we randomly disturb the link travel times according to the Probit distribution and only then calculate the fastest path.
* In Step 2 of the deterministic assignment algorithm in Sec. 3.1, instead of calculating the optimal combination between the two sets of assignments, just use \(1-1/n\) of the old set and \(1/n\) of the new set, where \(n\) is the number of the iteration (method of successive averages).

In other words: One uses the best current estimate of link travel times, but then uses a noisy version of this to calculate a new assignment; and the fraction of the old assignment to be replaced is set to \(1/n\).

## 4 Simulation-based assignment

As stated above, we want to solve a problem which is highly disaggregated and where the demand is given on a second-by-second basis. In addition, we assume that we are solving a real-world problem, which means, among other things, that in principle we can get arbitrarily realistic network information. There is by now some agreement that such problems can be approached with detailed micro-simulations. That is, once we have plans - which include starting times and exact routes - for each traveler, we can just feed this into a micro-simulation and extract any performance measure, such as time-dependent link travel times, from the simulation. Based on these performance measures, we can change the plans of some or all of the travelers, re-run the micro-simulation, etc., until some kind of relaxation criterion is met.

Let us introduce some minimal notation. Each user \(u\) chooses a route \(r_{u}\). Given \(N\) users, the set of these routes is \(\vec{R}=(r_{1},\ldots,r_{N})\). Recall that each route includes a starting time. The resulting link costs are \(\vec{C}=(c_{1},\ldots,c_{L})\), where \(L\) is the number of links. \(\vec{C}\) depends on the time-of-day, i.e. \(\vec{C}=\vec{C}(t)\). In iterated micro-simulations, there are two transitions:

1. the simulation (also called network loading model): \[S:\vec{R}\rightarrow\vec{C}\,\] and
2. the route assignment: \[A:\vec{C}\rightarrow\vec{R}\.\]

Both mappings can be deterministic or stochastic. Our problem can then be formulated as an iterated map (Cascetta, 1989; Cascetta and Cantarella, 1991). One can see it as an iterated map both in the routes and in the costs: \(\vec{R}^{n}\rightarrow\vec{R}^{n+1}\) or \(\vec{C}^{n}\rightarrow\vec{C}^{n+1}\) (Bottom, in preparation; Bottom et al., 1998). A fixed point would be reached if, e.g., \(\vec{R}^{n+1}\equiv A\circ S(\vec{R}^{n})=\vec{R}^{n}\). If at least one of the mappings \(S\) or \(A\) is stochastic, then one cannot expect to reach a fixed point; however, one can hope to reach a steady-state density: \(p(\vec{R}^{n+1})=p(\vec{R}^{n})\). As a side remark, note that one can also formulate this as a continuous dynamical system by making \(n\) continuous; the mapping \(A\circ S\) would then be replaced by a differential equation. Such systems (e.g.
Friesz et al. (1994)), although related, are somewhat more removed from the topic of this paper, since iterated versions of dynamical systems can display vastly different dynamics from their continuous counterparts (e.g. Schuster (1995)).

The stochastic user equilibrium case could be modeled by assuming that we have as many "users" as we have routes for each OD relation \(rs\), together with a traffic stream strength \(q_{u}^{rs}\). The mapping \(S\) is then simply the typical link cost function. That is, link flows are the sum of all OD streams that pass over the current link, and link cost is a function of link flow. The mapping \(A:\vec{C}\to\vec{R}\) would come from the particular "re-planning" algorithm that was selected for the SUE problem, for example from a Monte Carlo assignment plus the method of successive averages. One can, for example, search for a fixed point in \(\vec{C}\), i.e. \(\vec{C}^{n+1}=\vec{C}^{n}\).

For the more general problem of time-dependent assignment, one would like to show similar things as one has shown for the equilibrium assignments: for example uniqueness, and an algorithm that is guaranteed to converge. Indeed, it can be shown (Cascetta, 1989; Cascetta and Cantarella, 1991) that under certain circumstances the mapping \(A\circ S:\vec{R}\to\vec{R}\) is ergodic, which means that any combination of feasible routes will eventually be used by the iterations. "Feasible routes" is a set of routes that brings the traveler from her starting location to her destination; in general, it is a finite set since one assumes that routes are loopless. This means that every combination of routes has a time-invariant probability \(p(\vec{R})\) to be used, and since the system is ergodic, in order to obtain mean values one can replace the phase-space average \(\langle X\rangle=\sum_{\vec{R}}p(\vec{R})\cdot X(\vec{R})\) by an iteration average \(\overline{X}^{T}=1/T\sum_{n=i}^{i+T}X(\vec{R}^{n})\), where \(n\) and \(i\) are iteration indices. In our view, the most critical condition for this to be true is that

* _either_ each feasible route has a probability larger than zero to be selected in the following time step (Cascetta, 1989) (\(*\))
* _or_ each set of feasible routes can be reached from any other set of feasible routes via a sequence of iterations (Cascetta and Cantarella, 1991). (\(**\))

For practical applications, however, the situation is more complicated. We want to point out three examples of possible problems with the analytical results. We and others have never observed these problems in the practice of simulation-based DTA; however, they indicate that systematic simulations or an improvement of the theory are necessary. The examples come from Palmer (1989), which is an introduction to the phenomenon of broken ergodicity, one of the possible problems one might face. The first two examples can be found in most textbooks on Statistical Physics.

* First, ergodicity is actually not enough to ensure that the phase space density (i.e. the space of all possible route sets) becomes uniform and stationary. For example, it would be ergodic to sort all feasible route sets into a sequence and only allow transitions along the sequence. A more stringent property called "mixing" is needed to cause any initial phase space distribution (which in our case is just a point: _one_ set of routes) to spread uniformly. (\(*\)) will ensure mixing, (\(**\)) will not.
* Second, ergodicity only says something about the infinite time limit; it might take much longer than the age of the universe for an actual ergodic system to do a good job of covering the phase space.
* Third, the system may show broken ergodicity. That is, the system may be quasi-ergodic in a _part_ of the phase space, with very little yet non-zero probability of escaping from that part. Our iterative assignment may be "stuck" with a particular type of solution for a very large number of iterations; if we do not run enough iterations, we will never see that there is another type of solution. Sometimes, one calls these states "meta-stable", but that word makes the situation sound less problematic than it potentially can be.

In consequence, in this paper we want to report simulation experiments with highly disaggregated DTAs in large realistic networks. First, we look into uniqueness of the simulation results. Second, we are interested in how "robust" our results are. We want to define "robustness" more in terms of common sense than in terms of a mathematical formalism. For this, we do not only want a single iterative process to "converge", but we want the result to be independent of any particular implementation. In consequence, we run many computational experiments, sometimes with variations of the same code, sometimes with totally different code, in order to see if any of our results are robust against these changes. Part of the robustness analysis is a validation, since we compare some results to field measurements, where available. Last, we will argue that there may be better ways to compare simulations than the typical link-by-link analysis, and show an example. Before we do all this, however, we need to describe our study set-up.

## 5 Context

### Dallas/Fort Worth Case study

The context of the work done for this paper is the so-called Dallas-Fort Worth case study of the TRANSIMS project (Beckman et al, 1997). Most of the details relevant for the present paper can also be found in Nagel and Barrett (1997). The purpose of the case study was to show that a micro-simulation based approach to transportation planning, such as that promoted by TRANSIMS, will allow analysis that is difficult or impossible with traditional assignment, such as measures of effectiveness (MOE) by sub-populations (stakeholder analysis), in a straightforward way. In the following we want to mention the most important details of the case study set-up; as said, more information can be found in Beckman et al (1997) and Nagel and Barrett (1997).

The underlying road network for the study (public transit was not considered) was a so-called focused network, which had 14751 mostly bi-directional links and 9864 nodes. Out of those, 6124 links and 2292 nodes represented _all_ roads in a 5 miles times 5 miles study area, whereas the network got considerably "thinner" with further distance from the study area.5 A picture of the focused network can be found in Nagel and Barrett (1997).

Footnote 5: Note that this "thinning out" of the network was not done in any systematic way and is explicitly _not_ recommended. It was an ad-hoc solution because more data was not available.

The TRANSIMS design specifies to use demographic data as input and generate, via synthetic households and synthetic activities, the transportation demand. The Dallas/Fort Worth case study was based on interim technology: part of the demand generation was not available then.
For that reason, we use a standard time-dependent OD matrix as a starting point, which is immediately broken down into individual trips. All trips are routed through the empty network, and only trips that go through our smaller study area are retained. This base set contains approx. 300 000 trips. Note that this defines a base set of trips for all subsequent studies presented in this paper: All trips thrown out before can no longer influence the result of the studies, although they may in reality. Again, more information can be found in Beckman et al (1997) and Nagel and Barrett (1997).

### The micro-simulations

The above procedure does not only generate a base set of trips, but also an initial set of routes (called _initial planset_). These routes are then run through a micro-simulation, where each individual route plan is executed subject to the constraints posed by the traffic system (e.g. signals) and by other vehicles. Note that this implies that the micro-simulation is capable of executing pre-computed routes (only very few micro-simulations had this capability when this work was done, although their number is growing), and it also implies that, in the simulations, drivers do _not_ have the capability of changing their routing on-line.6 Three micro-simulations are used, all three related to the TRANSIMS project, but with different levels of realism and different intended usages. We will call them TR (for TRANSIMS micro-simulation), PA (for PAMINA), and QM (for Queue Model). TR is the most realistic one, QM the least realistic one of the three. The first two micro-simulations are based on the so-called cellular automata technique for traffic flow (Nagel et al. (1998) and references therein). The third one uses a simple queueing model (Gawron, 1998; Simon and Nagel, 1999).

Footnote 6: On-line re-routing is not incompatible with TRANSIMS technology (Rickert, 1998), but it has not generally been implemented and studied.

**The TRANSIMS micro-simulation (TR).** TR is the "mainstream" TRANSIMS micro-simulation. As said above, it is the most realistic of the three, including elements such as number of lanes, speed limits, signal plans, weaving and turn pockets, lane changing both for vehicle speed optimization and for plan following, etc. The studies described in this paper were run on five coupled Sun Sparc 5 workstations, which ran the micro-simulation on the given problem as fast as real time; newer versions of this micro-simulation also run on a SUN Enterprise 4000. Details of TR can be found in Nagel et al. (1997) and in TRANSIMS (since 1992).

**The PAMINA micro-simulation (PA).** The second micro-simulation, PA, uses simplified signal plans, and it includes neither pocket lanes nor lane changing for plan following. Most other specifications are the same as for TR, although differences can be caused by the different implementation. PA is much better optimized for high computing speed: it ran more than 20 times faster than TR for this study, which is a combined effect of using faster hardware (it is much easier to port to different hardware, thus being able to take advantage of new and faster hardware much sooner), less realism, and an implementation oriented towards computational speed. This micro-simulation is documented in Rickert (1998), Gawron et al. (1997), and Rickert and Nagel (1997).

**The Queue Model micro-simulation (QM).** The QM micro-simulation uses simple FIFO queues for the link exits. These queues have a service rate equivalent to the link capacity.
The main difference from other queueing models, e.g. Simao and Powell (1992), is that in our model each link has a limited "storage capacity", representing the number of vehicles that can sit on the link at jam density. This results in the capability to model queue spill-back across intersections, a very important feature of congested traffic. When a car enters a link at time \(t_{enter}\), an expected link travel time, \(T_{free}\), is calculated using the length and the free flow speed of the link. The vehicle is then put into the queue, together with a time \(t_{dep1}=t_{enter}+T_{free}\) which marks the earliest possible departure at the other end of the link. In each time step, the queue is checked to see if the first vehicle can leave according to \(t_{dep1}\), according to the capacity constraints, and according to the storage constraints of the destination link. The queue is served until one of these conditions is not fulfilled. The spirit of the model is also similar to earlier versions of INTEGRATION (INTEGRATION, 1994). For further details on QM, see Simon and Nagel (1999). The reason for having a model like this is that we want a micro-simulation model that fits into the overall TRANSIMS framework (i.e. runs on individual, pre-computed plans) but has much lower computational and data requirements than the other simulation models. Indeed, QM runs on the same data as traditional assignment models, and on a single CPU it is computationally a factor of 20 faster than PA. A parallel version is planned.

### Router

Our micro-simulations run on precomputed route plans, i.e. on a link-by-link list which connects the starting point with the destination. For our studies, we use a time-dependent Dijkstra fastest path algorithm. Link travel times are, during the simulation, averaged into 15-minute bins. These 15-minute bins give the link costs for the Dijkstra algorithm (Jacob et al., in press). The Dallas study makes no attempt to include alternative modes of transportation, such as walking, bicycle, or public transit. We want to mention here that, in earlier versions, we randomly disturbed link travel times by \(\pm 30\%\) in order to spread out the traffic. This would be very similar to what some implementations of Stochastic User Equilibrium (SUE) assignment (Sec. 3.2) do. We found, however, that this led to many undesirable paths, for example cars leaving the freeway and re-entering at the same entry/exit. In general, it is rather difficult to find "reasonable" path alternatives different from the optimal path (Park and Rilett, 1997). We would therefore expect that the standard SUE approach, when applied to large networks, would also display such unrealistic artifacts.

### Feedback iterations and re-planning

The initial planset is obviously wrong during heavy traffic because drivers have not adjusted to the occurrence of congestion. In reality, drivers avoid heavily congested segments if they can. We model that behavior by using _iterative re-planning_: The micro-simulation is run on a pre-computed planset and travel times along links are collected. Then, for a certain fraction, \(f\), of the drivers, new routes are computed based on these link travel times.
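In outline, this feedback loop looks as follows; the sketch is generic, with `simulate` and `reroute` standing in as placeholders for the micro-simulation and router modules, and the stopping rule on total vehicle time traveled being an illustrative choice rather than the criterion used in the study. The concrete, file-based procedure is described next.

```python
import random

def relaxation(plans, simulate, reroute, f=0.1, max_iter=50, tol=0.01):
    """Iterative re-planning: run the micro-simulation, then re-route a
    fraction f of the drivers on the resulting link travel times.

    plans:    traveler id -> route plan (includes starting time/location)
    simulate: plan set -> (time-dependent link travel times, total VTT)
    reroute:  (old plan, link travel times) -> new fastest-path plan
    """
    prev_vtt = None
    for n in range(max_iter):
        link_times, vtt = simulate(plans)      # network loading, mapping S
        if prev_vtt is not None and abs(vtt - prev_vtt) < tol * prev_vtt:
            break                              # "relaxed" up to fluctuations
        prev_vtt = vtt
        for tid in plans:                      # route assignment, mapping A
            if random.random() < f:            # re-planning fraction f
                plans[tid] = reroute(plans[tid], link_times)
    return plans
```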
Technically, each route from the old planset is read in; with probability \(1-f\) it is written unchanged into a new file, and with probability \(f\) a new route is computed, given the starting time, starting location, and destination location from the old route plus the time-dependent link travel times provided by the last iteration of the micro-simulation. In consequence, \(f\) is the re-planning fraction. After this, the micro-simulation is run again on the new planset, more drivers are re-routed, etc., until the system is "relaxed", i.e. no further changes are observed from one iteration to the next except for fluctuations (all three micro-simulations are stochastic).

## 6 Uniqueness

By uniqueness we refer to the property that the same combination of router and micro-simulation, with the same input data, generates via the relaxation procedure the same traffic scenario. This still allows for differences in the random seeds, in the way one selects travelers for re-planning, etc. We have run (Rickert, 1998) many relaxation experiments with different re-planning mechanisms, including an incremental network loading over 20 iterations, a very slow iteration series with 1% re-planning fraction, and different ways of picking the travelers that are re-planned. We have not found any indication that any of those relaxations ran into a traffic scenario that was different from the other ones. This means that in spite of the cautionary note in Sec. 4 regarding broken ergodicity etc., practical implementations of DTA seem to be well-behaved in this regard. This is consistent with observations from other groups (e.g. Wagner (personal communication)) and also from experiments involving human subjects (Mahmassani et al., 1986).

As a side remark, it may be worth mentioning that the best-performing relaxation method was similar to the method of successive averages (MSA). What we call "age-dependent re-planning" (Rickert, 1998) started in practice with a 30% re-planning fraction, which slowly decreased to 5% in the 20th iteration. MSA by comparison uses \(1/n\) as the re-planning fraction, where \(n\) is the iteration number. Clearly, MSA also interpolates from high re-planning fractions at the beginning to 5% at the 20th iteration. However, age-dependent re-planning also moves through the population in a systematic way, which is more than what MSA does. More systematic comparisons between these methods should be tried.

## 7 Variability

Note that any given iteration \(n\) corresponds to a certain set of route plans \(\vec{R}^{n}\). For that reason, one can just re-run the simulation of these route plans, i.e. just the mapping \(S\). If one uses a different random seed, this leads to a different traffic scenario. _These_ differences (not to be confused with possible differences caused by broken ergodicity) could be quite large in our experiments. An example can be found in Nagel (1998), which shows link density plots for two simulations of exactly the same route plan sets but with different random seeds. The micro-simulation that was used was the TRANSIMS micro-simulation, i.e. TR. In one simulation (the "exceptional" traffic pattern), vehicles were unable to get off a freeway fast enough, blocking the freeway and causing queue spill-back through a significant part of the network. In the other simulation (the "generic" traffic pattern, which we also found with many other random seeds), this heavy queue spill-back did not occur.
We have repeated these variability investigations with a Portland (Oregon) scenario and a different micro-simulation - indeed, the QM queue micro-simulation from this paper. During those investigations, we found that such strong variations depend, as one might expect, heavily on the congestion level: They do not occur for low demand but become more and more frequent when demand rises (B. Raney et al, unpublished).

An interesting comparison can be made between the sources and handling of noise in Stochastic User Equilibrium (SUE) assignment and our method. SUE assignment has at every iteration a "best estimate" of link travel times. A possible implementation of SUE assignment is to take a deliberately randomized realization of those link travel times, to re-route a fraction of the population on those, and then to take this new combined route set and to compute the new resulting link travel times. Instead of deliberately randomizing our best estimate of link travel times, we use a stochastic micro-simulation which on its own generates variability of link travel times. The advantage of our method may be that the noise is actually _directly_ generated by the traffic system dynamics - one should therefore assume that, for example, correlations between links will be considerably more realistic than with the parametrized noise approach of the SUE assignment. In contrast to SUE, however, we use the _same_ random realization for _all_ re-planned routes. If one route is, via a fluctuation, fast in this realization, it will be fast for all OD pairs and thus attract a considerable amount of new traffic. This causes local oscillations, which are avoided in the SUE approach. However, remember that we would expect unrealistic routes with some SUE implementations, see Sec. 5.3.

## 8 Robustness and Validation

As stated above, we mean by robustness the reproducibility of results under different implementations. Discussion of driving rules (mostly car following, lane changing, and gap acceptance) is a necessary part of this, but it is not sufficient and somewhat misleading since it does not put enough emphasis on the actual traffic outcome of the simulation. We propose at least two "macroscopic" tests:

1. "Building block tests": Test simple situations, such as traffic in a closed loop, unprotected turn flows, etc. See Nagel et al. (1997) for a discussion of this.
2. "Real situation tests": Compare the results of different micro-simulations under the same scenario. This is the topic of this section.

[Figure 2 about here.]

[Figure 3 about here.]

[Figure 4 about here.]

### Visual comparison of link densities

In our case, we did the same re-planning scenario, as described in Sec. 5.4, with three different micro-simulations, as described in Sec. 5.2. Visual comparisons of typical relaxed traffic patterns at 8:00am are shown in Figs. 2 to 4. In our view, there is a remarkable degree of "structural" similarity between the plots. This becomes particularly clear if one compares where the simulations predict bottlenecks, which are in general at the downstream end of congested pieces.

[Figure 5 about here.]

[Figure 6 about here.]

[Table 1 about here.]

### Quantitative comparison of approach counts; validation

A quantitative analysis of the above results would be useful, but is beyond the scope of this paper. Instead, we want to turn to link exit volume results for a smaller number of links, which have the advantage that comparison data from reality is available.
Note that the field data is from 1996, whereas the demand for our simulations is from 1990. Thus, one would expect that there is more traffic in the field data. Fig. 5 shows a graphical comparison for the TR micro-simulation result. In general, it seems that we are underestimating traffic, as we had expected. However, we have a tendency to overpredict traffic on low priority links. This is probably because our router is based on travel time only, and does not include other relevant measures such as convenience.

Fig. 6 compares exit counts for all links that we had field data for. The links are sorted according to increasing flow in the field data. Indeed, both PA and QM are somewhat underestimating the flows, although TR on average does not do so. This was also the result of a wider comparison using other data (Beckman et al, 1997). We also include results from an assignment done by the local transportation planning authority, the North-Central Texas Council of Governments, NCTCOG. Unfortunately, the NCTCOG assignment data and the field data are not really comparable since the NCTCOG assignment was made for a different network than our simulations and the field measurements. In particular, the north extension of the Dallas North Tollway had not been built, which is the freeway extending from the interchange in the center of the study area to the north. A result of this is that in the NCTCOG assignment the freeway connects to a frontage road. This is what leads to the high assigned volume for link 32. One should recognize that this is a physically impossible solution since a signalized 3-lane road cannot carry 6000 vehicles per hour. A simulation-based method would have generated a totally different result in the same situation.

Table 1 gives a quantitative summary of the same data. What is shown is the mean relative deviation from the field value, i.e.

\[dev=\frac{1}{N}\sum_{a}\frac{|x_{a}^{field}-x_{a}^{sim}|}{x_{a}^{field}}\;,\]

where the sum goes over all links \(a\) in a class, \(N\) is the number of links in that class, and \(x_{a}^{field}\) and \(x_{a}^{sim}\) are the volume counts from the field data and from the simulation, respectively. Thus, a low value of \(dev\) means a small relative difference to reality. According to the table, classes go from 0 to 250 vehicles per hour, from 251 to 500 vehicles per hour, etc. The table first gives the class, then the number of count sites available for this class, then the values of \(dev\) for the different models. This is followed by data for the NCTCOG assignment, where another column with the number of count sites is given since the NCTCOG results were only available for a smaller number of links (that assignment was run on a reduced network).

In spite of the difference regarding the freeway extension between the NCTCOG result and our simulation results, we believe that the comparison between our results and the NCTCOG assignment allows the following two conclusions:

1. Our simulation-based results, already at this early stage of the technology development, yield forecast quality which is comparable to traditional assignment.
2. The three micro-simulations generate results that are remarkably similar in structure. This indicates that demand generation research is currently at least as important as micro-simulation research.
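As a small worked example, the deviation measure just defined is straightforward to compute per flow class. The sketch below assumes plain arrays of hourly counts and the class boundaries of Table 1; the example numbers are made up for illustration.

```python
import numpy as np

def mean_relative_deviation(field, sim, bounds=(250, 500, 750, 1000, 1500)):
    """Class-wise dev = <|x_field - x_sim| / x_field>, as in Table 1.

    field, sim: hourly approach volumes per count site (same ordering);
    bounds:     upper class boundaries in vehicles per hour.
    """
    field = np.asarray(field, dtype=float)
    sim = np.asarray(sim, dtype=float)
    classes = np.digitize(field, bounds)   # 0: <250, 1: 250-499, ..., 5: >=1500
    dev = {}
    for c in range(len(bounds) + 1):
        mask = classes == c
        if mask.any():
            dev[c] = np.mean(np.abs(field[mask] - sim[mask]) / field[mask])
    return dev

# Example with made-up counts:
print(mean_relative_deviation([120, 300, 900, 1600], [180, 260, 700, 1500]))
```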
### Comparisons using accessibility

Often, transportation engineers use aggregated measures of system performance (Measures of Effectiveness, MOEs) such as vehicle miles traveled (VMT), the sum of all traveled distances in the system. A similar measure is the "geometrical" mean speed, which is a measure of accessibility. In our situation, we collected, for all vehicles with their origin outside the study area and their destination inside the study area, their geometrical distance, \(d\), between origin and destination, and their travel time, \(T\). \(d/T\) is then the "geometrical" speed for a traveler, a measure of how fast she makes progress towards her destination. We see (Fig. 7) that during the rush hour, the results of TR, PA, and QM are practically identical. In uncongested situations, PA predicts faster travel than QM, which predicts faster travel than TR. This effect can be traced back to the fact that (due to an implementation error) the maximum average speed in TR was 75 km/h (47 MPH); in PA it was the correct design value of 103 km/h (64 MPH); in QM it was set to the average free flow speed, which is slightly lower than PA's average speed limit. Therefore, in uncongested situations, the predicted travel times clearly have to be systematically different.

This indicates that aggregated measures can be considerably more robust than more disaggregated ones. In order to economize resources, one should therefore pay close attention to the question at hand - quite possibly, available models can give a robust answer for that question even when they fail to reproduce reality on a link-by-link basis. This is, however, naturally also a question for further research: Under what circumstances can we trust such aggregate measures even when the simulation results are not close to reality on a more disaggregated level?

## 9 Summary and discussion

In this paper, we reported computational experiments with large-scale dynamic traffic assignments (DTA) in the context of a Dallas scenario. An important difference of our approach to many other investigations is that our approach is completely disaggregated, i.e. we treat individual travelers from the beginning to the end. We also used a relatively large network, with 6124 links, where the restriction on the network size came from data availability, not from the capabilities of our methods. We pointed out that although some theory is available for DTAs, this theory needs to be used with care for typical simulation-based DTA scenarios with a small number of iterations. For that reason, computational experiments remain a necessity.

We started by looking for indications of non-uniqueness of the solution, i.e. that different set-ups of the iterations could lead to different relaxed traffic scenarios. We did not find any indication that this had happened in our situation. We did, however, find instances of very strong variability of the simulation itself, i.e. the mapping \(S\) from route plans to traffic, which is stochastic. The reason for this is that the links are not independent; a queue which is caused by a "normal" fluctuation may spill back through large parts of the system.

We then moved to the issue of "robustness", by which we mean that different implementations should yield comparable results in the same scenarios. In consequence, we implemented three different micro-simulations and ran them with the same input data and the same re-planning algorithms.
Comparisons between those results, and also to field data and to a traditional assignment result, indicate that (1) simulation-based assignment is, already at the current stage of research, of similar quality as traditional (equilibrium) assignment, and (2) contributions to deviations from field data probably come as much from the demand generation as from the micro-simulations. We concluded by arguing that a link-by-link comparison of performances is not necessarily what one wants in order to evaluate a result. As an example, we showed a curve for accessibility of a certain area in the micro-simulation as a function of time-of-day, and we pointed out that for the critical part of the morning, which is the rush hour, the curves for the three simulation methods are practically identical.

## Acknowledgments

KN thanks the Niels Bohr Institute in Copenhagen/Denmark for hospitality during the time when this paper was completed. Many thanks to the North-Central Texas Council of Governments (NCTCOG), especially Ken Cervenka, for preparing and providing the data. Los Alamos National Laboratory is operated by the University of California for the U.S. Department of Energy under contract W-7405-ENG-36 (LA-UR 98-2168).

## References

* Beckman, R. J., et al, 1997. The Dallas-Fort Worth case study. Los Alamos Unclassified Report (LA-UR) 97-4502, see transims.tsasa.lanl.gov.
* Ben-Akiva, M., Lerman, S. R., 1985. Discrete choice analysis. The MIT Press, Cambridge, MA.
* Bottom, J., Ben-Akiva, M., Bierlaire, M., Chabini, I., 1998. Generation of consistent anticipatory route guidance. In: Proceedings of TRISTAN III, vol. 2. San Juan, Puerto Rico.
* Bottom, J. A., in preparation. Ph.D. thesis, Massachusetts Institute of Technology, Cambridge, MA.
* Cascetta, E., 1989. A stochastic process approach to the analysis of temporal dynamics in transportation networks. Transportation Research B, 23B(1), 1-17.
* Cascetta, E., Cantarella, C., 1991. A day-to-day and within day dynamic stochastic assignment model. Transportation Research A, 25A(5), 277-291.
* DYNAMIT, 1999. DYNAMIT. Massachusetts Institute of Technology, Cambridge, Massachusetts. See its.mit.edu.
* Friesz, T. L., Bernstein, D., Mehta, N. J., Tobin, R. L., Ganjalizadeh, S., 1994. Day-to-day dynamic network disequilibria and idealized traveler information systems. Operations Research, 42(6), 1120-1136.
* Gawron, C., 1998. An iterative algorithm to determine the dynamic user equilibrium in a traffic simulation model. International Journal of Modern Physics C, 9(3), 393-407.
* Gawron, C., Rickert, M., Wagner, P., 1997. Real-time simulation of the German autobahn network. In: Proc. of the 4th Workshop on Parallel Systems and Algorithms (PASA '96), edited by F. Hossfeld, E. Maehle, E. Mayer. World Scientific Publishing Co.
* INTEGRATION, 1994. INTEGRATION: A model for simulating IVHS in integrated traffic networks, User's guide for model version 1.5e. Transportation Systems Research Group, Queens' University and M. Van Aerde and Associates, Ltd.
* Jacob, R. R., Marathe, M. V., Nagel, K., in press. A computational study of routing algorithms for realistic transportation networks. ACM Journal of Experimental Algorithms. See www.inf.ethz.ch/~nagel/papers.
* Mahmassani, H., Chang, G.-L., Herman, R., 1986.
Individual decisions and collective effects in a simulated traffic system. Transportation Science, 20(4), 258.
* Mahmassani, H., Hu, T., Jayakrishnan, R., 1995. Dynamic traffic assignment and simulation for advanced network informatics (DYNASMART). In: Urban traffic networks: Dynamic flow modeling and control, edited by N. Gartner, G. Improta. Springer, Berlin/New York.
* Nagel, K., 1998. Experiences with iterated traffic microsimulations in Dallas. In: Traffic and granular flow '97, edited by D. Wolf, M. Schreckenberg, pages 199-214. Springer, Heidelberg.
* Nagel, K., Barrett, C., 1997. Using microsimulation feedback for trip adaptation for realistic traffic in Dallas. International Journal of Modern Physics C, 8(3), 505-526.
* Nagel, K., Stretz, P., Pieck, M., Leckey, S., Donnelly, R., Barrett, C. L., 1997. TRANSIMS traffic flow characteristics. Los Alamos Unclassified Report (LA-UR) 97-3530, see www.inf.ethz.ch/~nagel/papers. Earlier version: Transportation Research Board Annual Meeting paper 981332.
* Nagel, K., Wolf, D., Wagner, P., Simon, P. M., 1998. Two-lane traffic rules for cellular automata: A systematic approach. Physical Review E, 58(2), 1425-1437.
* Palmer, R., 1989. Broken ergodicity. In: Lectures in the Sciences of Complexity, edited by D. L. Stein, vol. I of Santa Fe Institute Studies in the Sciences of Complexity, pages 275-300. Addison-Wesley.
* Park, D., Rilett, L. R., 1997. Identifying multiple and reasonable paths in transportation networks: A heuristic approach. Transportation Research Records, 1607, 31-37.
* Patriksson, M., 1994. The Traffic Assignment Problem: Models and Methods. Topics in Transportation. VSP, Zeist, The Netherlands.
* Rickert, M., 1998. Traffic simulation on distributed memory computers. Ph.D. thesis, University of Cologne, Germany. See www.zpr.uni-koeln.de/~mr/dissertation.
* Rickert, M., Nagel, K., 1997. Experiences with a simplified microsimulation for the Dallas/Fort Worth area. International Journal of Modern Physics C, 8(3), 483-504.
* Schuster, H. G., 1995. Deterministic Chaos: An Introduction. Wiley-VCH Verlag GmbH.
* Sheffi, Y., 1985. Urban transportation networks: Equilibrium analysis with mathematical programming methods. Prentice-Hall, Englewood Cliffs, NJ, USA.
* Simao, H., Powell, W., 1992. Numerical methods for simulating transient, stochastic queueing networks. Transportation Science, 26, 296.
* Simon, P. M., Nagel, K., 1999. Simple queueing model applied to the city of Portland. International Journal of Modern Physics C, 10(5), 941-960. Earlier version: Transportation Research Board Annual Meeting paper 99 12 49.
* TRANSIMS, since 1992. TRANSIMS, TRansportation ANalysis and SIMulation System. See transims.tsasa.lanl.gov.
* Wagner, P., personal communication.
Figure 1: Sum of all trip times (system-wide Vehicle Time Travelled) for different iteration series. For details see Rickert (1998). All iteration series relax towards the same VTT, which indicates (but does not prove) that all series relax towards the same traffic scenario.

Figure 2: TRANSIMS 14b 8:00 AM

Figure 3: PAMINA 8:00 AM

Figure 4: QM 8:00 AM

Figure 5: Approach volumes from the TRANSIMS micro-simulation series compared to field data. The wide gray bars are the simulation results; the narrow black bars are the field data. The freeway intersection in the bottom left corner of this figure is in the center of the study area, as can be seen in Figs. 2 and 3.

Figure 6: Approach volumes for the links shown in Fig. 5. The links are sorted according to increasing flow in the field data, which is denoted by the wide line. The different points in different shades of gray denote simulation results as denoted in the legend; the black line shows the NCTCOG assignment result. Lines are included to guide the eye.

Figure 7: "Geometrical" mean speeds into the study area, i.e. geometrical distance between origin and destination divided by trip time. The \(x\)-axis shows the starting time for the trips.

\begin{table}
\begin{tabular}{r c c c c c c}
\hline \hline
Class & \(N\) & TR & PA & QM & \(N_{COG}\) & NCTCOG \\
\hline
\(\leq 250:\) & 7 & 0.960294 & 1.217647 & 1.391176 & & \\
\(251-500:\) & 7 & 1.004520 & 0.639925 & 0.577778 & 6 & 0.774523 \\
\(501-750:\) & 5 & 0.409052 & 0.354857 & 0.347362 & 3 & 0.346680 \\
\(751-1000:\) & 3 & 0.317941 & 0.252839 & 0.218395 & 3 & 0.348600 \\
\(1001-1500:\) & 11 & 0.245609 & 0.325389 & 0.307282 & 8 & 0.620177 \\
\(\geq 1501:\) & 6 & 0.226150 & 0.271165 & 0.370695 & 5 & 0.745831 \\
\hline
\end{tabular}
\end{table}

Table 1: Comparison between field data and simulation results. Shown is the average relative error, i.e. \(\langle|N_{field}-N_{sim}|/N_{field}\rangle\), where the averages are separate for different flow levels.
The first column shows the boundaries of the classes, in vehicles per hour. The second column shows the number of entries per class. The third, fourth, and fifth columns show results for the different micro-simulations, TR, PA, and QM, respectively. The sixth column again shows the number of entries per class, this time for the NCTCOG assignment; the seventh column shows the results for the NCTCOG assignment. For the TR, PA, QM, and NCTCOG columns, lower values are better.
Iterating between a router and a traffic micro-simulation is an increasingly accepted method for doing traffic assignment. This paper, after pointing out that the analytical theory of simulation-based assignment to date is insufficient for some practical cases, presents results of simulation studies from a real-world study. Specifically, we look into the issues of uniqueness, variability, and robustness and validation. Regarding uniqueness, despite some cautionary notes from a theoretical point of view, we find no indication of "meta-stable" states in the iterations. Variability, however, is considerable. By variability we mean the variation of the simulation of a given plan set caused by just changing the random seed. We then show results from three different micro-simulations under the same iteration scenario in order to test for the robustness of the results under different implementations. We find the results encouraging, also in comparison with reality and with a traditional assignment result.
# Nuclear Matter and its Role in Supernovae, Neutron Stars and Compact Object Binary Mergers

James M. Lattimer and Madappa Prakash

Dept. of Physics & Astronomy, State University of New York at Stony Brook, Stony Brook, NY 11794-3800

###### keywords: Nuclear Matter; Supernovae; Neutron Stars; Binary Mergers

PACS: 26.50.+x, 26.60.+c, 97.60.Bw, 97.60.Jd, 97.80.-d

+ Footnote †: Partially supported by USDOE Grants DE-AC02-87ER40317 and DE-FG02-88ER-40388, and by NASA ATP Grant # NAG 52863.

## 1 Introduction

The equation of state (EOS) of dense matter plays an important role in the supernova phenomenon and in the structure and evolution of neutron stars. Matter in the collapsing core of a massive star at the end of its life is compressed from white dwarf-like densities of about \(10^{6}\) g cm\({}^{-3}\) to two or three times the nuclear saturation density, about \(3\cdot 10^{14}\) g cm\({}^{-3}\) or \(n_{s}=0.16\) baryons fm\({}^{-3}\). The central densities of neutron stars may range up to 5-10 \(n_{s}\). At densities around \(n_{s}\) and below, matter may be regarded as a mixture of neutrons, protons, electrons and positrons, neutrinos and antineutrinos, and photons. At higher densities, additional constituents, such as hyperons, kaons, pions and quarks, may be present, and there is no general consensus regarding the properties of such ultradense matter. Fortunately for astrophysics, however, the supernova phenomenon and many aspects of neutron star structure may not depend upon ultradense matter, and this article will focus on the properties of matter at lower densities.

The main problem is to establish the state of the nucleons, which may be either bound in nuclei or be essentially free in continuum states. Neither temperatures nor densities are large enough to excite degrees of freedom such as hyperons, mesons or quarks. Electrons are rather weakly interacting and may be treated as an ideal Fermi gas: at densities above \(10^{7}\) g cm\({}^{-3}\), they are relativistic. Because of their even weaker interactions, photons and neutrinos (when they are confined in matter) may also be treated as ideal gases. At low enough densities and temperatures, and provided the matter does not have too large a neutron excess, the relevant nuclei are stable in the laboratory, and experimental information may be used directly. The so-called Saha equation may be used to determine their relative abundances.

Under more extreme conditions, there are a number of important physical effects which must be taken into account. At higher densities, or at moderate temperatures, the neutron chemical potential increases to the extent that the density of nucleons outside nuclei can become large. It is then important to treat matter outside nuclei in a fashion consistent with that inside. These nucleons will modify the nuclear surface, decreasing the surface tension. At finite temperatures, nuclear excited states become populated, and these states can be included by treating nuclei as warm drops of nuclear matter. At low temperatures, nucleons in nuclei are degenerate and Fermi-liquid theory is probably adequate for their description. However, near the critical temperature, above which the dense phase of matter inside nuclei can no longer coexist with the lighter phase of matter outside nuclei, the treatment of the equilibrium of the two phases of matter is crucial.
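As an illustration of the Saha approach mentioned above, consider its simplest application: the equilibrium abundance of a single two-body bound state. The sketch below works out the deuteron (n + p in equilibrium with d) for nondegenerate, nonrelativistic nucleons; the chosen temperature and density, and the neglect of all other species, are illustrative assumptions only.

```python
import numpy as np

HBARC = 197.327                           # MeV fm
M_N, M_P, B_D = 939.565, 938.272, 2.224   # nucleon masses, d binding [MeV]

def saha_deuteron_ratio(T):
    """n_d / (n_n n_p) in fm^3 at temperature T (MeV) from the Saha equation.

    Chemical equilibrium mu_d = mu_n + mu_p for nondegenerate Boltzmann gases
    gives n_d/(n_n n_p) = (g_d/(g_n g_p)) (2 pi hbar^2 m_d/(m_n m_p T))^(3/2)
    exp(B_d/T), with statistical weights g_n = g_p = 2 and g_d = 3.
    """
    m_d = M_N + M_P - B_D
    prefac = (3.0 / 4.0) * (2.0 * np.pi * HBARC**2 * m_d
                            / (M_N * M_P * T)) ** 1.5
    return prefac * np.exp(B_D / T)

# Example: T = 2 MeV, n = 1e-5 fm^-3, proton fraction x = 0.5; depletion of
# the free nucleons is ignored, which is acceptable only while X_d << 1.
T, n = 2.0, 1.0e-5
n_n = n_p = 0.5 * n
X_d = 2.0 * saha_deuteron_ratio(T) * n_n * n_p / n   # deuteron mass fraction
print(X_d)   # ~ 5e-2
```

Heavier nuclei enter nuclear statistical equilibrium with analogous factors, one power of the thermal wavelength cubed per captured nucleon and a Boltzmann factor of the binding energy.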
The fact that at subnuclear densities the spacing between nuclei may be of the same order of magnitude as the nuclear size itself will lead to substantial reductions in the nuclear Coulomb energy. Although finite temperature \"plasma\" effects will modify this, the zero-temperature Wigner-Seitz approximation employed by Baym, Bethe & Pethick (1) is usually adequate. Near the nuclear saturation density, nuclear deformations must be dealt with, including the possibilities of \"pasta-like\" phases and matter turning \"inside-out\" (_i.e._, the dense nuclear matter envelopes a lighter, more neutron-rich, liquid). Finally, the translational energy of the nuclei may be important under some conditions. This energy is important in that it may substantially reduce the average size of the nuclear clusters.

An acceptable way of bridging the regions of low density and temperature, in which the nuclei can be described in terms of a simple mass formula, and high densities and/or high temperatures, in which the matter is a uniform bulk fluid, is to use a compressible liquid droplet model for nuclei in which the drop maintains thermal, mechanical, and chemical equilibrium with its surroundings. This allows us to address both the phase equilibrium of nuclear matter, which ultimately determines the densities and temperatures in which nuclei are permitted, and the effects of an external nucleon fluid on the properties of nuclei. Such a model was originally developed by Lattimer _et al._ (2) and modified by Lattimer & Swesty (3). This work was a direct result of David Schramm's legendary ability to mesh research activities of various groups, in this case to pursue the problem of neutron star decompression. After the fact, the importance of this topic for supernovae became apparent.

## 2 Nucleon Matter Properties

The compressible liquid droplet model rests upon the important fact that in a many-body system the nucleon-nucleon interaction exhibits saturation. Empirically, the energy per particle of bulk nuclear matter reaches a minimum, about -16 MeV, at a density \\(n_{s}\\cong 0.16\\) fm\\({}^{-3}\\). Thus, close to \\(n_{s}\\), its density dependence is approximately parabolic. The nucleon-nucleon interaction is optimized for equal numbers of neutrons and protons (symmetric matter), so a parabolic dependence on the neutron excess or proton fraction, \\(x\\), can be assumed. About a third to a half of the energy change made by going to asymmetric matter is due to the nucleon kinetic energies, and to a good approximation, this varies as \\((1-2x)^{2}\\) all the way to pure neutron matter (\\(x=0\\)). The \\(x\\) dependence of the potential terms in most theoretical models can also be well approximated by a quadratic dependence. Finally, since at low temperatures the nucleons remain degenerate, their temperature dependence to leading order is also quadratic. Therefore, for analytical purposes, the nucleon free energy per baryon can be approximated, in MeV, as \\[f_{bulk}(n,x)\\simeq-16+S_{v}(n)(1-2x)^{2}+\\frac{K_{s}}{18}\\biggl{(}\\frac{n}{n_{s}}-1\\biggr{)}^{2}-\\frac{K^{\\prime}_{s}}{27}\\biggl{(}\\frac{n}{n_{s}}-1\\biggr{)}^{3}-a_{v}(n,x)T^{2}\\,, \\tag{1}\\] where \\(a_{v}(n)=(2m^{*}/\\hbar^{2})(\\pi/12n)^{2/3}\\).
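As a quick orientation to Eq. (1), the short sketch below tabulates \\(f_{bulk}\\) for symmetric and pure neutron matter near \\(n_{s}\\). The parameter values are illustrative choices from the ranges quoted in the next paragraph, a constant \\(S_{v}\\) stands in for the density-dependent \\(S_{v}(n)\\), and \\(a_{v}\\) is frozen at its saturation value; none of this is from the original analysis.

```python
# Illustrative parameter choices; the quoted empirical ranges are given below.
NS = 0.16        # saturation density, fm^-3
SV = 30.0        # symmetry energy coefficient, MeV (taken density-independent here)
KS = 220.0       # incompressibility K_s, MeV
KSP = 2000.0     # skewness K'_s, MeV
AV = 1.0 / 15.0  # level density parameter a_v at n_s (m*/m = 1), MeV^-1

def f_bulk(n, x, T=0.0):
    """Bulk free energy per baryon, Eq. (1), in MeV."""
    u = n / NS - 1.0
    return (-16.0 + SV * (1.0 - 2.0 * x)**2
            + (KS / 18.0) * u**2 - (KSP / 27.0) * u**3 - AV * T**2)

for n in (0.5 * NS, NS, 1.5 * NS):
    print(f"n = {n:5.3f} fm^-3:  f(x=1/2) = {f_bulk(n, 0.5):7.2f} MeV,"
          f"  f(x=0) = {f_bulk(n, 0.0):7.2f} MeV")
```

At \\(n=n_{s}\\) this returns \\(-16\\) MeV for symmetric matter and \\(+14\\) MeV for pure neutron matter, as the parabolic form requires.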
The expansion parameters, whose values are uncertain to varying degrees, are the incompressibility, \\(K_{s}=190-250\\) MeV, the skewness parameter \\(K^{\\prime}_{s}=1780-2380\\) MeV, the symmetry energy coefficient \\(S_{v}\\equiv S_{v}(n_{s})=25-36\\) MeV, and the bulk level density parameter, \\(a_{v}(n_{s},x=1/2)\\simeq(1/15)(m^{*}(n_{s},x=1/2)/m)\\) MeV\\({}^{-1}\\), where \\(m^{*}\\) is the effective mass of the nucleon. Values for \\(m^{*}(n_{s},x=1/2)/m\\) are in the range \\(0.7-0.9\\). The general definition of the incompressibility is \\(K=9dP/dn=9d(n^{2}df_{bulk}/dn)/dn\\), where \\(P\\) is the pressure, and \\(K_{s}\\equiv K(n_{s},1/2)\\). It is worthwhile noting that the symmetry energy and nucleon effective mass (which directly affects the matter's specific heat) are density dependent, but these dependencies are difficult to determine from experiments. The parameters, and their density dependences, characterize the nuclear force model and are essential to our understanding of astrophysical phenomena. The experimental determination of these parameters has come from comparison of the total masses and energies of giant resonances of laboratory nuclei with theoretical predictions. Some of these comparisons are easily illustrated with the compressible liquid droplet model.

In this model, the nucleus is treated as a uniform drop of nuclear matter with temperature \\(T\\), density \\(n_{i}\\) and proton fraction \\(x_{i}\\). The nucleus will, in general, be surrounded by and be in equilibrium with a vapor of matter with density \\(n_{o}\\) and proton fraction \\(x_{o}\\). At low ambient densities \\(n\\) and vanishing temperature, the outside vapor vanishes. Even at zero temperature, if \\(n\\) is large enough, greater than the so-called neutron drip density \\(n_{d}\\simeq 1.6\\cdot 10^{-3}\\) fm\\({}^{-3}\\), the neutron chemical potential of the nucleus is positive and \"free\" neutrons exist outside the nucleus. At finite temperature, the external vapor consists of both neutrons and protons. In addition, because of their high binding energy, \\(\\alpha\\)-particles will also be present. The total free energy density is the sum of the various components: \\[F=F_{H}+F_{o}+F_{\\alpha}+F_{e}+F_{\\gamma}\\,. \\tag{2}\\] Here, \\(F_{H}\\) and \\(F_{o}\\) represent the free energy densities of the heavy nuclei and the outside vapor, respectively. The energy densities of the electrons and photons, \\(F_{e}\\) and \\(F_{\\gamma}\\), are independent of the baryons and play no role in the equilibrium. For simplicity, we neglect the role of \\(\\alpha\\)-particles in the following discussion (although it is straightforward to include their effect (2)). In the compressible liquid drop model, it is assumed that the nuclear energy can be written as an expansion in \\(A^{1/3}\\) and \\((1-2x_{i})^{2}\\): \\[F_{H}=un_{i}[f_{bulk}+f_{surf}+f_{Coul}+f_{trans}]\\,, \\tag{3}\\] where the \\(f\\)'s represent free energies per baryon due to the bulk, surface, Coulomb, and translation, respectively. The bulk energy, for example, is given by Eq. (1).
The surface energy can be parametrized as \\[f_{surf}=4\\pi R^{2}\\sigma(x_{i},T)\\equiv 4\\pi R^{2}h(T)[\\sigma_{o}-\\sigma_{s}(1-2x_{i})^{2}]\\,, \\tag{4}\\] where \\(R\\) is the nuclear charge radius, \\(h(T)\\) is a calculable function of temperature, \\(\\sigma_{o}\\) is the surface tension of symmetric matter, and \\(\\sigma_{s}=(n_{i}^{2}/36\\pi)^{1/3}S_{s}\\) where \\(S_{s}\\) is the surface symmetry energy coefficient from the traditional mass formula. In this simplified discussion, the influence of the neutron skin (2), which distinguishes the \"drop model\" from the \"droplet model\", is omitted. The Coulomb energy, in the Wigner-Seitz approximation (1), is \\[f_{Coul}=0.6x_{i}^{2}A^{2}e^{2}D(u)/R\\,, \\tag{5}\\] where \\(D(u)=1-1.5u^{1/3}+0.5u\\) and \\(u\\) is the fraction of the volume occupied by nuclei. If the fractional mass of matter outside the nuclei is small, \\(u\\simeq n/n_{i}\\).

It is clear that additional parameters, \\(S_{s}\\) and another involving the temperature dependence of \\(h\\), exist in conjunction with those defining the expansions of the bulk energy. The temperature dependence is related to the matter's critical temperature \\(T_{c}\\) at which the surface disappears. It is straightforward to demonstrate from the thermodynamic relations defining \\(T_{c}\\), namely \\(\\partial P_{bulk}/\\partial n=0\\) and \\(\\partial^{2}P_{bulk}/\\partial n^{2}=0\\), that \\(T_{c}\\propto\\sqrt{K_{s}}\\). Therefore, the specific heat to be associated with the surface energy will in general be proportional to \\(T_{c}^{-2}\\propto K_{s}^{-1}\\). About half the total specific heat originates in the surface, so \\(K_{s}\\) influences the temperature for a given matter entropy, important during stellar collapse.

The equilibrium between nuclei and their surroundings is determined by minimizing \\(F\\) with respect to its internal variables, at fixed \\(n,Y_{e}\\), and \\(T\\). This is described in more detail in Refs. (2; 3), and leads to equilibrium conditions involving the pressure and the baryon chemical potentials, as well as a condition determining the nuclear size \\(R\\). The latter is analogous to the one found by Baym, Bethe & Pethick (1) who equated the nuclear surface energy with twice the Coulomb energy. The relations in Eqs. (4) and (5) lead to \\[R=\\left[\\frac{15\\sigma(x_{i})}{8\\pi e^{2}x_{i}^{2}n_{i}^{2}}\\right]^{1/3}. \\tag{6}\\]

Experimental limits to \\(K_{s}\\), most importantly from RPA analyses of the breathing mode of the giant monopole resonance (4), give \\(K_{s}\\cong 230\\) MeV. It is also possible to obtain values from the so-called scaling model developed from the compressible liquid drop model. The finite-nucleus incompressibility is \\[K(A,Z)=(M/\\hbar^{2})R^{2}E_{br}^{2}\\,, \\tag{7}\\] where \\(M\\) is the mass of the nucleus and \\(E_{br}\\) is the breathing-mode energy. \\(K(A,Z)\\) is commonly expanded as \\[K(A,Z)=K_{s}+K_{surf}A^{-1/3}+K_{vI}I^{2}+K_{surfI}I^{2}A^{-1/3}+K_{C}Z^{2}A^{-4/3}\\,, \\tag{8}\\] and then fit by least squares to the data for \\(E_{br}\\). Here the asymmetry \\(I=1-2Z/A\\). For a given assumed value of \\(K_{s}\\), and taking \\(K_{surfI}=0\\), Pearson (5) showed that experimental data gave \\[K_{C}\\simeq 15.4-0.065K_{s}\\pm 2\\ {\\rm MeV}\\,,\\quad K_{surf}\\simeq 230-3.2K_{s}\\pm 50\\ {\\rm MeV}\\,. \\tag{9}\\]
With minimal assumptions regarding the form of the nuclear force, Pearson (5) demonstrated that values of \\(K_{s}\\) ranging from 200 MeV to more than 350 MeV could be consistent with experimental data. But the liquid drop model predicts other relations between the parameters: \\[K(A,Z) = R^{2}\\frac{\\partial^{2}E(Z,A)/A}{\\partial R^{2}}\\Bigg{|}_{A}=9n^{2}\\frac{\\partial^{2}E(Z,A)/A}{\\partial n^{2}}\\Bigg{|}_{A}\\,,\\] \\[0=P(A,Z) = R\\frac{\\partial E(Z,A)/A}{\\partial R}\\Bigg{|}_{A}=3n\\frac{\\partial E(Z,A)/A}{\\partial n}\\Bigg{|}_{A}\\,. \\tag{10}\\] Here \\(E(Z,A)\\) is the total energy of the nucleus, and is equivalent to Eq. (3). The second of these equations simply expresses the equilibrium between the nucleus and the surrounding vacuum, which implies that the pressure of the bulk matter inside the nucleus is balanced by the pressure due to the curvature of the surface and the Coulomb energy. It can then be shown that \\[K_{C} = -(3e^{2}/5r_{o})[8+27n_{s}^{3}f^{\\prime\\prime\\prime}_{bulk}(n_{s})/K_{s}]\\,,\\] \\[K_{surf} = 4\\pi r_{o}^{2}\\sigma_{o}[9n_{s}^{2}\\sigma_{o}^{\\prime\\prime}/\\sigma_{o}+22+54n_{s}^{3}f^{\\prime\\prime\\prime}_{bulk}(n_{s})/K_{s}]\\,,\\] \\[K_{surfI} = 4\\pi r_{o}^{2}\\sigma_{s}[9n_{s}^{2}\\sigma_{s}^{\\prime\\prime}/\\sigma_{s}+22+54n_{s}^{3}f^{\\prime\\prime\\prime}_{bulk}(n_{s})/K_{s}]\\,,\\] \\[K_{I} = 9[n_{s}^{2}S_{v}^{\\prime\\prime}(n_{s})-2n_{o}S_{v}^{\\prime}(n_{s})-9n_{s}^{4}S_{v}^{\\prime}(n_{s})f^{\\prime\\prime\\prime}_{bulk}(n_{s})/K_{s}]\\,. \\tag{11}\\] Primes denote derivatives with respect to the density. From these relations, and again assuming \\(K_{surfI}=0\\), Pearson demonstrated that an interesting correlation between \\(K_{s}\\) and \\(K^{\\prime}_{s}\\), where \\(K^{\\prime}_{s}\\equiv-27n_{s}^{3}f^{\\prime\\prime\\prime}_{bulk}(n_{s})\\), could be obtained: \\[K^{\\prime}_{s}=-0.0860K^{2}_{s}+(28.37\\pm 2.65)K_{s}\\,. \\tag{12}\\] Assuming \\(K_{s}\\simeq 190-250\\) MeV, this suggests that \\(K^{\\prime}_{s}=1780-2380\\) MeV, a potential constraint. Alternatively, eliminating \\(K^{\\prime}_{s}\\), one finds \\[K_{s}=137.4-26.36n_{s}^{2}\\sigma_{o}^{\\prime\\prime}/\\sigma_{o}\\pm 23.2\\ {\\rm MeV}\\,. \\tag{13}\\] The second derivative of the surface tension can be deduced from Hartree-Fock or Thomas-Fermi semi-infinite surface calculations. For example, if a parabolic form of \\(f_{bulk}\\) is used, one finds \\[n_{s}^{2}\\sigma_{o}^{\\prime\\prime}/\\sigma_{o}=-6 \\tag{14}\\] leading to \\(K_{s}=295.5\\pm 23.2\\) MeV. In general, the density dependence of \\(S_{v}\\) will decrease the magnitudes of \\(K_{s}\\) and \\(\\sigma_{o}^{\\prime\\prime}\\) from the above values. It is hoped that current experimental work will tighten these constraints.

A shortcoming of the scaling model is that, to date, the surface symmetry energy term has been neglected. This is not required, however, and further work is necessary to resolve this matter. Because the surface energy represents the energy difference between uniformly and realistically distributed nuclear material in a nucleus, the parameter \\(S_{s}\\) can be related to the density dependence of \\(S_{v}(n)\\) and to \\(K_{s}\\). If \\(f_{bulk}\\) is assumed to behave quadratically with density around \\(n_{s}\\), this relation can be particularly simply expressed (6): \\[\\frac{S_{s}}{S_{v}}=\\frac{3}{\\sqrt{2}}\\frac{a_{1/2}}{r_{o}}\\int\\limits_{0}^{1}\\frac{\\sqrt{x}}{1-x}\\biggl{[}\\frac{S_{v}}{S_{v}(xn_{s})}-1\\biggr{]}dx. \\tag{15}\\]
Here, \\(S_{v}\\equiv S_{v}(n_{s})\\), \\(a_{1/2}=(dr/d\\ln n)_{n_{s}/2}\\) is a measure of the thickness of the nuclear surface and \\(r_{o}=(4\\pi n_{s}/3)^{-1/3}=R/A^{1/3}\\). If \\(S_{v}(n)\\) is linear, then the integral is 2; if \\(S_{v}(n)\\propto n^{2/3}\\), then the integral is 0.927. Since \\(a_{1/2}\\) will be sensitive to the value of \\(K_{s}\\), we expect the value of \\(S_{s}/S_{v}\\) to be also.

Experimentally, there are two major sources of information regarding the symmetry energy parameters: nuclear masses and giant resonance energies. However, because of the small excursions in \\(A^{1/3}\\) afforded by laboratory nuclei, each source provides only a correlation between \\(S_{s}\\) and \\(S_{v}\\). For example, the total symmetry energy in the liquid droplet model (now explicitly including the presence of the neutron skin, see Ref. (2)) is \\[E_{sym}=(1-2x_{i})^{2}S_{v}/[1+(S_{s}/S_{v})A^{-1/3}]. \\tag{16}\\] Evaluating \\(\\alpha=d\\ln S_{s}/d\\ln S_{v}\\) near the \"best-fit\" values \\(S_{s0}\\) and \\(S_{v0}\\), one finds \\[\\alpha\\simeq 2+S_{v0}<A>^{1/3}/S_{s0}\\simeq 6\\,, \\tag{17}\\] where \\(<A>^{1/3}\\) for the fitted nuclei is about 5. Thus, as the value of \\(S_{v}\\) is changed in the mass formula, the value of \\(S_{s}\\) must vary rapidly to compensate. An additional correlation between these parameters can be obtained from the fitting of isovector giant resonances, and this has the potential of breaking the degeneracy of \\(S_{v}\\) and \\(S_{s}\\), because it has a different slope (6). Lipparini & Stringari (7) used a hydrodynamical model of the nucleus to derive the isovector resonance energy: \\[E_{d} = \\sqrt{\\frac{24\\hbar^{2}}{m^{*}}\\frac{NZ}{A}\\Bigl{[}\\int\\frac{nr^{2}S_{v}}{S_{v}(n)}d^{3}r\\Bigr{]}^{-1}} \\tag{18}\\] \\[\\simeq 96.5\\sqrt{\\frac{m}{m^{*}}}\\frac{S_{v}}{30\\ {\\rm MeV}}\\Biggl{[}1+\\frac{5S_{s}}{3S_{v}A^{1/3}}\\Biggr{]}^{-1}A^{-1/3}\\ {\\rm MeV},\\] where \\(m^{*}\\) is an effective nucleon mass. This relation results in a slightly less-steep correlation between \\(S_{s}\\) and \\(S_{v}\\), \\[\\alpha=2/m^{*}+(3/5)S_{v0}<A>^{1/3}/S_{s0}\\simeq 4-5\\,. \\tag{19}\\] Unfortunately, the value of \\(m^{*}\\) is an undetermined parameter and this slope is not very different from that obtained from fitting masses. Therefore, uncertainties in the model make a large difference to the crossing point of these two correlations. A strong theoretical attack, perhaps using further RPA analysis, together with more experiments to supplement the relatively meager amount of existing data, would be very useful.
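The two integral values quoted for Eq. (15) are easy to verify numerically; the sketch below is a minimal check, not part of the original analysis. For \\(S_{v}(n)\\propto n\\) the integrand reduces to \\(x^{-1/2}\\), which integrates to exactly 2, while \\(S_{v}(n)\\propto n^{2/3}\\) gives 0.927.

```python
from scipy.integrate import quad

def integrand(x, p):
    # S_v / S_v(x n_s) = x**(-p) when S_v(n) is proportional to n**p
    return (x**0.5 / (1.0 - x)) * (x**(-p) - 1.0)

for p, label in ((1.0, "S_v(n) ~ n"), (2.0 / 3.0, "S_v(n) ~ n^(2/3)")):
    value, _ = quad(integrand, 0.0, 1.0, args=(p,))
    print(f"{label:16s}: integral = {value:.3f}")   # -> 2.000 and 0.927
```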
## 3 The Equation of State and the Collapse of Massive Stars

Massive stars at the end of their lives are believed to consist of a white dwarf-like iron core of 1.2-1.6 M\\({}_{\\odot}\\) having low entropy (\\(s\\leq 1\\)), surrounded by layers of less processed material from shell nuclear burning. The effective Chandrasekhar mass, the maximum mass the degenerate electron gas can support, is dictated by the entropy and the average lepton content, \\(Y_{L}\\), believed to be around 0.41-0.43. As mass is added to the core by shell Si-burning, the core eventually becomes unstable and collapses. During the collapse, the lepton content decreases due to net electron capture on nuclei and free protons. But when the core density approaches \\(10^{12}\\) g cm\\({}^{-3}\\), the neutrinos can no longer escape from the core on the dynamical collapse time (8). After neutrinos become trapped, \\(Y_{L}\\) is frozen at a value of about 0.38-0.40, and the entropy is also thereafter fixed. The core continues to collapse until the rapidly increasing pressure reverses the collapse at a bounce density of a few times nuclear density.

The immediate outcome of the shock generated by the bounce is also dependent upon \\(Y_{L}\\). First, the shock energy is determined by the net binding energy of the post-bounce core, and is proportional to \\(Y_{e}^{10/3}\\) (9). Second, the shock is largely dissipated by the energy required to dissociate massive nuclei in the still-infalling matter of the original iron core outside the post-bounce core. The larger the \\(Y_{L}\\) of the core, the larger its mass and the smaller this shell. Therefore, the progress of the shock is very sensitive to the value of \\(Y_{L}\\). The final value of \\(Y_{L}\\) is controlled by weak interaction rates, and is strongly dependent upon the fraction of free protons, \\(X_{p}\\), which is proportional to \\(\\exp(\\mu_{p}/T)\\), and the phase space available for proton capture on nuclei, which is proportional to \\(\\mu_{e}-\\hat{\\mu}\\), where \\(\\hat{\\mu}=\\mu_{n}-\\mu_{p}\\). Both are sensitive to the proton fraction in nuclei (\\(x_{i}\\)) and are largely controlled by \\(Y_{L}\\). In addition, the specific heat controls the temperature which has a direct influence upon the free proton abundance and the net electron capture rate. In spite of the intricate feedback, nuclear parameters relating chemical potentials to composition, especially \\(S_{v}\\) and \\(S_{s}\\), are obviously important.

As an example, consider \\(\\hat{\\mu}=\\mu_{n}-\\mu_{p}=-{n_{i}}^{-1}\\partial F_{H}/\\partial x_{i}\\). With the model of Eqs. (3)-(5), one has \\[\\hat{\\mu}=4S_{v}(1-2x_{i})-\\left(\\frac{72\\pi e^{2}D}{5x_{i}n_{i}}\\right)^{1/3}\\frac{\\sigma_{o}-\\sigma_{s}(1-2x_{i})(1-6x_{i})}{(\\sigma_{o}-\\sigma_{s}(1-2x_{i})^{2})^{1/3}}\\,. \\tag{20}\\] Recall that \\(\\sigma_{s}\\propto S_{s}\\). Although the bulk and Coulomb terms alone (Eq. 20 with \\(\\sigma_{s}=0\\)) imply that \\(\\hat{\\mu}\\) for a given \\(x_{i}\\) rises with increasing \\(S_{v}\\), the proper inclusion of the surface symmetry energy gives rise to the opposite behavior. This is illustrated in Fig. 1.

Uncertainties in nuclear parameters can thus be expected to have an influence upon the collapse of massive stars, for example, in the collapse rate, the final trapped lepton fraction, and the radius at which the bounce-generated shock initially stalls. Swesty, Lattimer & Myra (10) investigated the effects upon stellar collapse of altering parameters in a fashion constrained by nuclear systematics. They found that as long as the parameters permitted a neutron star maximum mass above the PSR1913+16 mass limit (1.44 M\\({}_{\\odot}\\)), the shock generated by core bounce consistently stalls near 100 km, independently of the assumed \\(K_{s}\\) in the range 180-375 MeV and \\(S_{v}\\) in the range 27-35 MeV. Ref. (10) also found that the final trapped lepton fraction is apparently independent of variations in both \\(K_{s}\\) and \\(S_{v}\\). These results are in contrast to earlier simulations which had used EOSs that could not support cold, catalyzed 1.4 M\\({}_{\\odot}\\) stars, or in which \\(S_{s}\\) was not varied consistently with \\(S_{v}\\). The strong feedback between the EOS, weak interactions, neutrino transport, and hydrodynamics is an example of _Mazurek's Law_.
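The opposing trends in Fig. 1 can be reproduced directly from Eq. (20). In the sketch below, the interior density, filling fraction, surface tension \\(\\sigma_{o}\\), and the steep \\(S_{s}\\)-\\(S_{v}\\) scaling (slope \\(\\alpha\\approx 6\\), motivated by Eq. (17)) are all assumed, illustrative numbers rather than fitted values.

```python
import math

E2 = 1.44          # e^2 in MeV fm
NI = 0.16          # interior nucleon density, fm^-3 (assumed)
U = 0.3            # volume filling fraction u (assumed)
D = 1.0 - 1.5 * U**(1.0/3.0) + 0.5 * U
SIGMA_O = 1.15     # surface tension of symmetric matter, MeV fm^-2 (assumed)

def mu_hat(xi, sv, ss):
    """Eq. (20); ss is the surface symmetry coefficient S_s."""
    sigma_s = (NI**2 / (36.0 * math.pi))**(1.0/3.0) * ss
    bulk = 4.0 * sv * (1.0 - 2.0 * xi)
    pref = (72.0 * math.pi * E2 * D / (5.0 * xi * NI))**(1.0/3.0)
    surf = ((SIGMA_O - sigma_s * (1.0 - 2.0 * xi) * (1.0 - 6.0 * xi))
            / (SIGMA_O - sigma_s * (1.0 - 2.0 * xi)**2)**(1.0/3.0))
    return bulk - pref * surf

xi = 0.3
for sv in (28.0, 34.0):
    ss = 45.0 * (sv / 31.0)**6   # steep S_s-S_v correlation, alpha ~ 6 (assumed)
    print(f"S_v = {sv:.0f} MeV:  mu_hat = {mu_hat(xi, sv, ss):5.1f} MeV (with surface), "
          f"{mu_hat(xi, sv, 0.0):5.1f} MeV (sigma_s = 0)")
```

With these choices, \\(\\hat{\\mu}\\) at fixed \\(x_{i}\\) falls slightly as \\(S_{v}\\) rises when the surface term is included, but rises strongly with \\(S_{v}\\) when \\(\\sigma_{s}=0\\), matching the qualitative behavior described above.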
In fact, the only significant consequence of varying \\(S_{v}\\) involved the pre-bounce neutrino luminosities. Increasing \\(S_{v}\\) increases the electron capture rate (proportional to \\(\\mu_{e}-\\hat{\\mu}\\)) and therefore increases the \\(\\nu_{e}\\) luminosity during collapse, as shown in Fig. 2. Nevertheless, the collapse rate also increases, so that neutrino trapping occurs sooner and the final trapped lepton fraction does not change. It is possible that large neutrino detectors such as Super-Kamiokande or SNO may be able to observe an enhanced early rise in neutrino luminosity from nearby galactic supernovae.

Figure 1: Comparison of \\(\\hat{\\mu}=\\mu_{n}-\\mu_{p}\\) as a function of \\(x_{i}\\) for various assumed values of \\(S_{v}\\), both including and excluding the effects of the surface symmetry energy.

Figure 2: The neutrino luminosities during infall as a function of the bulk symmetry energy parameter.

## 4 The Structure of Neutron Stars

The theoretical study of the structure of neutron stars is crucial if new observations of masses and radii are to lead to effective constraints on the EOS of dense matter. This study becomes ever more important as laboratory studies may be on the verge of yielding evidence about the composition and stiffness of matter beyond \\(n_{s}\\). To date, several accurate mass determinations of neutron stars are available, and they all lie in a narrow range (\\(1.25-1.44\\) M\\({}_{\\odot}\\)). There is some speculation that the absence of neutron stars with masses above 1.5 M\\({}_{\\odot}\\) implies that \\(M_{max}\\) for neutron stars has approximately this value. However, since fewer than 10 neutron stars have been weighed, and all these are in binaries, this conjecture is premature.

Theoretical studies of dense matter indicate that considerable uncertainties exist in the high-density behavior of the EOS largely because of the poorly constrained many-body interactions. These uncertainties are reflected in a significant uncertainty in the maximum mass of a beta-stable neutron star, which ranges from 1.5-2.5 M\\({}_{\\odot}\\). There is some theoretical support for a lower mass limit for neutron stars in the range \\(1.1-1.2\\) M\\({}_{\\odot}\\). This follows from the facts that the collapsing core of a massive star is always greater than 1 M\\({}_{\\odot}\\) and the minimum mass of a protoneutron star with a low-entropy inner core of \\(\\sim 0.6\\) M\\({}_{\\odot}\\) and a high-entropy envelope is at least 1.1 M\\({}_{\\odot}\\).

Observations from the Earth of thermal radiation from neutron star surfaces could yield values of the quantity \\(R_{\\infty}=R/\\sqrt{1-2GM/Rc^{2}}\\), which results from redshifting the star's luminosity and temperature. \\(M-R\\) trajectories for representative EOSs (discussed below) are shown in Figure 3. It appears difficult to simultaneously have \\(M>1\\)M\\({}_{\\odot}\\) and \\(R_{\\infty}<12\\) km. Those pulsars with at least some suspected thermal radiation generically yield effective values of \\(R_{\\infty}\\) so small that it is believed that the radiation originates from polar hot spots rather than from the surface as a whole. Other attempts to deduce a radius include analyses (14) of X-ray bursts from sources 4U 1705-44 and 4U 1820-30, which implied rather small values, \\(9.5<R_{\\infty}<14\\) km.
However, the modeling of the photospheric expansion and touchdown on the neutron star surface requires a model dependent relationship between the color and effective temperatures, rendering these estimates uncertain. Absorption lines in X-ray spectra have also been investigated with a view to deducing the neutron star radius. Candidates for the matter producing the absorption lines are either the accreted matter from the companion star or the products of nuclear burning in the bursts. In the former case, the most plausible element is thought to be Fe, in which case the relation \\(R\\approx 3.2GM/c^{2}\\), only slightly larger than the minimum possible value based upon causality, [(15; 16)] is inferred. In the latter case, plausible candidates are Ti and Cr, and larger values of the radius would be obtained. In both cases, serious difficulties remain in interpreting the large line widths, of order 100-500 eV, in the \\(4.1\\pm 0.1\\) keV line observed from many sources. A first attempt at using light curves and pulse fractions from pulsars to explore the \\(M-R\\) relation suggested relatively large radii, of order 15 km [(17)]. However, this method, which assumed dipolar magnetic fields, was unable to satisfactorily reconcile the calculated magnitudes of the pulse fractions and the shapes of the light curves with observations. Prospects for a radius determination have improved in recent years, however, with the detection of a nearby neutron star, RX J185635-3754, in X-rays and optical radiation [(18)]. The observed X-rays, from the ROSAT satellite, are consistent with blackbody emission with an effective temperature of about 57 eV and very little extinction. In addition, the fortuitous location of the star in the foreground of the R CrA molecular cloud limits the distance to \\(D<120\\) pc. The fact that the source is not observable in radio and its lack of variability in X-rays implies that it is not a pulsar unlike other identified radio-silent isolated neutron stars. This gives the hope that the observed radiation is not contaminated with non-thermal emission as is the case for pulsars. The X-ray observations of RXJ185635-3754 alone yield \\(R_{\\infty}\\approx 7.3(D/120\\mbox{ pc})\\) km for a best-fit blackbody. Such a value is too small to be consistent with any neutron star with more than 1 M\\({}_{\\odot}\\). But the optical flux is about a factor of 2.5 brighter than what is predicted for the X-ray blackbody, which is consistent with there being a heavy-element atmosphere [19]. With such an atmosphere, it is found [20] that the effective temperature is reduced to approximately 50 eV and \\(R_{\\infty}\\) is also increased, to a value of approximately 21.6(\\(D\\)/120 pc) km. Upcoming parallax measurements with the Hubble Space Telescope should permit a distance determination to about 10-15% accuracy. If X-ray spectral features are discovered with the planned Chandra and XMM space observatories, the composition of the neutron star atmosphere can be inferred, and the observed redshifts will yield independent mass and radius information. In this case, _both_ the mass and radius of this star will be found. Furthermore, a proper motion of 0.34 \\({}^{\\prime\\prime}\\) yr\\({}^{-1}\\) has been detected, in a direction that is carrying the star away from the Upper Scorpius (USco) association [20]. With an assumed distance of about 80 pc, the positions of RX J185635-3754 and this association overlap about 800,000 years ago. 
The runaway OB star \\(\\zeta\\) Oph is also moving away from USco, appearing to have been ejected on the order of a million years ago. The superposition of these three objects is interesting, and one can speculate that this is not coincidental. If upcoming parallax measurements are consistent with a distance to RX J185635-3754 of about 80 pc, the evidence for this scenario will be strong, and a good age estimate will result.

Figure 3: \\(M-R\\) curves for the EOSs listed in Table 1. The diagonal lines represent two theoretical estimates (LP=Ref. (11); RP=Ref. (12)) of the locus of points for \\(\\Delta I/I=1.4\\%\\) for extremal limits of \\(P_{t}\\), 0.25 and 0.65 MeV fm\\({}^{-3}\\). The large dots on the \\(M-R\\) curves are the exact results. The region to the left of the contours labeled 0.65 is not allowed if current glitch models are correct (13).

In this section, a striking empirical relationship is noted which connects the radii of neutron stars and the pressure of matter in the vicinity of \\(n_{s}\\). In addition, a number of analytic, exact, solutions to the general relativistic TOV equation of hydrostatic equilibrium are explored that lead to several useful approximations for neutron star structure which directly correlate observables such as masses, radii, binding energies, and moments of inertia. The binding energy, of which more than 99% is carried off in neutrinos, will be revealed from future neutrino observations of supernovae. Moments of inertia are connected with glitches observed in the spin down of pulsars, and their observations yield some interesting conclusions about the distribution of the moment of inertia within the rotating neutron star. From such comparisons, it may become easier to draw conclusions about the dense matter EOS when firm observations of neutron star radii or moments of inertia become available to accompany the several known accurate mass determinations.

### Neutron Star Radii

The composition of a neutron star chiefly depends on the nature of strong interactions, which are not well understood in dense matter. The several possible models investigated (15; 32) can be conveniently grouped into three broad categories: nonrelativistic potential models, field-theoretical models, and relativistic Dirac-Brueckner-Hartree-Fock models. In each of these approaches, the presence of additional softening components such as hyperons, Bose condensates or quark matter, can be incorporated. Figure 3 displays the mass-radius relation for several recent EOSs (the abbreviations are explained in Table 1).

\\begin{table} \\begin{tabular}{l|l|l|l} \\hline \\hline Symbol & Reference & Approach & Composition \\\\ \\hline FP & (21) & Variational & np \\\\ PS & (22) & Potential & n\\(\\pi^{0}\\) \\\\ WFF(1-3) & (23) & Variational & np \\\\ AP(1-4) & (24) & Variational & np \\\\ MS(1-3) & (25) & Field Theoretical & np \\\\ MPA(1-2) & (26) & Dirac-Brueckner HF & np \\\\ ENG & (27) & Dirac-Brueckner HF & np \\\\ PAL(1-6) & (28) & Schematic Potential & np \\\\ GM(1-3) & (29) & Field Theoretical & npH \\\\ GS(1-2) & (30) & Field Theoretical & npK \\\\ PCL(1-2) & (31) & Field Theoretical & npHQ \\\\ SQM(1-3) & (31) & Quark Matter & Q \\((u,d,s)\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Equations of state used in this work. Approach refers to the basic theoretical paradigm. Composition refers to strongly interacting components (n=neutron, p=proton, H=hyperon, K=kaon, Q=quark); all approaches include leptonic contributions.
Even a cursory glance indicates that in the mass range from \\(1-1.5\\) M\\({}_{\\odot}\\) it is usually the case that the radius has little dependence upon the mass. The lone exception is the model GS1, in which a kaon condensate, leading to considerable softening, appears. While it is generally assumed that a stiff EOS leads to both a large maximum mass and a large radius, many counter examples exist. For example, MS3 has a relatively small maximum mass but has large radii compared to most other EOSs with larger maximum masses. Also, not all EOSs with extreme softening have small radii (viz., GS2). Nonetheless, for stars with mass greater than 1 M\\({}_{\\odot}\\), only models with a large degree of softening can have \\(R_{\\infty}<12\\) km. Should the radius of a neutron star ever be accurately determined to satisfy \\(R_{\\infty}<12\\) km, a strong case can be made for the existence of extreme softening. Figure 4: Empirical relation between \\(P\\) and \\(R\\) for various EOSs (see Table 1 for details). The upper and lower panels show results for gravitational masses of 1 M\\({}_{\\odot}\\) and 1.4 M\\({}_{\\odot}\\), respectively. Symbols show \\(PR^{-1/4}\\) in units of MeV fm\\({}^{-3}\\) km\\({}^{-1/4}\\) at the three indicated fiducial densities. It is relevant that a Newtonian polytrope with \\(n=1\\) has the property that the stellar radius is independent of both the mass and central density. In fact, numerical relativists have often approximated equations of state with \\(n=1\\) polytropes. An \\(n=1\\) polytrope has the property that the radius is proportional to the square root of the constant \\(K\\) in the polytropic pressure law \\(P=K\\rho^{1+1/n}\\). This suggests that there might be a quantitative relation between the radius and the pressure that does not depend upon the equation of state at the highest densities, which determines the overall softness or stiffness (and hence, the maximum mass). To make the relation between matter properties and the nominal neutron star radius definite, Fig. 4 shows the remarkable empirical correlation which exists between the radii of 1 and 1.4 M\\({}_{\\odot}\\) stars and the matter's pressure evaluated at densities of 1, 1.5 and 2 \\(n_{s}\\). Table 1 explains the EOS symbols used in Fig. 4. Despite the relative insensitivity of radius to mass for a particular \"normal\" equation of state, the nominal radius \\(R_{M}\\), which is defined as the radius at a particular mass \\(M\\) in solar units, still varies widely with the EOS employed. Up to \\(\\sim 5\\) km differences are seen in \\(R_{1.4}\\), for example, in Fig. 4. This plot is restricted to EOSs which have maximum masses larger than about 1.55 M\\({}_{\\odot}\\) and to those which do not have strong phase transitions (such as those due to a Bose condensate or quark matter). Such EOSs violate these correlations, especially for the case of 1.4 M\\({}_{\\odot}\\) stars. We emphasize that this correlation is valid only for cold, catalyzed neutron stars, i.e., it will not be valid for protoneutron stars which have finite entropies and might contain trapped neutrinos. The correlation has the form \\[R\\simeq\\mbox{constant}\\;\\cdot\\left[P(n)\\right]^{0.23-0.26}, \\tag{21}\\] where \\(P\\) is the total pressure inclusive of leptonic contributions evaluated at the density \\(n\\). An exponent of 1/4 was chosen for display in Fig. 4, but the correlation holds for a small range of exponents about this value. 
The correlation is marginally tighter for the baryon density \\(n=1.5n_{s}\\) and \\(2n_{s}\\) cases. Thus, instead of the power 1/2 that the Newtonian polytrope relations would predict, a power of approximately 1/4 is suggested when the effects of relativity are included. The value of the constant in Eq. (21) depends upon the chosen density, and can be obtained from Fig. 4.

The exponent of 1/4 can be quantitatively understood by using a relativistic generalization of the \\(n=1\\) polytrope, due to Buchdahl (33). For the EOS \\[\\rho=12\\sqrt{p_{*}P}-5P\\,, \\tag{22}\\] where \\(p_{*}\\) is a constant, there is an analytic solution to Einstein's equations: \\[e^{\\nu} \\equiv g_{tt}=(1-2\\beta)(1-\\beta-u)(1-\\beta+u)^{-1}\\,;\\] \\[e^{\\lambda} \\equiv g_{rr}=(1-2\\beta)(1-\\beta+u)(1-\\beta-u)^{-1}(1-\\beta+\\beta\\cos Ar^{\\prime})^{-2}\\,;\\] \\[8\\pi PG/c^{4} = A^{2}u^{2}(1-2\\beta)(1-\\beta+u)^{-2}\\,;\\] \\[8\\pi\\rho G/c^{2} = 2A^{2}u(1-2\\beta)(1-\\beta-3u/2)(1-\\beta+u)^{-2}\\,;\\] \\[u = \\beta(Ar^{\\prime})^{-1}\\sin Ar^{\\prime}\\,;\\qquad r=r^{\\prime}(1-\\beta+u)(1-2\\beta)^{-1}\\,;\\] \\[A^{2} = 288\\pi p_{*}Gc^{-4}(1-2\\beta)^{-1};\\qquad R=\\pi(1-\\beta)(1-2\\beta)^{-1}A^{-1}. \\tag{23}\\] The free parameters of this solution are \\(\\beta\\equiv GM/Rc^{2}\\) and the scale \\(p_{*}\\). Note that \\(R\\propto p_{*}^{-1/2}(1+\\beta^{2}/2+\\ldots)\\), so for a given value of \\(p_{*}\\), the radius increases only very slowly with mass, exactly as expected from an \\(n=1\\) Newtonian polytrope.

It is instructive to analyze the response of \\(R\\) to a change of pressure at some fiducial density \\(\\rho\\), for a fixed mass \\(M\\). One finds \\[\\frac{d\\ln R}{d\\ln P}\\bigg{|}_{\\rho,M}=\\frac{\\frac{d\\ln R}{d\\ln p_{*}}\\big{|}_{\\beta}\\frac{d\\ln p_{*}}{d\\ln P}\\big{|}_{\\rho}}{1+\\frac{d\\ln R}{d\\ln\\beta}\\big{|}_{p_{*}}}=\\bigg{(}1-\\frac{5}{6}\\sqrt{\\frac{P}{p_{*}}}\\bigg{)}\\frac{(1-\\beta)(1-2\\beta)}{2(1-3\\beta+3\\beta^{2})}. \\tag{24}\\] In the limit \\(\\beta\\to 0,P\\to 0\\) and \\(d\\ln R/d\\ln P\\to 1/2\\), the value characteristic of an \\(n=1\\) Newtonian polytrope. Finite values of \\(\\beta\\) and \\(P\\) render the exponent smaller than \\(1/2\\). If the stellar radius is about 15 km, \\(p_{*}=\\pi/(288R^{2})\\approx 4.85\\cdot 10^{-5}\\) km\\({}^{-2}\\). If the fiducial density is \\(\\rho\\approx 1.5m_{b}n_{s}\\approx 2.02\\cdot 10^{-4}\\) km\\({}^{-2}\\) (with \\(m_{b}\\) the baryon mass), Eq. (22) implies that \\(P\\approx 8.5\\cdot 10^{-6}\\) km\\({}^{-2}\\). For \\(M=1.4\\) M\\({}_{\\odot}\\), the value of \\(\\beta\\) is 0.14, and \\(d\\ln R/d\\ln P\\simeq 0.31\\). This result is mildly sensitive to the choices for \\(\\rho\\) and \\(R\\), and the Buchdahl solution is not a perfect representation of realistic EOSs; nevertheless, it provides a reasonable explanation of the correlation in Eq. (21).

The existence of this correlation is significant because, in large part, the pressure of degenerate matter near the nuclear saturation density \\(n_{s}\\) is determined by the symmetry properties of the EOS. Thus, the measurement of a neutron star radius, if not so small as to indicate extreme softening, could provide an important clue to the symmetry properties of matter. In either case, valuable information is obtained. The specific energy of nuclear matter near the saturation density may be expressed as an expansion in the asymmetry \\((1-2x)\\), as displayed in Eq. (1), that can be terminated after the quadratic term (28).
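The numbers quoted above are easy to reproduce from Eq. (24); the sketch below is a minimal check using the same assumed fiducial radius and pressure.

```python
import math

def dlnR_dlnP(beta, P_over_pstar):
    """Eq. (24): logarithmic response of R to the pressure at fixed mass."""
    return ((1.0 - (5.0 / 6.0) * math.sqrt(P_over_pstar))
            * (1.0 - beta) * (1.0 - 2.0 * beta)
            / (2.0 * (1.0 - 3.0 * beta + 3.0 * beta**2)))

R = 15.0                            # assumed stellar radius, km
pstar = math.pi / (288.0 * R**2)    # ~4.85e-5 km^-2
P = 8.5e-6                          # km^-2, from Eq. (22) at rho = 1.5 m_b n_s
print(dlnR_dlnP(0.0, 0.0))          # -> 0.5, the Newtonian n = 1 polytrope limit
print(dlnR_dlnP(0.14, P / pstar))   # -> ~0.31 for a 1.4 Msun star
```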
Leptonic contributions must be added to Eq. (1) to obtain the total energy and pressure; the electron energy per baryon is \\(f_{e}=(3/4)\\hbar cx(3\\pi^{2}nx)^{1/3}\\). Matter in neutron stars is in beta equilibrium, i.e., \\(\\mu_{e}-\\mu_{n}+\\mu_{p}=\\partial(f_{bulk}+f_{e})/\\partial x=0\\), so the electronic contributions may be eliminated to recast the pressure as (34) \\[P = n^{2}\\Bigg{[}S^{\\prime}_{v}(n)(1-2x)^{2}+\\frac{xS_{v}(n)}{n}(1-2x)+\\frac{K_{s}}{9n_{s}}\\Big{(}\\frac{n}{n_{s}}-1\\Big{)}-\\frac{K^{\\prime}_{s}}{54n_{s}}\\Big{(}\\frac{n}{n_{s}}-1\\Big{)}^{2}\\Bigg{]}\\,, \\tag{25}\\] where \\(x\\) is now the beta equilibrium value. At the saturation density, \\[P_{s}=n_{s}(1-2x_{s})[n_{s}S^{\\prime}_{v}(n_{s})(1-2x_{s})+S_{v}x_{s}]\\,, \\tag{26}\\] where the equilibrium proton fraction at \\(n_{s}\\) is \\[x_{s}\\simeq(3\\pi^{2}n_{s})^{-1}(4S_{v}/\\hbar c)^{3}\\simeq 0.04 \\tag{27}\\] for \\(S_{v}=30\\) MeV. Due to the small value of \\(x_{s}\\), one finds that \\(P_{s}\\simeq n_{s}^{2}S^{\\prime}_{v}(n_{s})\\). If the pressure is evaluated at a larger density, other nuclear parameters besides \\(S_{v}\\) and \\(S^{\\prime}_{v}(n_{s})\\) become significant. For \\(n=2n_{s}\\), one thus has \\[P(2n_{s})\\simeq 4n_{s}[n_{s}S^{\\prime}_{v}(2n_{s})+(K_{s}-K^{\\prime}_{s}/6)/9]\\,. \\tag{28}\\] If it is assumed that \\(S_{v}(n)\\) is linear in density, \\(K_{s}\\sim 220\\) MeV and \\(K_{s}^{\\prime}\\sim 2000\\) MeV (as indicated in Eq. 12), the symmetry contribution is still about 70% of the total.

Figure 5: \\(M-R\\) curves for selected PAL parametrizations (28) showing the sensitivity to symmetry energy. The left panel shows variations arising from different choices of the symmetry energy at the nuclear saturation density \\(S_{v}=S_{v}(n_{s})\\); the right panel shows variations arising from different choices of the density dependence of the potential part of the symmetry energy \\(F(u)=S_{v}(n)/S_{v}(n_{s})\\) where \\(u=n/n_{s}\\).

The sensitivity of the radius to the symmetry energy is graphically shown by the parametrized EOS of PAL (28) in Fig. 5. The symmetry energy function \\(S_{v}(n)\\) is a direct input in this parametrization. The figure shows the dependence of mass-radius trajectories as the quantities \\(S_{v}\\) and \\(S_{v}(n)\\) are alternately varied. Clearly, the density dependence of \\(S_{v}(n)\\) is more important in determining the neutron star radius. Note also the weak sensitivity of the maximum neutron star mass to \\(S_{v}\\).

At present, experimental guidance concerning the density dependence of the symmetry energy is limited and mostly based upon the division of the nuclear symmetry energy between volume and surface contributions, as discussed in the previous section. Upcoming experiments involving heavy-ion collisions (at GSI, Darmstadt), which might sample densities up to \\(\\sim(3-4)n_{s}\\), will be limited to analyzing properties of the symmetric nuclear matter EOS through a study of matter, momentum, and energy flow of nucleons. Thus, studies of heavy nuclei far off the neutron drip lines will be necessary in order to pin down the properties of the neutron-rich regimes encountered in neutron stars.
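Eqs. (26)-(27) give a quick numerical feel for the saturation-density pressure. The sketch below is a minimal evaluation assuming \\(S_{v}=30\\) MeV and, for the derivative, a linear \\(S_{v}(n)\\) (so that \\(n_{s}S^{\\prime}_{v}(n_{s})=S_{v}\\)); both choices are illustrative.

```python
import math

HBARC = 197.327   # MeV fm
NS = 0.16         # fm^-3

def x_s(sv):
    """Eq. (27): beta-equilibrium proton fraction at n_s."""
    return (4.0 * sv / HBARC)**3 / (3.0 * math.pi**2 * NS)

def P_s(sv, ns_svp):
    """Eq. (26): total pressure at n_s; ns_svp = n_s S_v'(n_s) in MeV."""
    xs = x_s(sv)
    return NS * (1.0 - 2.0 * xs) * (ns_svp * (1.0 - 2.0 * xs) + sv * xs)

sv = 30.0
print(f"x_s = {x_s(sv):.3f}")                  # -> ~0.04, as quoted
print(f"P_s = {P_s(sv, sv):.1f} MeV fm^-3")    # linear S_v(n) assumed
```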
### Neutron Star Moments of Inertia and Binding Energies

Besides the stellar radius, other global attributes of neutron stars are potentially observable, including the moment of inertia and the binding energy. These quantities depend primarily upon the ratio \\(M/R\\) as opposed to details of the EOS, as can be readily seen by evaluating them using analytic solutions to Einstein's equations. Although over 100 analytic solutions to Einstein's equations are known (35), nearly all of them are physically unrealistic. However, three analytic solutions are of particular interest in neutron star structure.

The first is the well-known Schwarzschild interior solution for an incompressible fluid, \\(\\rho=\\rho_{c}\\), where \\(\\rho\\) is the mass-energy density. This is mostly of interest because it determines the maximum compression \\(\\beta=GM/Rc^{2}\\) for a neutron star, namely 4/9, based upon the pressure being finite. Two aspects of the incompressible fluid that are physically unrealistic, however, include the fact that the sound speed is everywhere infinite, and that the density does not vanish on the star's surface. The second analytic solution, B1, due to Buchdahl (33), is described in Eq. (23). The third analytic solution (TolVII) was discovered by Tolman (36) in 1939, and is the case when the mass-energy density \\(\\rho\\) varies quadratically, that is, \\[\\rho=\\rho_{c}[1-(r/R)^{2}]. \\tag{29}\\] In fact, this is an adequate representation, as displayed in Fig. 6 for neutron stars more massive than 1.2 M\\({}_{\\odot}\\). The equations of state used are listed in Table 1. The largest deviations from this general relation exist for models with extreme softening (GS1, GS2, PCL2) and which have relatively low maximum masses (see Fig. 3). It is significant that all models must, of course, approach this behavior at both extremes \\(r\\to 0\\) and \\(r\\to R\\).

Figure 6: Each panel shows mass-energy density profiles in the interiors of selected stars (masses indicated) ranging from about 1.2 M\\({}_{\\odot}\\) to the maximum mass (solid line) for the given equation of state (see Table 1). The thick black lines show the simple quadratic approximation \\(1-(r/R)^{2}\\).

Because the Tolman solution is often overlooked in the literature (for exceptions, see, for example, Refs. (35; 37)) it is summarized here. It is useful in establishing interesting and simple relations that are insensitive to the equation of state. In terms of the variable \\(x=r^{2}/R^{2}\\) and the parameter \\(\\beta\\), the assumption \\(\\rho=\\rho_{c}(1-x)\\) results in \\(\\rho_{c}=15\\beta c^{2}/(8\\pi GR^{2})\\). The solution of Einstein's equations for this density distribution is: \\[e^{-\\lambda} = 1-\\beta x(5-3x)\\,,\\qquad e^{\\nu}=(1-5\\beta/3)\\cos^{2}\\phi\\,,\\] \\[P = \\frac{c^{4}}{4\\pi R^{2}G}[\\sqrt{3\\beta e^{-\\lambda}}\\tan\\phi-\\frac{\\beta}{2}(5-3x)]\\,,\\qquad n=\\frac{\\rho c^{2}+P}{m_{b}c^{2}}\\frac{\\cos\\phi}{\\cos\\zeta}\\,,\\] \\[\\phi = (w_{1}-w)/2+\\zeta\\,,\\quad\\phi_{c}=\\phi(x=0)\\,,\\quad\\zeta=\\tan^{-1}\\sqrt{\\beta/[3(1-2\\beta)]}\\,,\\] \\[w = \\log[x-5/6+\\sqrt{e^{-\\lambda}/(3\\beta)}]\\,,\\qquad w_{1}=w(x=1)\\,. \\tag{30}\\] The central values of \\(P/\\rho c^{2}\\) and \\(c_{s}^{2}\\) are \\[\\frac{P}{\\rho c^{2}}\\bigg{|}_{c}=\\frac{2}{15}\\sqrt{\\frac{3}{\\beta}}\\Big{(}\\frac{c_{sc}}{c}\\Big{)}^{2}\\,,\\quad\\Big{(}\\frac{c_{sc}}{c}\\Big{)}^{2}=\\tan\\phi_{c}\\Big{(}\\tan\\phi_{c}+\\sqrt{\\frac{\\beta}{3}}\\Big{)}\\,. \\tag{31}\\] This solution, like Buchdahl's, is scale-free, with the parameters \\(\\beta\\) and \\(\\rho_{c}\\) (or \\(M\\) or \\(R\\)).
Here, \\(n\\) is the baryon density, \\(m_{b}\\) is the nucleon mass, and \\(c_{sc}\\) is the sound speed at the star's center. When \\(\\phi_{c}=\\pi/2\\), or \\(\\beta\\approx 0.3862\\), \\(P_{c}\\) becomes infinite, and when \\(\\beta\\approx 0.2698\\), \\(c_{sc}\\) becomes causal (i.e., \\(c\\)). Recall that for an incompressible fluid, \\(P_{c}\\) becomes infinite when \\(\\beta=4/9\\). For the Buchdahl solution, \\(P_{c}\\) becomes infinite when \\(\\beta=2/5\\) and the causal limit is reached when \\(\\beta=1/6\\). For comparison, if causality is enforced at high densities, it has been empirically determined that \\(\\beta<0.34\\) (15; 16).

The general applicability of these exact solutions can be gauged by analyzing the moment of inertia, which, for a star uniformly rotating with angular velocity \\(\\Omega\\), is \\[I=(8\\pi/3)\\int_{0}^{R}r^{4}(\\rho+P/c^{2})e^{(\\lambda-\\nu)/2}(\\omega/\\Omega)dr\\,. \\tag{32}\\] The metric function \\(\\omega\\) is a solution of the equation \\[d[r^{4}e^{-(\\lambda+\\nu)/2}\\omega^{\\prime}]/dr+4r^{3}\\omega de^{-(\\lambda+\\nu)/2}/dr=0 \\tag{33}\\] with the surface boundary condition \\[\\omega_{R}=\\Omega-\\frac{R}{3}\\omega_{R}^{\\prime}=\\Omega\\left[1-\\frac{2GI}{R^{3}c^{2}}\\right]. \\tag{34}\\] The second equality in the above follows from the definition of \\(I\\) and the TOV equation. Writing \\(j=\\exp[-(\\nu+\\lambda)/2]\\), the TOV equation becomes \\[j^{\\prime}=-4\\pi Gr(P/c^{2}+\\rho)je^{\\lambda}/c^{2}\\,. \\tag{35}\\] Then, one has \\[I=-\\frac{2c^{2}}{3G}\\int\\frac{\\omega}{\\Omega}r^{3}dj=\\frac{c^{2}R^{4}\\omega_{R}^{\\prime}}{6G\\Omega}\\,. \\tag{36}\\]

Unfortunately, an analytic representation of \\(\\omega\\) or the moment of inertia for any of the three exact solutions is not available. However, approximations which are valid to within 0.5% are \\[I_{Inc}/MR^{2} \\simeq 2(1-0.87\\beta-0.3\\beta^{2})^{-1}/5\\,, \\tag{37}\\] \\[I_{B1}/MR^{2} \\simeq (2/3-4/\\pi^{2})(1-1.81\\beta+0.47\\beta^{2})^{-1}\\,, \\tag{38}\\] \\[I_{TVII}/MR^{2} \\simeq 2(1-1.1\\beta-0.6\\beta^{2})^{-1}/7\\,. \\tag{39}\\] In each case, the small \\(\\beta\\) limit reduces to the corresponding Newtonian results. Fig. 7 indicates that the Tolman approximation is rather good. Ravenhall & Pethick (12) suggested that the expression \\[I_{RP}/MR^{2}\\simeq 0.21/(1-2u) \\tag{40}\\] was a good approximation for the moment of inertia; however, we find that this expression is not a good overall fit, as shown in Fig. 7. For low-mass stars (\\(\\beta<0.12\\)), none of these approximations is suitable, but it is unlikely that any neutron stars are this rarefied. It should be noted that the Tolman approximation does not adequately approximate some EOSs, especially ones that are relatively soft, such as GM3, GS1, GS2, PAL6 and PCL2.

Figure 7: The moment of inertia \\(I\\) in units of \\(MR^{2}\\) for the equations of state listed in Table 1. \\(I_{Inc},I_{B1},I_{TVII}\\) and \\(I_{RP}\\) are approximations described in the text.
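For a concrete comparison, the sketch below evaluates the four approximations, Eqs. (37)-(40), at a few representative compactnesses, interpreting the \\(u\\) in Eq. (40) as \\(\\beta\\); this is a minimal check, not a structural calculation.

```python
import math

def i_inc(b):   # Eq. (37), incompressible fluid
    return 0.4 / (1.0 - 0.87 * b - 0.3 * b * b)

def i_b1(b):    # Eq. (38), Buchdahl
    return (2.0 / 3.0 - 4.0 / math.pi**2) / (1.0 - 1.81 * b + 0.47 * b * b)

def i_tvii(b):  # Eq. (39), Tolman VII
    return (2.0 / 7.0) / (1.0 - 1.1 * b - 0.6 * b * b)

def i_rp(b):    # Eq. (40), Ravenhall & Pethick
    return 0.21 / (1.0 - 2.0 * b)

for beta in (0.15, 0.20, 0.25):   # GM/Rc^2 for typical neutron stars
    print(f"beta = {beta:.2f}:  Inc {i_inc(beta):.3f}  B1 {i_b1(beta):.3f}  "
          f"TVII {i_tvii(beta):.3f}  RP {i_rp(beta):.3f}")
```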
The binding energy formally represents the energy gained by assembling \\(N\\) baryons. If the baryon mass is \\(m_{b}\\), the binding energy is simply \\(BE=Nm_{b}-M\\) in mass units. However, the quantity \\(m_{b}\\) has various interpretations in the literature. Some authors assume it is about 940 MeV/\\(c^{2}\\), the same as the neutron or proton mass. Others assume it is about 930 MeV/\\(c^{2}\\), corresponding to the mass of C\\({}^{12}\\)/12 or Fe\\({}^{56}\\)/56. The latter would yield the energy released in a supernova explosion, which consists of the energy released by the collapse of a white-dwarf-like iron core, which itself is considerably bound. The difference, 10 MeV per baryon, corresponds to a shift of \\(10/940\\simeq 0.01\\) in the value of \\(BE/M\\). In any case, the binding energy is directly observable from the detection of neutrinos from a supernova event; indeed, it would be the most precisely determined aspect.

Lattimer & Yahil (38) suggested that the binding energy could be approximated as \\[BE\\approx 1.5\\cdot 10^{51}(M/{\\rm M_{\\odot}})^{2}\\ {\\rm ergs}=0.084(M/{\\rm M_{\\odot}})^{2}\\ {\\rm M_{\\odot}}\\,. \\tag{41}\\] This formula, in general, is accurate to about \\(\\pm 20\\%\\). The largest deviations are for extremely soft EOSs, as shown in Fig. 8. However, a more accurate representation of the binding energy is given by \\[BE/M\\simeq 0.6\\beta/(1-0.5\\beta)\\,, \\tag{42}\\] which incorporates some radius dependence. Thus, the observation of supernova neutrinos, and the estimate of the total radiated neutrino energy, will yield more accurate information about \\(M/R\\) than about \\(M\\) alone.

Figure 8: The binding energy of neutron stars as a function of stellar mass for the equations of state listed in Table 1. The predictions of Eq. (41) are shown by the shaded region.

In the cases of the incompressible fluid and the Buchdahl solution, analytic results for the binding energy can be found: \\[BE_{Inc}/M = \\frac{3}{4\\beta}\\Big{(}\\frac{\\sin^{-1}\\sqrt{2\\beta}}{\\sqrt{2\\beta}}-\\sqrt{1-2\\beta}\\Big{)}-1\\,, \\tag{43}\\] \\[BE_{B1}/M = (1-1.5\\beta)(1-2\\beta)^{-1/2}(1-\\beta)^{-1}-1\\,. \\tag{44}\\] The analytic results, the Tolman VII solution, and the fit of Eq. (42) are compared to some recent equations of state in Fig. 9. It can be seen that, except for very soft cases like PS, PCL2, PAL6, GS1 and GS2, both the Tolman VII and Buchdahl solutions are rather realistic.

Figure 9: The binding energy per unit gravitational mass as a function of compactness for the equations of state listed in Table 1. The shaded region shows the prediction of Eq. (42) with \\(\\pm 5\\%\\) errors.
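A quick numerical comparison of Eqs. (41)-(44) is sketched below, assuming a 1.4 M\\({}_{\\odot}\\), 12 km star. Note that Eq. (44) is written above with the exponent \\(-1/2\\) on \\((1-2\\beta)\\), which is what is required for the small-\\(\\beta\\) limit \\(BE/M\\to 0.5\\beta\\) of an \\(n=1\\) Newtonian polytrope.

```python
import math

GMSUN_KM = 1.4766                 # G Msun / c^2 in km

def be_fit(beta):                 # Eq. (42)
    return 0.6 * beta / (1.0 - 0.5 * beta)

def be_inc(beta):                 # Eq. (43), incompressible fluid
    s = math.sqrt(2.0 * beta)
    return 0.75 / beta * (math.asin(s) / s - math.sqrt(1.0 - 2.0 * beta)) - 1.0

def be_b1(beta):                  # Eq. (44), Buchdahl
    return (1.0 - 1.5 * beta) / math.sqrt(1.0 - 2.0 * beta) / (1.0 - beta) - 1.0

M, R = 1.4, 12.0                  # assumed mass (Msun) and radius (km)
beta = GMSUN_KM * M / R
print(f"beta = {beta:.3f}")
print(f"BE/M: fit {be_fit(beta):.3f}, Inc {be_inc(beta):.3f}, B1 {be_b1(beta):.3f}")
print(f"Eq. (41): {0.084 * M**2:.3f} Msun vs fit {be_fit(beta) * M:.3f} Msun")
```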
### Crustal Fraction of the Moment of Inertia

In the investigation of pulsar glitches, many models associate the glitch size with the fraction of the moment of inertia which resides in the star's crust, usually defined to be the region in which dripped neutrons coexist with nuclei. The high-density crust boundary is set by the phase boundary between nuclei and uniform matter, where the pressure is \\(P_{t}\\) and the density \\(n_{t}\\). The low-density boundary is the neutron drip density, or for all practical purposes, simply the star's surface, since the amount of mass between the neutron drip point and the surface is negligible. We define \\(\\Delta R\\) to be the distance between the points where the density is \\(n_{t}\\) and zero. One can apply Eq. (32) to determine the moment of inertia of the crust alone with the assumptions that \\(P/c^{2}\\ll\\rho\\), \\(m(r)\\simeq M\\), and \\(\\omega j\\simeq\\omega_{R}\\) in the crust. One finds \\[\\Delta I\\simeq\\frac{8\\pi}{3}\\frac{\\omega_{R}}{\\Omega}\\int_{R-\\Delta R}^{R}\\rho r^{4}e^{\\lambda}dr\\simeq\\frac{8\\pi}{3GM}\\frac{\\omega_{R}}{\\Omega}\\int_{0}^{P_{t}}r^{6}dP\\,, \\tag{45}\\] where \\(M\\) is the star's total mass and the TOV equation was used in the last step. In the crust, the fact that the EOS is of the approximate polytropic form \\(P\\simeq K\\rho^{4/3}\\) can be used to find an approximation for the integral \\(\\int r^{6}dP\\), _viz._ \\[\\int_{0}^{P_{t}}r^{6}dP\\simeq P_{t}R^{6}\\left[1+\\frac{2P_{t}}{n_{t}m_{n}c^{2}}\\frac{(1+7\\beta)(1-2\\beta)}{\\beta^{2}}\\right]^{-1}\\,. \\tag{46}\\] Since the approximation Eq. (39) gives \\(I\\) in terms of \\(M\\) and \\(R\\), and \\(\\omega_{R}/\\Omega\\) is given in terms of \\(I\\) and \\(R\\) in Eq. (34), the quantity \\(\\Delta I/I\\) can thus be expressed as a function of \\(M\\) and \\(R\\) with the only dependence upon the equation of state (EOS) arising from the values of \\(P_{t}\\) and \\(n_{t}\\); there is no explicit dependence upon the higher-density EOS. However, the major dependence is upon the value of \\(P_{t}\\), since \\(n_{t}\\) enters only as a correction. We then find \\[\\frac{\\Delta I}{I}\\simeq\\frac{28\\pi P_{t}R^{3}}{3Mc^{2}}\\frac{(1-1.67\\beta-0.6\\beta^{2})}{\\beta}\\left[1+\\frac{2P_{t}}{n_{t}m_{b}c^{2}}\\frac{(1+7\\beta)(1-2\\beta)}{\\beta^{2}}\\right]^{-1}. \\tag{47}\\]

In general, the EOS parameter \\(P_{t}\\), in units of MeV fm\\({}^{-3}\\), varies over the range \\(0.25<P_{t}<0.65\\) for realistic EOSs. The determination of this parameter requires a calculation of the structure of matter containing nuclei just below nuclear matter density that is consistent with the assumed nuclear matter EOS. Unfortunately, few such calculations have been performed. Like the fiducial pressure at and above nuclear density which appears in the relation Eq. (21), \\(P_{t}\\) should depend sensitively upon the behavior of the symmetry energy near nuclear density. Choosing \\(n_{t}=0.07\\) fm\\({}^{-3}\\), we compare Eq. (47) in Fig. 3 with full structural calculations. The agreement is good. We also note that Ravenhall & Pethick (12) developed a different, but nearly equivalent, formula for the quantity \\(\\Delta I/I\\) as a function of \\(M,R,P_{t}\\) and \\(\\mu_{t}\\), where \\(\\mu_{t}\\) is the neutron chemical potential at the core-crust phase boundary. This prediction is also displayed in Fig. 3.

Link, Epstein & Lattimer (13) established a lower limit to the radii of neutron stars by using a constraint derived from pulsar glitches. They showed that glitches represent a self-regulating instability for which the star prepares over a waiting time. The angular momentum requirements of glitches in the Vela pulsar indicate that more than 0.014 of the star's moment of inertia drives these events. If glitches originate in the liquid of the inner crust, this means that \\(\\Delta I/I>0.014\\). A minimum radius can be found by combining this constraint with the largest realistic value of \\(P_{t}\\) from any equation of state. Stellar models that are compatible with this constraint must fall to the right of the \\(P_{t}=0.65\\) MeV fm\\({}^{-3}\\) contour in Fig. 3. This imposes a constraint upon the radius, namely that \\(R>3.6+3.9M/{\\rm M}_{\\odot}\\) km.
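As a rough check of the glitch constraint, the sketch below evaluates Eq. (47) for an assumed 1.4 M\\({}_{\\odot}\\), 12 km star with \\(P_{t}=0.65\\) MeV fm\\({}^{-3}\\) and \\(n_{t}=0.07\\) fm\\({}^{-3}\\); the unit conversions, not the physics, are the only extra ingredients.

```python
import math

MSUN_MEV = 1.1155e60   # Msun c^2 in MeV
GMSUN_KM = 1.4766      # G Msun / c^2 in km
MB = 939.6             # baryon mass m_b c^2, MeV

def crust_fraction(M, R, Pt=0.65, nt=0.07):
    """Eq. (47): Delta I / I. M in Msun, R in km, Pt in MeV/fm^3, nt in fm^-3."""
    beta = GMSUN_KM * M / R
    R_fm = R * 1.0e18                            # km -> fm
    lead = 28.0 * math.pi * Pt * R_fm**3 / (3.0 * M * MSUN_MEV)
    shape = (1.0 - 1.67 * beta - 0.6 * beta**2) / beta
    corr = 1.0 + (2.0 * Pt / (nt * MB)) * (1.0 + 7.0 * beta) * (1.0 - 2.0 * beta) / beta**2
    return lead * shape / corr

M, R = 1.4, 12.0
print(f"Delta I / I = {crust_fraction(M, R):.3f}")   # ~0.04, above the Vela 0.014
print(f"glitch radius bound: R > {3.6 + 3.9 * M:.1f} km")
```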
## 5 The Merger of a Neutron Star with a Low-Mass Black Hole

The general problem of the origin and evolution of systems containing a neutron star and a black hole was first detailed by Lattimer & Schramm (39), although the original motivation was due to Schramm. Although speculative at the time, Schramm insisted that this would prove to be an interesting topic from the points of view of nucleosynthesis and gamma-ray emission. The contemporaneous discovery (40) of the first-known binary system containing twin compact objects, PSR 1913+16, which was also found to have an orbit which would decay because of gravitational radiation within \\(10^{10}\\) yr, bolstered his argument. Eventually, this topic formed the core of Lattimer's thesis (41), and the recent spate of activity, a quarter century later, in the investigation of the evolution and mergers of such compact systems has wonderfully demonstrated Schramm's prescience.

Compact binaries form naturally as the result of evolution of massive stellar binaries. The estimated lower mass limit for supernovae (and neutron star or black hole production) is approximately 8 M\\({}_{\\odot}\\). Observationally, the number of binaries formed within a given logarithmic separation is approximately constant, but the relative mass distributions are uncertain. There is some indication that the distribution in binary mass ratios might be flat. The number of possible progenitor systems can then be estimated. Most progenitor systems do not survive the more massive star becoming a supernova. In the absence of a kick velocity, it is easily found that the loss of more than half of the mass from the system will unbind it. However, the fact that pulsars are observed to have mean velocities in excess of a few hundred km/s implies that neutron stars are usually produced with large \"kick\" velocities originating in the supernova explosion. In the case that the kick velocity, which is thought to be randomly directed, opposes the star's orbital velocity, the chances of the post-supernova binary remaining intact increase. In addition, the separation in a surviving binary will be reduced significantly. Subsequent evolution then progresses to the supernova explosion of the companion. More of these systems survive because in many cases the more massive component explodes. But the surviving systems should both have greatly reduced separations and orbits with high eccentricity.

Gravitational radiation then causes the binary's orbit to decay, such that circular orbits of two masses \\(M_{1}\\) and \\(M_{2}\\) with initial semimajor axes \\(a\\) satisfying \\[a<2.8[M_{1}M_{2}(M_{1}+M_{2})/{\\rm M}_{\\odot}^{3}]^{1/4}{\\rm R}_{\\odot}\\,, \\tag{48}\\] will fully decay within the age of the Universe (\\(\\sim 10^{10}\\) yr). Highly eccentric orbits will decay much faster, as shown in Fig. 10. The dashed curve shows the inverse of the factor (42) by which the gravitational wave luminosity of an eccentric system exceeds that of a circular system: \\[f=(1+73e^{2}/24+37e^{4}/96)(1-e^{2})^{-7/2}. \\tag{49}\\] Because the eccentricity also decays, the exact reduction factor is not as strong as \\(1/f\\). A reasonable approximation to the exact result is \\(f^{-3/4}\\), shown by the dotted line in Fig. 10. The coefficient 2.8 in Eq. (48) is increased by a factor of \\(f^{-3/16}\\) or about 2 for moderate eccentricities.

Figure 10: The reduction of the gravitational radiation orbital decay time as a function of initial orbital eccentricity. The dashed line is the inverse of the Peters (42) \\(f\\) function; the dotted line shows \\(f^{-3/4}\\), which reasonably reproduces the exact result.

Ref. (39) argued that mergers of neutron stars and black holes, and the subsequent ejection of a few percent of the neutron star's mass, could easily account for all the _r_-process nuclei in the cosmos.
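The decay criterion and the eccentricity factor are simple to evaluate; the sketch below reproduces Eq. (48) for an assumed double neutron star binary and shows how strongly Eq. (49) shortens the decay time, using the \\(f^{-3/16}\\) widening of the criterion quoted above.

```python
def peters_f(e):
    """Eq. (49): enhancement of gravitational-wave luminosity at eccentricity e."""
    return (1.0 + (73.0 / 24.0) * e**2 + (37.0 / 96.0) * e**4) / (1.0 - e**2)**3.5

def a_max_circ(m1, m2):
    """Eq. (48): widest circular orbit (Rsun) decaying within ~1e10 yr; masses in Msun."""
    return 2.8 * (m1 * m2 * (m1 + m2))**0.25

m1 = m2 = 1.4   # Msun (assumed)
a0 = a_max_circ(m1, m2)
for e in (0.0, 0.5, 0.9):
    print(f"e = {e:.1f}: f = {peters_f(e):8.2f}, "
          f"a_max ~ {a0 * peters_f(e)**(3.0/16.0):5.1f} Rsun")
```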
Ref. (39) argued that mergers of neutron stars and black holes, and the subsequent ejection of a few percent of the neutron star's mass, could easily account for all the _r_-process nuclei in the cosmos. Ref. (39) is also the earliest reference to the idea that compact object binary mergers are associated with gamma-ray bursts. A later seminal contribution by Eichler, Livio, Piran & Schramm (43) argued that mergers of neutron stars occur frequently enough to explain the origin of gamma-ray bursters. Since the timescale of gamma-ray bursts, being of order seconds to several minutes, is much longer than the coalescence timescale of a binary merger (which is of order the orbital period at the last stable orbit, a few milliseconds), it is believed that a coalescence involves the formation of an accretion disc. Although neutrino emission from accreting material, resulting in neutrino-antineutrino annihilation along the rotational axis, has been proposed as a source of gamma rays, it seems more likely that amplification of magnetic fields within the disc might trigger the observed bursts. In either case, the lifetime of the accretion disc is still problematic if it is formed by the breakup of the neutron star near the Roche limit: its lifetime would probably be only about a hundred times greater than the orbital period, or less than a second. However, this timescale would be considerably enhanced if the accretion disc could be formed at larger radii than the Roche limit. A possible mechanism is stable mass transfer from the neutron star to the black hole that would cause the neutron star to spiral away as it loses mass (44; 45). The classical Roche limit is based upon an incompressible fluid of density \\(\\rho\\) and mass \\(M_{2}\\) in orbit about a mass \\(M_{1}\\). In Newtonian gravity, this limit is \\[R_{Roche,Newt}=(M_{1}/0.0901\\pi\\rho)^{1/3}=19.2(M_{1}/{\\rm M_{\\odot}}\\rho_{15} )^{1/3}\\ {\\rm km}\\,, \\tag{50}\\] where \\(\\rho_{15}=\\rho/10^{15}\\) g cm\\({}^{-3}\\). Using general relativity, Fishbone (46) found that at the last stable circular orbit (including the case when the black hole is rotating) the number 0.0901 in Eq. (50) becomes 0.0664. In geometrized units, \\(R_{Roche}/M_{1}=13(14.4)(M_{1}^{2}\\rho_{15}/\\rm M_{\\odot}^{2})^{-1/3}\\), where the numerical coefficient refers to the Newtonian (last stable orbit in GR) case. In other words, if the neutron star's mean density is \\(\\rho_{15}=1\\), the Roche limit is encountered beyond the last stable orbit if the black hole mass is less than about 5.9 \\(\\rm M_{\\odot}\\). Thus, for small enough black holes, mass overflow and transfer from the neutron star to the black hole could begin outside the last stable circular orbit. And, as now discussed, the mass transfer may proceed stably for some considerable time. In fact, the neutron star might move out to 2-3 times the orbital radius where mass transfer began. This would provide a natural way to lengthen the lifetime of an accretion disc, by simply increasing its size. The final evolution of a compact binary is now discussed. Define \\(q=m_{ns}/M_{BH}\\), \\(\\mu=m_{ns}M_{BH}/M\\), and \\(M=M_{BH}+m_{ns}\\), where \\(m_{ns}\\) and \\(M_{BH}\\) are the neutron star and black hole masses, respectively. The orbital angular momentum is \\[J^{2}=G\\mu^{2}Ma=GM^{3}aq^{2}/(1+q)^{4}\\,. \\tag{51}\\] We can employ Paczynski's (47) formula for the Roche radius of the secondary: \\[R_{\\ell}/a=0.46[q/(1+q)]^{1/3}\\,, \\tag{52}\\] or a better fit by Eggleton (48): \\[R_{\\ell}/a=0.49[0.6+q^{-2/3}\\ln(1+q^{1/3})]^{-1}\\,. \\tag{53}\\] The orbital separation \\(a\\) at the moment of mass transfer is obtained by setting \\(R_{\\ell}=R\\), the neutron star radius.
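Since both Roche-lobe fits are used repeatedly below, it is worth comparing them directly. The following sketch tabulates Eqs. (52) and (53) over a range of \\(q\\) and also reproduces the numerical coefficients of Eq. (50); everything is plain arithmetic with no input beyond the constants quoted above.

```python
# Paczynski (52) vs. Eggleton (53) Roche-lobe fits, and the classical
# Roche-limit coefficients of Eq. (50).
import math

def roche_paczynski(q):
    return 0.46 * (q / (1 + q))**(1/3)

def roche_eggleton(q):
    return 0.49 / (0.6 + q**(-2/3) * math.log(1 + q**(1/3)))

for q in (0.1, 0.3, 0.5, 0.8, 1.0):
    rp, re = roche_paczynski(q), roche_eggleton(q)
    print(f"q={q}: R_l/a Paczynski={rp:.4f}  Eggleton={re:.4f}"
          f"  ratio={rp/re:.3f}")

# Eq. (50): R_Roche = (M1 / (k*pi*rho))^(1/3)
Msun_g, rho = 1.989e33, 1e15                      # cgs; rho15 = 1
for k, label in ((0.0901, "Newtonian"), (0.0664, "GR, last stable orbit")):
    R_km = (Msun_g / (k * math.pi * rho))**(1/3) / 1e5
    print(f"{label}: R_Roche = {R_km:.1f} km for M1 = Msun")
```

The two fits agree to a few percent over the relevant range of \\(q\\), and the Newtonian coefficient indeed comes out at 19.2 km for \\(M_{1}={\\rm M}_{\\odot}\\), \\(\\rho_{15}=1\\).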
For stable mass transfer, the star's radius has to increase more quickly than the Roche radius as mass is transferred. Thus, using Paczynski's formula, we must have \\[\\frac{d\\ln R}{d\\ln m_{ns}}\\equiv\\alpha\\geq\\frac{d\\ln R_{\\ell}}{d\\ln m_{ns}}= \\frac{d\\ln a}{d\\ln m_{ns}}+\\frac{1}{3}\\,. \\tag{54}\\] The quantity \\(\\alpha\\) is defined by this expression and is shown in Fig. 11 for a typical EOS. If the mass transfer is conservative, then \\(\\dot{J}=\\dot{J}_{GW}\\), where \\[\\dot{J}_{GW}=-\\frac{32}{5}\\frac{G^{7/2}}{c^{5}}\\frac{\\mu^{2}M^{5/2}}{a^{7/2}}= -\\frac{32}{5}\\frac{G^{7/2}}{c^{5}}\\frac{q^{2}M^{9/2}}{(1+q)^{4}a^{7/2}} \\tag{55}\\] and \\[\\frac{\\dot{J}}{J}=\\frac{\\dot{a}}{2a}+\\frac{\\dot{q}(1-q)}{q(1+q)}\\,. \\tag{56}\\] This leads to \\[\\dot{q}\\left(\\frac{\\alpha}{2}+\\frac{5}{6}-q\\right)\\geq-\\frac{32}{5}\\frac{G^{3}}{c ^{5}}\\frac{q^{2}M^{3}}{(1+q)a^{4}}\\,. \\tag{57}\\] Since \\(m_{ns}<M_{BH}\\), \\(\\dot{q}\\leq 0\\), and the condition for stable mass transfer is simply \\(q\\leq 5/6+\\alpha/2\\). For moderate mass neutron stars, \\(\\alpha\\approx 0\\), so in this case the condition is simply \\(q\\leq 5/6\\), which might even be achievable in a binary neutron star system. Had we used the more exact formula of Eggleton, Eq. (53), we would have found \\(q\\leq 0.78\\). Note that it has often been assumed that \\(R\\propto m_{ns}^{-1/3}\\) in such discussions (45), which is equivalent to \\(\\alpha=-1/3\\). This is unjustified, and results in the upper limit \\(q=2/3\\), which might inappropriately rule out stable mass transfer in the case of two neutron stars. A number of other conditions must hold for stable mass transfer to occur. First, the orbital separation \\(a\\) at the onset must exceed the last stable orbit around the black hole, so that \\(a>6GM_{BH}/c^{2}\\), or \\[q\\geq 6\\frac{R_{\\ell}}{a}\\frac{GM_{BH}}{Rc^{2}}\\,. \\tag{58}\\] Second, the tidal bulge raised on the neutron star must stay outside of the black hole's Schwarzschild radius. Kochanek (44) gives an estimate of the height of the tidal bulge needed to achieve the required mass loss rate: \\[\\frac{\\Delta r}{R}=\\left[\\frac{-\\dot{q}}{\\beta_{t}(1+q)\\Omega}\\right]^{1/3}\\,, \\tag{59}\\] where \\(\\beta_{t}\\) is a dimensionless parameter of order 1 and \\(\\Omega=G^{1/2}M^{1/2}/a^{3/2}\\) is the orbital frequency; for \\(\\dot{q}\\) we use the equality in Eq. (57). The requirement that the bulge remain outside the Schwarzschild radius is then \\[R_{sh}=2GM_{BH}/c^{2}\\leq a-R-\\Delta r\\,. \\tag{60}\\] Finally, so that the assumption of a Roche geometry is valid, it should be possible for tidal synchronization of the neutron star to be maintained. Bildsten & Cutler (49) considered this, and derived an upper limit for the separation \\(a_{syn}\\) at which tidal synchronization could occur by integrating the maximum torque on the neutron star as it spirals in from infinity and finding where the neutron star spin frequency could first equal the orbital frequency. They find \\[a_{syn}\\leq\\frac{M_{BH}^{2}m_{ns}^{2}}{400M^{3}}\\Big{(}\\frac{R}{m_{ns}}\\Big{)} ^{6}\\,, \\tag{61}\\] which translates to \\[400\\Big{(}\\frac{GM_{BH}}{Rc^{2}}\\Big{)}^{5}\\frac{a}{R_{\\ell}}\\frac{(1+q)^{3}}{ q}\\leq 1\\,. \\tag{62}\\]
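The threshold \\(q\\leq 5/6+\\alpha/2\\) and its Eggleton counterpart can be verified numerically. The sketch below differentiates \\(\\ln R_{\\ell}\\) along the conservative-transfer track implied by Eq. (51) (with the gravitational-radiation term dropped at the margin, as above) and locates the stability boundary by bisection; for \\(\\alpha=0\\) it recovers \\(q_{\\max}=5/6\\) for Eq. (52) and a value within about a percent of the quoted 0.78 for Eq. (53).

```python
# Stability threshold: solve d ln R_l / d ln m_ns = alpha for q.
import math

def ln_rl_over_a(q, fit):
    if fit == "paczynski":
        return math.log(0.46) + (math.log(q) - math.log(1 + q)) / 3
    return math.log(0.49) - math.log(0.6 + q**(-2/3)*math.log(1 + q**(1/3)))

def dlnRl_dlnm(q, fit, h=1e-6):
    # From Eq. (51) at fixed J, M: d ln a/d ln q = 4q/(1+q) - 2;
    # also d ln m_ns/d ln q = 1/(1+q).
    g = (ln_rl_over_a(q*(1+h), fit) - ln_rl_over_a(q*(1-h), fit)) / (2*h)
    return (1 + q) * (4*q/(1 + q) - 2 + g)

def q_max(alpha, fit):
    lo, hi = 0.05, 2.0
    for _ in range(60):                 # bisection on the threshold
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if dlnRl_dlnm(mid, fit) < alpha else (lo, mid)
    return lo

for fit in ("paczynski", "eggleton"):
    print(fit, "alpha=0 -> q_max =", round(q_max(0.0, fit), 3))
```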
Next we consider the effect of putting some of the angular momentum into an accretion disc. Following the discussion of Ref. (49), we assume an accretion disc contains an amount of angular momentum that grows at the rate \\[\\dot{J}_{d}=-(1-f)M^{3/2}a^{1/2}(1+q)^{-4}\\dot{q}\\,, \\tag{63}\\] where \\(f\\) is a parameter, taken to be a fit to the numerical results of Hut & Paczynski (50): \\[f=5q^{1/3}/3-3q^{2/3}/2\\,. \\tag{64}\\] We then find the new condition for angular momentum conservation to be \\[\\dot{J}+\\dot{J}_{d}=\\dot{J}_{GW}\\,, \\tag{65}\\] which yields \\[\\dot{q}\\bigg{[}\\frac{\\alpha}{2}-\\frac{1}{6}+\\frac{f-q^{2}}{1+q}\\bigg{]}\\geq- \\frac{32}{5}\\frac{G^{3}}{c^{5}}\\frac{q^{2}M^{3}}{(1+q)a^{4}}\\,. \\tag{66}\\] Therefore, the new condition for stable mass transfer is \\[(q^{2}-f)/(1+q)\\leq\\alpha/2-1/6\\,. \\tag{67}\\] The case \\(f=1\\) corresponds to neglecting the existence of an accretion disc. It remains to determine when an accretion disc is likely to form. Initially, matter flowing from the neutron star to the black hole through the inner Lagrangian point passes close to the black hole and falls in. However, as the neutron star spirals away, the accretion stream trajectory moves outside the Schwarzschild radius. When the trajectory does not even penetrate the marginally stable orbit, an accretion disc will begin to form. Particle trajectory computations of the Roche geometry by Shore, Livio & van den Heuvel (51) suggest that its closest approach to the black hole is \\[R_{c}=a(1+q)(0.5-0.227\\ln q)^{4}\\,. \\tag{68}\\] Equating \\(R_{c}\\) to \\(6GM_{BH}/c^{2}\\) yields \\[(0.5-0.227\\ln q)^{4}(1+q)\\geq 6\\frac{GM_{BH}}{Rc^{2}}\\frac{R_{\\ell}}{a}\\,. \\tag{69}\\] These constraints and allowed regions for stable mass transfer are shown in Fig. 12. Apparently, stable mass transfer ceases when \\(m_{ns}\\approx 0.14\\) M\\({}_{\\odot}\\) if the formation of an accretion disc is ignored. If the effects of disc formation are included, stable mass transfer ceases when \\(m_{ns}\\approx 0.22\\) M\\({}_{\\odot}\\). In both cases, the neutron star mass remains above its minimum mass (about 0.09 M\\({}_{\\odot}\\) for the equation of state used here). Thus, the neutron star does not "explode" by reaching its minimum mass. Fig. 13 shows the time development of the orbital separation \\(a\\) and the neutron star's mass and radius during the inspiral and stable mass transfer phases. Solid lines are calculated assuming there is no accretion disc formed, while dashed lines show the effects of accretion disc formation. The time evolutions during stable mass transfer are obtained from Eq. (57) and Eq. (66), using \\(\\dot{m}_{ns}=\\dot{q}M/(1+q)^{2}\\). With disc formation, the mass transfer is accelerated and the duration of the stable mass transfer phase is shortened considerably. Also, the neutron star spirals out to a smaller radius, and does not lose as much mass, as in the case when the accretion disc is ignored. Therefore, if stable mass transfer can take place, the timescale over which mass transfer occurs will be much longer than an orbital period, lasting perhaps a few tenths of a second. This is not long enough to explain gamma-ray bursts. However, we have also seen that the likelihood that an accretion disc forms is quite large. Furthermore, the accretion disc extends to about 100 km. Even though this is considerably smaller than the estimate of Ref. (45), the lifetime of such an extended disc is substantial.
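Equation (68) makes the onset of disc formation easy to explore numerically. In the sketch below, the 3 M\\({}_{\\odot}\\) black hole and the fixed 12 km neutron star radius are illustrative assumptions (in reality \\(R\\) grows as \\(m_{ns}\\) drops, cf. Fig. 13), so the numbers only indicate the trend: the stream initially plunges, and a disc of order 100 km forms as the neutron star spirals out.

```python
# Closest approach of the accretion stream, Eq. (68), versus the
# marginally stable orbit at 6 GM_BH/c^2.  Illustrative inputs only.
import math

Rs_km_per_Msun = 2.953  # Schwarzschild radius of 1 Msun in km

def closest_approach_km(m_ns, M_bh, R_ns_km):
    q = m_ns / M_bh
    Rl_over_a = 0.46 * (q / (1 + q))**(1/3)   # Paczynski, Eq. (52)
    a = R_ns_km / Rl_over_a                    # separation at overflow
    return a * (1 + q) * (0.5 - 0.227*math.log(q))**4, a

for m_ns in (1.4, 0.8, 0.4, 0.22):
    Rc, a = closest_approach_km(m_ns, 3.0, 12.0)
    r_ms = 3 * Rs_km_per_Msun * 3.0            # 6 GM_BH/c^2 for 3 Msun
    tag = "disc forms" if Rc >= r_ms else "stream plunges"
    print(f"m_ns={m_ns:4.2f} Msun: a={a:6.1f} km, R_c={Rc:6.1f} km,"
          f" 6GM_BH/c^2={r_ms:.1f} km -> {tag}")
```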
To order of magnitude, this lifetime is given by the viscous dissipation time, \\[\\tau_{visc}\\sim\\frac{D^{2}}{\\alpha c_{s}H}\\,. \\tag{70}\\] Here \\(D\\) is the radial size of the disc, \\(\\alpha\\) is the disc's viscosity parameter, \\(c_{s}\\) is the sound speed and \\(H\\) is the disc's thickness. Note that \\(c_{s}\\approx\\Omega H\\) where \\(\\Omega=2\\pi/P=\\sqrt{GM_{BH}/D^{3}}\\) is the Kepler frequency. Thus, \\[\\tau_{visc}\\sim\\frac{P}{2\\pi\\alpha}\\biggl{(}\\frac{D}{H}\\biggr{)}^{2}\\,. \\tag{71}\\] Since the magnitude of \\(\\alpha\\) is still undetermined, but is usually quoted (52) to be about 0.01, and \\(H\\) is likely to be of order \\(R\\), we find \\(\\tau_{visc}\\sim 230\\) s for our case. This alleviates the timescale problem for these models. Numerical simulations of such events are in progress, and it remains to be seen if a viable gamma-ray burst model from neutron star-black hole coalescence is possible. If it is, a great deal of the credit should rest with Dave. We thank Ralph Wijers for discussions concerning accretion disks.

Figure 12: The dark and light shaded regions show the binary masses for which mass transfer in a black hole-neutron star binary will be stable in the absence and in the presence of an accretion disc. The constraints Eq. (58) (\\(a>6M_{BH}\\)), Eq. (60) (tidal bulge), Eq. (62) (tidal synchronization), and Eq. (69) (accretion disc forms) are shown by the appropriately labelled curves. The parallel, diagonal, dashed lines show evolutionary tracks for the labelled total BH+NS masses, beginning in each case with \\(m_{ns}=1.5\\) M\\({}_{\\odot}\\).

Figure 13: The separation of a 1.5 M\\({}_{\\odot}\\) neutron star with a 3 M\\({}_{\\odot}\\) black hole during a merger is indicated by the dot-dashed line during inspiral and by a solid line in the outspiral during stable mass transfer. Other curves show the neutron star mass and radius during the stable mass transfer (outspiral) phase. Solid (dashed) lines are computed by ignoring (including) the effects of an accretion disc.

## References

* [1] G. Baym, H.A. Bethe, and C.J. Pethick, _Nucl. Phys._ **A175** (1971) 225.
* [2] J.M. Lattimer, C.J. Pethick, D.G. Ravenhall, and D.Q. Lamb, _Nucl. Phys._ **A432** (1985) 646.
* [3] J.M. Lattimer and F.D. Swesty, _Nucl. Phys._ **A535** (1991) 331.
* [4] J.P. Blaizot, J.F. Berger, J. Decharge, and M. Girod, _Nucl. Phys._ **A591** (1995) 431; D.H. Youngblood, H.L. Clark, and Y.-W. Lui, _Phys. Rev. Lett._ **82** (1999) 691.
* [5] J.M. Pearson, _Phys. Lett._ **B271** (1991) 12.
* [6] J.M. Lattimer, in _Nuclear Equation of State_, A. Ansari and L. Satpathy, eds., World Scientific, Singapore, 1996, p. 83.
* [7] E. Lipparini and S. Stringari, _Phys. Lett._ **B112** (1982) 421.
* [8] K. Sato, _Prog. Theor. Phys._ **53** (1975) 595; **54** (1975) 1325.
* [9] J.M. Lattimer, A. Burrows, and A. Yahil, _Astrophys. J._ **288** (1985) 644.
* [10] F.D. Swesty, J.M. Lattimer, and E. Myra, _Astrophys. J._ **425** (1994) 195.
* [11] J.M. Lattimer and M. Prakash, in preparation (2000).
* [12] D.G. Ravenhall and C.J. Pethick, _Astrophys. J._ **424** (1994) 846.
* [13] B. Link, R.I. Epstein, and J.M. Lattimer, _Phys. Rev. Lett._ **83** (1999) 3362.
* [14] L. Titarchuk, _Astrophys. J._ **429** (1994) 340; F. Haberl and L. Titarchuk, _Astron. Astrophys._ **299** (1995) 414.
* [15] J.M. Lattimer, M. Prakash, D. Masak, and A. Yahil, _Astrophys. J._ **355** (1990) 241.
* [16] N.K. Glendenning, _Phys. Rev. D_ **46** (1992) 4161.
* [17] D. Page, _Astrophys. J._ **442** (1995) 273.
* [18] F.M. Walter, S.J. Wolk and R. Neuhauser, _Nature_ **379** (1996) 233; F.M. Walter _et al._, _Nature_ **389** (1997) 358.
* [19] R.W. Romani, _Astrophys. J._ **313** (1987) 718.
* [20] P. An, J.M. Lattimer, M. Prakash and F.M. Walter, in preparation (2000).
* [21] B. Friedman and V.R. Pandharipande, _Nucl. Phys._ **A361** (1981) 502.
* [22] V.R. Pandharipande and R.A. Smith, _Nucl. Phys._ **A237** (1975) 507.
* [23] R.B. Wiringa, V. Fiks, and A. Fabrocini, _Phys. Rev._ **C38** (1988) 1010.
* [24] A. Akmal and V.R. Pandharipande, _Phys. Rev._ **C56** (1997) 2261.
* [25] H. Muller and B.D. Serot, _Nucl. Phys._ **A606** (1996) 508.
* [26] H. Muther, M. Prakash, and T.L. Ainsworth, _Phys. Lett._ **B199** (1987) 469.
* [27] L. Engvik, M. Hjorth-Jensen, E. Osnes, G. Bao, and E. Ostgaard, _Phys. Rev. Lett._ **73** (1994) 2650.
* [28] M. Prakash, T.L. Ainsworth, and J.M. Lattimer, _Phys. Rev. Lett._ **61** (1988) 2518.
* [29] N.K. Glendenning and S.A. Moszkowski, _Phys. Rev. Lett._ **67** (1991) 2414.
* [30] N.K. Glendenning and Juergen Schaffner-Bielich, _Phys. Rev._ **C60** (1999) 025803.
* [31] M. Prakash, J.R. Cooke and J.M. Lattimer, _Phys. Rev._ **D52** (1995) 661.
* [32] M. Prakash, I. Bombaci, M. Prakash, J.M. Lattimer, P. Ellis, and R. Knorren, _Phys. Rep._ **280** (1997) 1.
* [33] H.A. Buchdahl, _Astrophys. J._ **147** (1967) 310.
* [34] M. Prakash, in _Nuclear Equation of State_, A. Ansari and L. Satpathy, eds., World Scientific, Singapore, 1996, p. 229.
* [35] M.S.R. Delgaty and K. Lake, _Computer Physics Communications_ **115** (1998) 395.
* [36] R.C. Tolman, _Phys. Rev._ **55** (1939) 364.
* [37] M.C. Durgapal and A.K. Pande, _J. Pure & Applied Phys._ **18** (1980) 171.
* [38] J.M. Lattimer and A. Yahil, _Astrophys. J._ **340** (1989) 426.
* [39] J.M. Lattimer and D.N. Schramm, _Astrophys. J. (Letters)_ **192** (1974) L145; _Astrophys. J._ **210** (1976) 549.
* [40] R.A. Hulse and J.H. Taylor, _Astrophys. J. (Letters)_ **195** (1975) L51.
* [41] J.M. Lattimer, Ph.D. thesis, University of Texas at Austin, unpublished (1976).
* [42] P.C. Peters, _Phys. Rev._ **136** (1964) 1224.
* [43] D. Eichler, M. Livio, T. Piran, and D.N. Schramm, _Nature_ **340** (1989) 126.
* [44] C.S. Kochanek, _Astrophys. J._ **398** (1992) 234.
* [45] S.F. Portegies Zwart, _Astrophys. J. (Letters)_ **503** (1998) L53.
* [46] L. Fishbone, _Astrophys. J. (Letters)_ **175** (1972) L155.
* [47] B. Paczynski, _Ann. Rev. Astron. Astrophys._ **9** (1971) 183.
* [48] P.P. Eggleton, _Astrophys. J._ **268** (1983) 368.
* [49] L. Bildsten and C. Cutler, _Astrophys. J._ **400** (1992) 175.
* [50] P. Hut and B. Paczynski, _Astrophys. J._ **284** (1984) 675.
* [51] S. Shore, M. Livio, and E.P.J. van den Heuvel, in _Interacting Binaries_, Saas-Fee Advanced Course 22 for Astronomy and Astrophysics, 1992, p. 145.
* [52] A. Brandenburg, A. Nordlund, R.F. Stein, and U. Torkelsson, _Astrophys. J. (Letters)_ **458** (1996) L45.
# Effective action for the order parameter of the deconfinement transition of Yang-Mills theories

Holger Gies

Institut für theoretische Physik, Universität Tübingen, 72076 Tübingen, Germany

## 1 Introduction

As a prelude to a truly nonperturbative evaluation of the effective action of Yang-Mills theory, the one-loop effective action with all-order couplings to a specific background may provide a first glance at the up to now unknown ground state of the theory. Since the problem of confinement is supposed to be intimately related to the quest for the ground state, it is elucidating to investigate the response of several "confining vacuum candidates" to quantum fluctuations, even in a perturbative approximation. In this spirit, e.g., the famous Savvidy model [1], which favors a covariant constant magnetic field as ground state, has given rise to much speculation on the nature of the vacuum. Since a useful description of confinement and the ground state should also exhibit the limits of their formation, it is natural to perform a study at finite temperature, where a transition to a deconfined phase is expected (as is observed on the lattice). An order parameter for the deconfinement phase transition in pure gauge theory is given by the Polyakov loop [2, 3], i.e., a Wilson line closing around the compactified Euclidean time direction: \\[L(x)=\\frac{1}{N_{\\rm c}}{\\rm tr\\,T}\\,\\exp\\left({\\rm i}g\\int\\limits_{0}^{\\beta}dx_ {0}\\,{\\sf A}_{0}(x_{0},x)\\right), \\tag{1}\\] where the period \\(\\beta=1/T\\) is identified with the inverse temperature of the ensemble in which the expectation value of \\(L\\) is evaluated. T denotes time ordering, \\(N_{\\rm c}\\) the number of colors, and \\({\\sf A}_{0}\\) is the time component of the gauge field. The negative logarithm of the Polyakov loop expectation value can be interpreted as the free energy of a single static color source living in the fundamental representation of the gauge group [4]. In this sense, an infinite free energy associated with confinement is indicated as \\(\\langle L\\rangle\\to 0\\), whereas \\(\\langle L\\rangle\\neq 0\\) signals deconfinement. Moreover, \\(\\langle L\\rangle\\) measures whether center symmetry, a discrete symmetry of Yang-Mills theory, is realized by the ensemble under consideration [4]. Gauge transformations which differ at \\(x_{0}=0\\) and \\(x_{0}=\\beta\\) by a center element of the gauge group change \\(L\\) by a phase \\({\\rm e}^{2\\pi{\\rm i}n/N_{\\rm c}}\\), \\(n\\) integer (but leave the action invariant); this implies that a center-symmetric ground state automatically ensures \\(\\langle L\\rangle=0\\), whereas deconfinement \\(\\langle L\\rangle\\neq 0\\) is related to the breaking of this symmetry. Therefore, the effective action governing the behavior of \\(L\\) is of utmost importance, because it determines the state of the theory at a given set of parameters, such as temperature, fields, etc. While this scenario has successfully been established in lattice formulations [5], several perturbative continuum investigations have led to various results. In the continuum, it is convenient to work with the "Polyakov gauge", which rotates the zeroth component \\({\\sf A}_{0}\\) of the gauge field into the Cartan subalgebra of SU(\\(N_{\\rm c}\\)), \\({\\sf A}_{0}\\to A_{0}\\) (cf. Eq. (2) below); furthermore, the condition \\(\\partial_{0}A_{0}=0\\) is imposed. Then, if the \\(A_{0}\\) ground state of the system is known, \\(L\\) can immediately be read off from Eq. (1), which suggests calculating the effective action for a time-independent \\(A_{0}\\) background field.
Several one-loop calculations exist in the literature: considering a pure constant \\(A_{0}\\) background, Weiss [6] obtained an effective potential for the Polyakov loop preferring only the center-asymmetric ground states, i.e., the deconfined phase (see Eq. (40) below). Combining the Savvidy model of a covariant constant magnetic background field with the Polyakov loop background \\(A_{0}\\), Starinets, Vshivtsev and Zhukovskii [7] as well as Meisinger and Ogilvie [8] were able to demonstrate the existence of a confining, center-symmetric minimum for \\(\\langle L\\rangle\\) at low temperature, with a transition to the broken, i.e., deconfined, phase for increasing temperature. However, this model still suffers from the instabilities caused by the gluon spin coupling to the magnetic field [8], a problem also plaguing the Savvidy model [9], although an additional \\(A_{0}\\) background can in principle remove the problematic tachyonic mode in the gluon propagator for certain values of \\(gA_{0}\\) and \\(T\\). Perhaps the most promising approach was explored by Engelhardt and Reinhardt [10], who considered a spatially varying \\(A_{0}\\) field and evaluated the effective action for \\(A_{0}\\) in a gradient expansion to second order in the derivatives. The resulting action exhibits both phases, confinement and deconfinement, depending on the value of the temperature; in particular, at low temperature, the spatial fluctuations of the Polyakov loop lower the action when fluctuating around the center-symmetric (confining) phase. The main drawback of this model is its nonrenormalizability: an explicit cutoff dependence remains, and gauge and Lorentz invariance have been broken explicitly during the calculation. Nevertheless, the main lesson to be learned is that spatial variations of the Polyakov loop have to be taken into account while searching for an effective potential of the order parameter for the deconfinement phase transition. The present work is devoted to an investigation of the Polyakov loop potential partly in the spirit of [10]; however, the treatment of the quantum fluctuations, the calculational techniques, and finally the results are quite different. In particular, we employ the background field method to keep track of the symmetries of the functional integral [11]. Unfortunately, the results are not as promising as those found in [10], since the simple picture for the deconfinement phase transition is not visible in the most stringent version of the model. The paper is organized as follows: in Sec. 2, we define the model, clarify our notations, and perform a first analysis of possible scenarios. Section 3 outlines the calculation of the effective action to one loop using the proper-time method, particularly emphasizing the subtleties of the present problem; we work in \\(d\\geq 4\\) dimensions with gauge group \\(\\mathrm{SU}(N_{\\mathrm{c}})\\). The implications of our results are discussed in Sec. 4 for \\(\\mathrm{SU}(2)\\); therein it is pointed out that the main features of the model depend strongly on the treatment of the infrared sector. Section 5 briefly demonstrates the latter point by introducing an additional infrared scale "by hand" (a gluon mass), which changes the properties of the model drastically, now exhibiting a confining phase. We finally comment on our findings in Sec. 6.
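For orientation, the Polyakov loop (1) can be evaluated explicitly for a constant abelian SU(2) background, \\(A_{0}=a_{0}\\,\\sigma^{3}/2\\), where it reduces to \\(L=\\cos(\\pi c)\\) with \\(c=ga_{0}/(2\\pi T)\\) (the variable introduced in Eq. (5) below). The following short numerical check is a sketch for illustration, not part of the calculation proper:

```python
# Polyakov loop of Eq. (1) for constant A_0 = a_0*sigma_3/2 in SU(2):
# L = cos(pi*c), with c = g*a_0/(2*pi*T).  Direct matrix-exponential check.
import numpy as np
from scipy.linalg import expm

sigma3 = np.diag([1.0, -1.0])

def polyakov_loop(c):
    # g*beta*a_0 = 2*pi*c; fundamental representation of SU(2)
    return 0.5 * np.trace(expm(1j * 2*np.pi*c * sigma3 / 2)).real

for c in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"c={c}: L={polyakov_loop(c):+.4f}, cos(pi c)={np.cos(np.pi*c):+.4f}")
# the center transformation c -> 1 - c flips the sign of L;
# L vanishes at the center-symmetric point c = 1/2
```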
One last word of caution: it is obvious from the very beginning that the one-loop approximation performed here is hardly appropriate for dealing with the strongly coupled gauge systems under consideration. In fact, the results presented below mostly represent an extreme extrapolation of perturbation theory to extraordinarily large values of the coupling constant \\(g\\) without any reasonable justification. Nevertheless, besides being interesting in its own right, the model can serve as a starting point for more involved investigations. E.g., the renormalization group flow of the true effective action will coincide with the perturbative action at large momentum scales; hence, a detailed knowledge of the perturbative regime will be of use for checking nonperturbative solutions. Moreover, some of the technical results of the present calculation, such as the form of the gluon propagator in a fluctuating \\(A_{0}\\) background, will be expedient for other problems as well.

## 2 The Model

The essence of the model under consideration is determined by the choice of the background field, which is treated as a classical field subject to thermal and quantum fluctuations. At the very beginning, we confine ourselves to quasi-abelian background fields, pointing into a fixed direction \\(n^{a}\\) in color space: \\[\\mathsf{A}_{\\mu}:=A_{\\mu}^{a}T^{a}=:A_{\\mu}\\,n^{a}T^{a},\\qquad n^{2}=1, \\tag{2}\\]where \\((T^{a})^{bc}\\equiv-{\\rm i}f^{abc}\\) denote the hermitean generators of the gauge group SU(\\(N_{\\rm c}\\)) in the adjoint representation. Now we are aiming at a derivative expansion in the time-like component1 of \\(A_{\\mu}\\); such an expansion is usually justified by demanding that the derivatives be smaller than the characteristic mass scale of the theory. However, in the present case, there is no initial mass scale, since the fluctuating particles, gluons and ghosts, are massless. In fact, it turns out to be impossible to establish a unique derivative expansion for the (inverse) gluon propagator by a simple counting of derivatives; this is because a typical expansion generates terms \\(\\sim\\frac{1}{[\\partial A_{0}]}\\), acting like a mass scale for the higher derivative terms. Therefore, we propose a different expansion scheme that is guided by the residual quasi-abelian gauge symmetry, which still holds for the background field. Footnote 1: The spatially varying \\(A_{0}\\) field giving rise to an electric field appears to conflict with the assumption of thermal equilibrium, which is inherent in the Matsubara formalism used below; this is because electric fields tend to separate (fundamental) color charges, moving the system away from equilibrium. However, we adopt the viewpoint that the here-considered vacuum model characterizes only a few features of the true vacuum; the latter actually includes quark and magnetic gluon condensates (and higher cumulants) which altogether are in equilibrium. Beyond this, we expect the present approximation to hold for sufficiently weak electric fields, keeping the system close to equilibrium. Therefore, the expansion in the electric field performed below is consistent with (almost) thermal equilibrium. The model is further specified by assuming that there are no magnetic field components in the rest frame of the heat bath; the latter is characterized by its 4-velocity vector \\(u^{\\mu}\\).
Therefore, there are only two independent (quasi-abelian) gauge invariants: \\[{\\bf E}^{2} = \\frac{1}{2}F_{\\mu\\nu}F_{\\mu\\nu}\\equiv F_{\\mu\\alpha}u_{\\alpha}\\, F_{\\mu\\beta}u_{\\beta}\\] \\[\\bar{A}_{u} := \\frac{1}{\\beta}\\int\\limits_{0}^{\\beta}d\\tau\\,A_{u}(x^{\\mu}+\\tau u ^{\\mu}),\\qquad A_{u}:=A_{\\mu}u_{\\mu}. \\tag{3}\\] Here we work in Euclidean finite-temperature space \\(R^{d-1}\\times S^{1}\\), and \\(F_{\\mu\\nu}\\) denotes the quasi-abelian field strength of the background field. In the heat-bath rest frame, we simply have \\(u^{\\mu}=(1,{\\bf 0})\\), so that \\(A_{u}\\equiv A_{0}\\). The quantity \\(\\bar{A}_{u}\\) is invariant under quasi-abelian gauge transformations [12], since these transformations are restricted to be periodic in the compactified time direction. (For the complete gauge group, \\(\\bar{A}_{u}\\) can be modified by a gauge transformation that differs at \\(x_{0}=0\\) and \\(x_{0}=\\beta\\) by a center element, e.g., \\(\\bar{A}_{u}\\to\\frac{2\\pi T}{g}-\\bar{A}_{u}\\) for SU(2) modulo Weyl transformations.) If we now perform a derivative expansion in the electric field \\({\\bf E}\\), we will obtain an effective action of the form \\(\\Gamma=f(\\bar{A}_{u})+g(\\bar{A}_{u})\\,E^{2}+{\\cal O}(E^{4},E\\partial^{2}E)\\), (\\(E\\equiv|{\\bf E}|\\), \\(f,g\\) to be determined) for reasons of gauge invariance. The indicated higher-order terms are at least of fourth order in \\(\\partial A\\) and will be omitted in the following. Now, the crucial observation is that there exists a unique choice of gauge for the background field that (i) satisfies the Polyakov gauge condition \\(\\partial_{0}A_{0}=0\\) in order to ensure the correspondence between \\(A_{0}\\) and \\(L\\), and (ii) establishes a one-to-one correspondence between \\({\\bf E}\\) and \\(A_{0}\\), so that an expansion in \\({\\bf E}\\) can be regarded as a derivative expansion in \\(A_{0}\\) (from now on, we work in the heat-bath rest frame where \\(A_{0}\\equiv A_{u}\\)): \\[A_{0}(x)=a_{0}-(x-x^{\\prime})_{i}E_{i}\\qquad\\Leftrightarrow\\qquad E_{i}=-\\partial _{i}A_{0}(x), \\tag{4}\\] where \\(a_{0}\\) and \\(E_{i}\\) are considered as constant, and \\(x^{\\prime}\\) is an arbitrary constant vector which can be set equal to zero. This gauge can be viewed as a combination of Polyakov and Schwinger-Fock gauge; the background field considered here lies exactly where the gauge conditions overlap. We remark that this is no longer true for higher derivative terms. The final task is to integrate out the thermal and quantum fluctuations in the background of the gauge field (4) and expand to second order in \\(E_{i}\\). At this point, it is useful to introduce the dimensionless temperature-rescaled variable \\[c:=\\frac{gA_{0}}{2\\pi T},\\qquad c\\in[0,1]. \\tag{5}\\] The compactness of \\(c\\) arises from the fact that \\(A_{0}\\) is a compact variable in finite-temperature Yang-Mills theories2. Then, the resulting effective action can be represented as a derivative expansion in \\(c\\): Footnote 2: This can be inferred from a Hamiltonian quantization starting from the Weyl gauge \\(A_{0}=0\\) and generating an \\(A_{0}\\) field by a time-dependent gauge transformation. This observation will furthermore become obvious when studying the background field dependence of the gluon propagator (cf. Eq. (12)).
\\[\\Gamma^{T}_{\\rm eff}[c]=\\int d^{d}x\\,\\Big{(}V(c,n^{a})+W(c,n^{a})\\,\\partial_{i }c\\partial_{i}c\\Big{)}, \\tag{6}\\] where the potential \\(V\\) and the weight function \\(W\\) also depend on the color space unit vector \\(n^{a}\\). Higher-order terms \\(\\sim(\\partial c)^{4}\\) are neglected. The superscript \\(T\\) in Eq. (6) signals that the effective action strongly depends on the presence of a heat bath. Indeed, \\(V\\) as well as \\(W\\) vanish or reduce to simple constants as \\(T\\to 0\\); this is because the \\(A_{0}\\) field (or \\(\\bar{A}_{u}\\) in Eq. (3)) ceases to be an invariant at \\(T=0\\), since it can be gauged away completely when the time direction is noncompact. Already at this stage, typical properties of the model become apparent. First, we observe that if \\(W(c,n^{a})\\geq 0\\), fluctuations of the Polyakov loop are suppressed; then, the ground state is solely determined by the minimum (or minima) of \\(V(c,n^{a})\\), which we denote by \\(c_{V},n^{a}_{V}\\). This ground state then is (not) confining if it corresponds to a center (a-)symmetric state, implying \\(\\langle L\\rangle=0\\) (\\(\\langle L\\rangle\\neq 0\\)). Fluctuations of the Polyakov loop can only be preferred if \\(W(c,n^{a})\\) becomes negative for certain values of \\(c\\) and \\(n^{a}\\), which we denote by \\(c_{W}\\) and \\(n^{a}_{W}\\). Whether or not these fluctuations lead to a confining phase again depends on the question of whether or not the minimum of \\(W(c,n^{a})\\) corresponds to a center-symmetric state. Moreover, it depends on the question of whether these fluctuations are strong enough to compensate for the influence of \\(V(c,n^{a})\\). Here we arrive at a main problem of the model: if \\(W(c_{W},n^{a}_{W})<0\\), then the action (6) is not bounded from below. In other words, arbitrarily strong fluctuations of \\(c\\) around \\(c_{W}\\) will lower the action without any bound. Of course, it is reasonable to assume (which we do in the following) that higher derivative terms \\((\\partial c)^{4}\\) or \\((\\partial^{2}c)^{2}\\) will establish such a lower bound, so that the strength of the fluctuations is dynamically controlled. Nevertheless, one drawback remains: we cannot make any statement about the nature of a possible phase transition. For this, we would have to know everything about the dynamical increase of \\(\\partial_{i}c\\partial_{i}c\\) when \\(W(c_{W},n^{a}_{W})\\) becomes negative for certain values of temperature. Since this is beyond the capacities of our model, we shall always assume in the following that the system is dominated by the weight function \\(W\\) and thus by fluctuations of the Polyakov loop whenever \\(W\\) becomes negative. Let us finally perform a dimensional analysis of the model. For simplicity, let us start with \\(d=4\\). With regard to Eq. (6), the potential has mass dimension 4, while the weight function has mass dimension 2. Due to the compactness of \\(A_{0}\\) as reflected by Eq. (5), the only mass scale which is _a priori_ present is given by the temperature \\(T\\). Hence, if \\(V\\) scaled with \\(T^{4}\\) and \\(W\\) with \\(T^{2}\\), say \\(V(c,n^{a})=T^{4}\\,v(c,n^{a})\\) and \\(W(c,n^{a})=T^{2}\\,w(c,n^{a})\\), where \\(v,w\\) are independent of \\(T\\), then we would never encounter a phase transition in our model; this is because increasing or lowering the temperature could never turn \\(W\\) from positive to negative values or vice versa.
At this stage, one may speculate that, since scale invariance is broken in Yang-Mills theories, the phenomenon of dimensional transmutation introduces another scale \\(\\mu\\) (e.g., the scale at which the renormalized coupling is defined). Then, the dimensionless function \\(w\\) can also depend on \\(T/\\mu\\). However, this is far from self-evident, since the breaking of scale invariance is induced by UV effects. But the functions \\(V\\) and \\(W\\) in the effective action \\(\\Gamma_{\\rm eff}^{T}\\) arise at finite temperature only and thus are a product of infrared physics. In particular, there are no UV divergences in the finite-temperature contributions to \\(\\Gamma_{\\rm eff}\\) which require another scale during a regularization procedure. Hence, one is tempted to conclude that the naive scaling argument given above is correct. Nevertheless, the naive scaling breaks down, as we shall see in the next section; but this time, an additional scale is introduced by the properties of the theory in the infrared. As is well known, finite-temperature field theories can develop a more singular infrared behavior than their zero-temperature counterparts. Indeed, while the effective action at zero temperature and even the effective action for thermalized purely magnetic background fields do not suffer from infrared divergences, the case considered here involving thermalized electric fields exhibits such singularities, which must be handled carefully. The massless gluon does not provide for a natural infrared cutoff which could control the low-momentum behavior of the theory. To conclude, the \\(d=4\\) model is in principle capable of describing a phase transition, because the finite-temperature infrared divergences require an additional scale which introduces distinct high- and low-temperature domains. Of course, there are various ways to deal with the infrared singularities; and as we will demonstrate below, they can arise from different physical motivations, leading to different physical results. Two possibilities are proposed in the present work. In the first and more natural one, we regularize the infrared divergences in the same technical way as the ultraviolet ones, so that in toto there is only one more scale than in the classical theory, which we identify with the defining scale of the coupling constant. As a consequence and a consistency check, the running of the coupling with temperature coincides with the running of the coupling with field strength or momenta - they are characterized by the same \\(\\beta\\)-function. In the second possibility, we study by way of example a regularization of the infrared divergences by an effective gluon mass \\(m_{\\rm eff}\\) which we insert by hand, assuming that such an additional scale may be generated dynamically in the full theory. The latter version of the theory exhibits the desired properties of two phases separated by a deconfinement phase transition, while the former does not. At \\(d>4\\) the situation is somewhat different and simpler, since here the coupling constant \\(g\\) is dimensionful, so that two scales are present already at the classical level. Moreover, no additional scale will be introduced at the quantum level, because the theory is infrared finite for \\(d>4\\). 
## 3 Calculation of the Effective Action

Starting from the standard formulation of Yang-Mills theories via the functional integral in Euclidean space with compactified time dimension, we employ the background field method [11] to fix the gauge for the fluctuating gluon fields, but thereby maintain gauge invariance for the background field. We arrive at the one-loop approximation by neglecting cubic and quartic terms in the fluctuating fields. The remaining two integrals over the gluonic and ghost fluctuations are Gaussian and lead to functional determinants upon integration; the one-loop effective action depending on the background field then reads \\[\\Gamma^{1}_{\\rm eff}[A]=\\frac{1}{2}{\\rm Tr}_{x{\\rm cL}}\\,\\ln\\Delta^{\\rm YM}[A] ^{-1}-{\\rm Tr}_{x{\\rm c}}\\,\\ln\\Delta^{\\rm FP}[A]^{-1}, \\tag{7}\\] where \\(\\Delta^{\\rm YM}[A]^{-1}\\) denotes the inverse gluon propagator, and \\(\\Delta^{\\rm FP}[A]^{-1}\\) the inverse ghost propagator, i.e., the Faddeev-Popov operator. The traces run over coordinate \\((x)\\), color (c), and Lorentz (L) labels. Introducing the abbreviations \\(D^{2}:=D_{\\mu}D_{\\mu}\\) and \\((DD)_{\\mu\\nu}:=D_{\\mu}D_{\\nu}\\), where the covariant derivative is defined by \\(D_{\\mu}:=\\partial_{\\mu}-{\\rm i}g{\\sf A}_{\\mu}\\), and suppressing the indices, the explicit representations of the propagators read \\[\\Delta^{\\rm YM}_{\\rm E}[A]^{-1} = -\\left[D^{2}-2{\\rm i}g\\,F+\\left(\\frac{1}{\\alpha}-1\\right)\\,DD \\right], \\tag{8}\\] \\[\\Delta^{\\rm FP}_{\\rm E}[A]^{-1} = -D^{2}\\,. \\tag{9}\\] In the following, we will work in the Feynman gauge, \\(\\alpha=1\\), which simplifies the calculations considerably. For the evaluation of Eq. (7), the spectrum of the inverse propagators is required. In color space, diagonalization can be achieved by introducing the eigenvalues \\(\\nu_{l}\\) of the matrix \\(n^{a}(T^{a})^{bc}\\), \\(l=1,\\ldots,N_{\\rm c}^{2}-1\\). The basic building block of the operators in Eqs. (8) and (9) is the covariant Laplacian, which upon insertion of the background field (4) yields (we set \\(x^{\\prime}=0\\)) \\[(-{\\rm i}D[A])^{2}=(-{\\rm i}\\partial_{i})^{2}+(-{\\rm i}D_{0}[a_{0}])^{2}+2g\\nu_{ l}E_{i}(-{\\rm i}D_{0}[a_{0}])x_{i}+(g\\nu_{l})^{2}E_{i}E_{j}\\,x_{i}x_{j}, \\tag{10}\\] where \\(-{\\rm i}D_{0}[a_{0}]=-{\\rm i}\\partial_{0}-g\\nu_{l}a_{0}\\), and the roman indices run over the \\(d-1\\) spatial components. The operator is obviously of harmonic oscillator type and can be diagonalized by a rigid rotation of the spatial part of the coordinate system. A prominent eigenvector is given by the direction of the electric field \\(E_{i}\\), which we may choose to point along the \\(1^{\\rm st}\\) direction of the new system. We finally obtain \\[(-{\\rm i}D[A])^{2}=(-{\\rm i}\\partial_{2})^{2}+\\cdots+(-{\\rm i}\\partial_{d-1})^{2}+\\left(g\\nu_{l}Ex_{1}+(-{\\rm i} D_{0})\\right)^{2}+(-{\\rm i}\\partial_{1})^{2}, \\tag{11}\\] where \\(E=\\sqrt{E_{i}E_{i}}\\). Up to now, we have achieved a partial diagonalization of the operators of Eqs. (8) and (9). While the Faddeev-Popov operator coincides with the Laplacian (11), the inverse gluon propagator receives additional contributions from the gluon-spin coupling to the electric field \\(\\sim-{\\rm i}gF_{\\mu\\nu}\\), which can easily be diagonalized.
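That the required rotation exists is elementary: the matrix \\(E_{i}E_{j}\\) appearing in Eq. (10) has rank one, with the single nonzero eigenvalue \\(E^{2}\\) along the field direction. A short numerical confirmation (a random field vector is assumed purely for illustration):

```python
# The quadratic form E_i E_j in Eq. (10) is rank one, so a rigid spatial
# rotation aligning the 1st axis with E diagonalizes it, as in Eq. (11).
import numpy as np

rng = np.random.default_rng(0)
E = rng.normal(size=3)                  # random electric field vector
Q = np.outer(E, E)                      # matrix E_i E_j
evals = np.sort(np.linalg.eigvalsh(Q))[::-1]
print("eigenvalues of E_i E_j:", np.round(evals, 6))
print("E.E =", round(float(E @ E), 6), "(the single nonzero eigenvalue)")
```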
Performing a Fourier transformation for the \\(d-2\\) unaffected components, \\(-{\\rm i}\\partial_{2},\\ldots,-{\\rm i}\\partial_{d-1}\\to p_{2},\\ldots,p_{d-1}\\), as well as for the time derivative, \\(-{\\rm i}\\partial_{0}\\to\\omega_{n}\\), \\(\\omega_{n}=2\\pi Tn\\), \\(n\\in\\mathbb{Z}\\) (Matsubara frequencies), we may write the inverse gluon propagator in the form \\[\\Delta^{\\rm YM}[A]^{-1}=p_{2}^{2}+\\cdots+p_{d-1}^{2}+\\left(g\\nu_{ l}E\\,x_{1}+(\\Pi_{0})\\right)^{2}+(-{\\rm i}\\partial_{1})^{2}+2\\lambda g\\nu_{l}E, \\tag{12}\\] where \\(\\Pi_{0}=\\omega_{n}-g\\nu_{l}a_{0}\\)4. The number \\(\\lambda\\) labels the different eigenvalues in Lorentz space arising from the above-mentioned gluon spin coupling with \\(\\lambda=1,-1,0\\); here, \\(\\lambda=1\\) and \\(\\lambda=-1\\) each appear only once, whereas \\(\\lambda=0\\) occurs with multiplicity \\(d-2\\), corresponding to the spatial directions which are unaffected by the electric field. Incidentally, the Faddeev-Popov operator is identical to Eq. (12) with \\(\\lambda=0\\) and multiplicity \\(1\\). Taking the prefactors and signs of the two traces in Eq. (7) into account, the Faddeev-Popov operator cancels exactly against two Lorentz eigenvalues of the spectrum of \\(\\Delta^{\\rm YM}[A]^{-1}\\) with \\(\\lambda=0\\), removing the spurious gauge degrees of freedom, so that only the physical, transverse part of the inverse gluon propagator remains, Footnote 4: The compactness of \\(A_{0}\\) or \\(a_{0}\\) becomes obvious here; e.g., for SU(2), where \\(\\nu_{l}=-1,0,1\\), a shift of \\(a_{0}\\) by an integer multiple of \\((2\\pi T)/g\\) can be compensated for by a shift of the Matsubara label \\(n\\). \\[\\Delta^{\\rm YM}_{\\perp}[A]^{-1}=p_{2}^{2}+\\cdots+p_{d-1}^{2}+ \\left(e_{l}\\,x_{1}+(\\Pi_{0}[a_{l}])\\right)^{2}+(-{\\rm i}\\partial_{1})^{2}+2 \\lambda e_{l}, \\tag{13}\\] where \\(\\lambda=0\\) now occurs with multiplicity \\(d-4\\). For reasons of brevity, we introduced the short forms \\[e_{l}:=|g\\nu_{l}E|,\\qquad a_{l}:=|g\\nu_{l}a_{0}| \\tag{14}\\] in Eq. (13); the use of the moduli in Eq. (14) is justified by the observation that, when tracing over a function of the inverse propagators, the result will not be sensitive to the signs of \\(g\\nu_{l}E\\) and \\(g\\nu_{l}a_{0}\\). The remaining problem of diagonalizing the 0-1 subspace at first sight resembles the problem of finding the spectrum of a relativistic particle in a constant magnetic field. There, one finds the eigenvalues (Landau levels) by shifting the \\(x_{1}\\) coordinate by \\(x_{1}\\to x_{1}-\\frac{(-{\\rm i}\\Pi_{0})}{e_{l}}\\) in order to arrive at a perfect harmonic oscillator. Here, the situation is not so simple, because the \\(a_{0}\\) field as well as the temperature dependence would drop out of the operator completely. In other words, such a shift is not in agreement with the periodic boundary conditions in time direction. Hence, the usual harmonic oscillator techniques arrive at their limits, and we have to find a different method that does not rely on the explicit knowledge of the spectra as is necessary for, e.g., \\(\\zeta\\)-function methods. We choose Schwinger's proper-time technique, which provides for a more direct handling of the propagators.
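The essence of the proper-time representation used in Eqs. (15) and (18) below is the elementary identity \\(1/M=\\int_{0}^{\\infty}ds\\,{\\rm e}^{-sM}\\) for \\(M>0\\). The following sketch merely checks this numerically for a few values, as a reminder of the structure that the function \\(M\\) of Eq. (19) generalizes:

```python
# Schwinger's trick in its simplest form: the propagator 1/M equals the
# proper-time integral of exp(-s*M); numerical sanity check.
from scipy.integrate import quad
import numpy as np

for M in (0.5, 1.0, 4.0):
    val, _ = quad(lambda s: np.exp(-s * M), 0, np.inf)
    print(f"M={M}: proper-time integral={val:.6f}, 1/M={1/M:.6f}")
```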
In terms of the transverse gluon propagator, the effective action reads in proper-time representation \\[\\Gamma^{1}_{\\rm eff}[A] = \\frac{1}{2}{\\rm Tr}_{x{\\rm cL}}\\,\\ln\\Delta^{\\rm YM}_{\\perp}[A]^{- 1}=-\\frac{1}{2}{\\rm tr}_{{\\rm cL}}\\int\\limits_{0}^{\\infty}\\frac{ds}{s}\\, \\langle x|{\\rm e}^{-s\\Delta^{\\rm YM}_{\\perp}[A]^{-1}}|x\\rangle \\tag{15}\\] \\[\\equiv-\\frac{\\Omega}{2}\\,{\\rm tr}_{\\rm cL}\\sum\\hskip-10.0pt\\int\\hskip-10.0pt\\int\\frac{d^{d}p}{(2\\pi)^{d}}\\int\\limits_{0}^{\\infty}\\frac{ds}{s}\\,{\\rm e}^{-sM(p,\\lambda,l;s)},\\] where \\(\\Omega\\) denotes the spacetime volume of \\(\\,\\mathbb{R}^{d-1}\\times S^{1}\\), \\(s\\) is the proper time, and the trace over the continuous part of the spectrum is taken in momentum space. The color trace runs over \\(l\\), which labels the color space eigenvalues, whereas the Lorentz trace runs over \\(\\lambda\\) with its associated multiplicities. The function \\(M\\) is defined via the Fourier representation of the proper-time transition amplitude \\[\\langle x|{\\rm e}^{-s\\Delta^{\\rm YM}_{\\perp}[A]^{-1}}|x^{\\prime}\\rangle= \\sum\\hskip-10.0pt\\int\\hskip-10.0pt\\int\\frac{d^{d}p}{(2\\pi)^{d}}\\,{\\rm e}^{{ \\rm i}p(x-x^{\\prime})}\\,{\\rm e}^{-sM(p,\\lambda,l;s)}, \\tag{16}\\] which can be determined by the differential equation \\[\\mathbb{1}=\\Delta^{\\rm YM}_{\\perp}[A]^{-1}\\,\\Delta^{\\rm YM}_{\\perp}[A]. \\tag{17}\\] When evaluated, for example, in momentum space, Eq. (17) is solved by \\[\\Delta^{\\rm YM}_{\\perp}[A](p,\\lambda,l)=\\int\\limits_{0}^{\\infty}ds\\,{\\rm e}^{ -sM(p,\\lambda,l;s)}, \\tag{18}\\] where \\(M\\) is given by \\[M(p,\\lambda,l;s) = p_{2}^{2}+\\cdots+p_{d-1}^{2}+\\frac{\\tanh 2e_{l}s}{2e_{l}s}(p_{1}+q )^{2}+\\frac{\\tanh e_{l}s}{e_{l}s}(\\omega_{n}-a_{l})^{2} \\tag{19}\\] \\[+\\frac{1}{2s}\\ln\\cosh 2e_{l}s+2e_{l}\\lambda.\\] Here, \\(q\\) denotes some function of \\(e_{l}\\) and \\(s\\) which becomes irrelevant when shifting the \\(p_{1}\\) integration in Eq. (15). Upon insertion of Eq. (19) into Eq. (15), the Gaussian momentum integration and the sum over \\(\\lambda\\) can easily be performed; the sum over Matsubara frequencies can be reorganized by a simple Poisson resummation,5 and we arrive at \\[\\Gamma^{1}_{\\rm eff}[A] = -\\frac{\\Omega}{2}{\\rm tr}_{\\rm c}\\frac{1}{(4\\pi)^{d/2}}\\int\\limits _{0}^{\\infty}\\frac{ds}{s^{d/2}}\\,e_{l}\\left(4\\sinh e_{l}s+\\frac{d-2}{\\sinh e_{ l}s}\\right) \\tag{20}\\] \\[\\times\\left[1+2\\sum_{n=1}^{\\infty}\\exp\\left(-\\frac{n^{2}}{4T^{2} }e_{l}\\coth e_{l}s\\right)\\cos\\frac{a_{l}}{T}n\\right].\\] Here, we have separated the zero-temperature part, corresponding to the first line times the "1" of the second line, from the finite-temperature contributions, corresponding to the first line read together with the \\(n\\) sum. Footnote 5: For technical details, see, e.g., [12, 14].

### Effective Action at Zero Temperature

Let us first study the temperature-independent part of the effective action Eq. (20) with particular emphasis on its renormalization: \\[\\Gamma^{1T=0}_{\\rm eff}[A]=-\\frac{\\Omega}{2}{\\rm tr}_{\\rm c}\\frac{1}{(4\\pi)^{ d/2}}\\int\\limits_{0}^{\\infty}\\frac{ds}{s^{d/2}}\\,e_{l}\\left(4\\sinh e_{l}s+ \\frac{d-2}{\\sinh e_{l}s}\\right). \\tag{21}\\] On the one hand, the proper-time integral is divergent at the upper bound, \\(s\\to\\infty\\), owing to the first term \\(\\sim\\sinh e_{l}s\\). Since large values of \\(s\\) correspond to the infrared regime, this divergence is not related to the standard renormalization of bare parameters, which is a UV effect.
In fact, this divergence is analogous to the Nielsen-Olesen unstable mode [9] of the Savvidy vacuum6; one can give a meaning to this essential singularity by rotating the contour of the integral over the \\(\\sinh\\) term into the lower complex plane, \\(-{\\rm i}s\\to s\\). The effective action then picks up an imaginary part that characterizes the instability of the constant electric background field considered here. Footnote 6: At \\(T=0\\), the present situation involving an external _electric_ field is identical to the _magnetic_ Savvidy vacuum owing to the Euclidean \\(O(4)\\) symmetry. On the other hand, the proper-time integral is also divergent at the lower bound, corresponding to the ultraviolet. The leading singularity is of the order \\(s^{-d/2}\\), so that \\(m\\) subtractions are required for \\(d=2m\\) or \\(d=2m+1\\). The leading singularity which is field independent can easily be removed by demanding that \\(\\Gamma^{T=0}_{\\rm eff}[A=0]=0\\) (first renormalization condition). The next-to-leading singularity proportional to \\(e_{l}^{2}\\sim E^{2}\\) is removed by the second renormalization condition \\((\\partial{\\cal L}_{\\rm eff}/\\partial E^{2})|_{E\\to 0}=1/2\\), where \\(\\Gamma_{\\rm eff}=\\int{\\cal L}_{\\rm eff}\\); this ensures that the classical Lagrangian is recovered when all nonlinear interactions are switched off, and corresponds to a field-strength and charge renormalization \\[{\\cal L}_{\\rm cl}^{\\rm R}\\equiv\\frac{1}{2}E_{\\rm R}^{2}=\\frac{1}{2}Z_{3}^{-1}E^{2}, \\tag{22}\\] where \\(E_{\\rm R}\\) denotes the renormalized field, and \\(Z_{3}\\) is the wave function renormalization constant. The latter can be read off from Eq. (21) by isolating the singularity \\(\\sim E^{2}\\), \\[Z_{3}^{-1} = 1-\\frac{26-d}{6(4\\pi)^{d/2}}\\,N_{\\rm c}\\,\\bar{g}^{2}\\int\\limits_ {\\mu^{2}/\\Lambda^{2}}\\frac{ds}{s^{d/2-1}}, \\tag{23}\\] where we have used an explicit cutoff \\(\\Lambda\\), employed \\({\\rm tr}_{\\rm c}|\\nu_{l}|^{2}=\\sum_{l=1}^{N_{\\rm c}^{2}-1}|\\nu_{l}|^{2}=N_{\\rm c}\\), and have introduced the dimensionless coupling \\(\\bar{g}^{2}=g^{2}\\mu^{d-4}\\) with the aid of a reference scale \\(\\mu\\) (at which \\(g\\) is defined). To one-loop order, the \\(\\beta\\) function can be read off from the coefficient of the UV divergence of \\(Z_{3}^{-1}\\): \\[\\beta_{\\bar{g}^{2}}\\equiv\\partial_{t}\\bar{g}^{2}=(d-4)\\bar{g}^{2}-b_{0}^{d} \\bar{g}^{4}, \\tag{24}\\] where \\[b_{0} = \\frac{11}{3}\\frac{N_{\\rm c}}{8\\pi^{2}},\\quad{\\rm for}\\quad d=4,\\] \\[b_{0}^{d} = \\frac{(26-d)}{3(d-4)}\\frac{N_{\\rm c}}{(4\\pi)^{d/2}},\\quad{\\rm for }\\quad d>4, \\tag{25}\\] and \\(t\\) denotes the "renormalization group time" \\(\\ln\\mu/\\Lambda\\). Here we have rediscovered the well-known one-loop results, including the remarkable observation that the \\(\\beta\\) function for the dimensionful coupling \\(g^{2}=\\bar{g}^{2}\\mu^{4-d}\\) vanishes precisely in the critical string dimension \\(d=26\\)[15]. Note that in \\(4<d<26\\), the \\(\\beta\\) function develops a UV-stable fixed point: \\[\\bar{g}_{*}^{2}=\\frac{d-4}{b_{0}^{d}}=\\frac{3(d-4)^{2}}{26-d}\\,\\frac{(4\\pi)^{ d/2}}{N_{\\rm c}}. \\tag{26}\\] Of course, this fixed point lies in the perturbative domain (\\(\\bar{g}_{*}^{2}/4\\pi\\ll 1\\)) only for very large \\(N_{\\rm c}\\). As an alternative to these considerations of renormalization, the integral in Eq. (21) can be treated more directly with an appropriate regularization prescription.
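Before turning to that regularization, it may help to put numbers to the fixed point (26). The following sketch simply tabulates \\(\\bar{g}_{*}^{2}/(4\\pi)\\) from Eq. (26) for a few dimensions and color numbers; nothing beyond the formula itself is assumed.

```python
# The UV-stable fixed point of Eq. (26) in 4 < d < 26, for orientation.
import math

def g_star_sq(d, Nc):
    return 3 * (d - 4)**2 * (4 * math.pi)**(d / 2) / ((26 - d) * Nc)

for Nc in (2, 3, 10):
    row = ", ".join(f"d={d}: {g_star_sq(d, Nc)/(4*math.pi):8.2f}"
                    for d in (5, 6, 10))
    print(f"Nc={Nc}:  g*^2/(4 pi): {row}")
# g*^2/(4 pi) scales like 1/Nc, so the fixed point is perturbative only
# at large Nc, as noted above
```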
Let us briefly sketch a proper-time variant of dimensional regularization for later use in the case \\(d=4\\). Shifting the singularities at \\(s\\to 0\\) by \\(\\epsilon\\) and introducing a mass scale \\(\\mu\\), Eq. (21) can be written as \\[{\\cal L}_{\\rm eff}^{1T=0}=-\\frac{1}{8\\pi^{2}}{\\rm tr}_{\\rm c}\\,\\mu^{2\\epsilon }\\left[\\int\\limits_{0}^{-{\\rm i}\\infty}\\frac{ds}{s^{2-\\epsilon}}\\,e_{l}\\sinh e _{l}s+\\frac{d-2}{4}\\int\\limits_{0}^{\\infty}\\frac{ds}{s^{2-\\epsilon}}\\,\\frac{e _{l}}{\\sinh e_{l}s}\\right]. \\tag{27}\\] These integrals can be evaluated [16], and the result for the one-loop contribution to the zero-temperature effective Lagrangian in \\(d=4\\) reads \\[{\\cal L}_{\\rm eff}^{1T=0}=-\\frac{1}{8\\pi^{2}}{\\rm tr}_{\\rm c}\\,e_{l}^{2}\\left[ \\frac{11}{12\\epsilon}-\\frac{11}{12}\\ln\\frac{e_{l}}{\\mu^{2}}+{\\rm const.}+{\\rm imag.\\ parts}+{\\cal O}(\\epsilon)\\right]. \\tag{28}\\] The appearance of the simple pole in \\(\\epsilon\\) implies a charge and field strength renormalization as outlined above. To be precise, in the background field formulation, the coupling runs with the scale set by the strength of the external _field_: \\(g^{2}=g^{2}(gE/\\mu^{2})\\); this is analogous to the _momentum_ dependence of the coupling in the standard formulation. Including the correctly (re-)normalized classical term, the total effective Lagrangian to one loop can then be written as \\[{\\cal L}_{\\rm eff}^{T=0}(gE)={\\cal L}_{\\rm cl}+{\\cal L}_{\\rm eff}^{1T=0}=\\frac {1}{8}b_{0}\\,(gE)^{2}\\ln\\frac{(gE)^{2}}{{\\rm e}\\kappa^{2}}+{\\rm imag.\\ parts},\\qquad\\kappa^{2}=\\mu^{4}{\\rm e}^{- \\frac{4}{b_{0}g^{2}}-1}, \\tag{29}\\] where we have introduced the renormalization group invariant quantity \\(\\kappa\\) corresponding to the minimum of \\({\\cal L}_{\\rm eff}^{T=0}(gE)\\), and \\(b_{0}\\) is given by the first line of Eq. (25). Concerning the imaginary parts, the following comment should be made: within the Savvidy model, the imaginary parts indicate the instability of the constant-field vacuum configuration signaling the final failure of the model. In the present case, they are just an artefact of truncating the derivative expansion of the effective action at second order; this truncation is formally equivalent to the constant-field approximation. Upon an inclusion of non-constant terms which would affect only higher-derivative contributions, we expect the imaginary part to vanish; this is because the unstable modes are then cut off by the length scale of variation of the fields. For the regularization/renormalization program of \\({\\cal L}_{\\rm eff}^{1T=0}\\) in \\(d\\geq 6\\), more subtractions than in \\(d=4\\) are needed; since these theories are nonrenormalizable in the common sense, these subtractions correspond to counter-terms of field operators of higher mass dimension. To be precise, \\(d>4\\) Yang-Mills theories can only be defined with a cutoff (with physical relevance); therefore, some cutoff procedure is implicitly understood. Nevertheless, the precise form of the cutoff procedure only affects higher-order operators which are of no relevance for the present model. Moreover, it is perfectly legitimate to study \\(d>4\\) quantum Yang-Mills theories in the sense of effective theories valid below a certain cutoff scale.

### Effective Action at Finite Temperature

Equipped with these preliminaries, we now turn to the more interesting finite-temperature part of Eq.
(20): \\[{\\cal L}_{\\rm eff}^{1T} = -\\frac{1}{(4\\pi)^{d/2}}{\\rm tr}_{\\rm c}\\int\\limits_{0}^{\\infty} \\frac{ds}{s^{d/2}}\\,e_{l}\\left(4\\sinh e_{l}s+\\frac{d-2}{\\sinh e_{l}s}\\right) \\sum_{n=1}^{\\infty}\\exp\\left(-\\frac{n^{2}}{4T^{2}}e_{l}\\coth e_{l}s\\right) \\cos\\frac{a_{l}}{T}n. \\tag{30}\\] For \\(s\\to 0\\), the integral remains completely finite, since the coth in the exponent develops a \\(1/s\\) pole; i.e., there are no UV divergences in the thermal contribution to \\({\\cal L}_{\\rm eff}\\), as is to be expected. At the opposite end, \\(s\\to\\infty\\), we again encounter the sinh divergence induced by the unstable mode. However, this is not the only infrared problem: an attempt at circumventing this problem by a rotation of the \\(s\\) contour as in the zero-temperature case would lead to a disastrous behavior of the \\(n\\) sum due to the poles of the coth on the imaginary axis. In fact, it is the interplay between the proper-time integral and the \\(n\\) sum that produces further infrared divergences (at least for \\(d=4\\)). It is well known in the literature that particles with Bose-Einstein statistics develop stronger infrared singularities at \\(T\\neq 0\\) than at zero temperature [17]. Unfortunately, the status of these finite-temperature singularities is far from being settled, contrary to the \\(T=0\\) case. In the present paper, we shall investigate two different methods. The consequences of an explicit mass-like cutoff are discussed in Sec. 5. Here, we propose a more natural treatment by regularizing the thermal infrared divergences of Eq. (30) by the same method used to treat the UV divergences in Eq. (27) in the \\(T=0\\) case. Thereby, the same scale \\(\\mu\\) which serves to define the value of the coupling constant is introduced. Taking these considerations into account, the Lagrangian is modified according to (substitution \\(\\mu^{2}s=u\\)) \\[{\\cal L}_{\\rm eff}^{1T} = -\\frac{4}{(4\\pi)^{d/2}}{\\rm tr}_{\\rm c}\\,\\mu^{d}\\int\\limits_{0}^{ \\infty}\\frac{du}{u^{d/2-\\epsilon}}\\left(\\frac{e_{l}}{\\mu^{2}}\\sinh\\frac{e_{l} }{\\mu^{2}}u+\\frac{d-2}{4}\\frac{e_{l}/\\mu^{2}}{\\sinh\\frac{e_{l}}{\\mu^{2}}u}\\right) \\tag{31}\\] \\[\\times\\sum_{n=1}^{\\infty}\\exp\\left(-\\frac{n^{2}}{4}\\frac{\\mu^{2}} {T^{2}}\\frac{e_{l}}{\\mu^{2}}\\coth\\frac{e_{l}}{\\mu^{2}}u\\right)\\cos\\frac{a_{l} }{T}n.\\] In the context of our approximation in terms of derivatives of \\(A_{0}\\), we need only the terms \\(\\sim e_{l}^{0}\\) and \\(\\sim e_{l}^{2}\\) of Eq. (31). Expanding in \\(e_{l}/\\mu^{2}\\) and performing the \\(s\\) integral, we arrive at \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{0} = -\\frac{(d-2)\\Gamma(d/2)}{\\pi^{d/2}}\\sum_{l=1}^{N_{\\rm c}^{2}-1} \\sum_{n=1}^{\\infty}\\frac{\\cos\\frac{a_{l}}{T}n}{n^{d}}\\,T^{d}=:V(c,n^{a}), \\tag{32}\\] \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{e_{l}^{2}} = -\\frac{1}{6\\pi^{d/2}}\\sum_{l=1}^{N_{\\rm c}^{2}-1}\\left(\\frac{e_{ l}}{\\mu^{2}}\\right)^{2}\\sum_{n=1}^{\\infty}\\frac{\\cos\\frac{a_{l}}{T}n}{n^{d}} \\left(\\frac{n^{2}\\mu^{2}}{4T^{2}}\\right)^{2+\\epsilon}\\Gamma(d/2\\!-\\!2\\!-\\!\\epsilon)\\big{[}(26-d)-(d-2)(d-4-2 \\epsilon)\\big{]}T^{d}\\,. \\tag{33}\\] For the term \\(\\sim e_{l}^{0}\\) in the first line, the \\(\\epsilon\\to 0\\) limit could safely be performed for \\(d\\geq 0\\); by construction, this term depends only on \\(a_{l}\\sim a_{0}\\sim c\\) (cf. Eq. (5)) and therefore corresponds to the potential \\(V(c,n^{a})\\) as introduced in Eq. (6).
The term \\(\\sim e_{l}^{2}\\) in Eq. (33) contributes to the function \\(W(c,n^{a})\\) (in addition to the classical term). It turns out that, for \\(d>4\\), the limit \\(\\epsilon\\to 0\\) can be performed immediately without running into an \\(\\epsilon\\) pole. This means that, in these dimensions, the thermally modified infrared behavior of the theory is under control. The order \\(e_{l}^{2}\\) term of the one-loop effective action then reads \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{e_{l}^{2}}=-\\frac{\\Gamma(d/2-2)}{96\\pi^{d/2}}\\sum_{l=1}^{N_{\\rm c}^{2}-1}\\Big{(}\\frac{e_{l}}{T^{2}}\\Big{)}^{2}\\sum_{n=1}^{\\infty}\\frac{\\cos\\frac{a_{l}}{T}n}{n^{d-4}}\\big{[}(26\\!-\\!d)-(d\\!-\\!2)(d\\!-\\!4)\\big{]}T^{d},\\quad d>4. \\tag{34}\\]

Obviously, the \\(\\mu\\) dependence has dropped out as a consequence of the well-behaved \\(\\epsilon\\to 0\\) limit. Nevertheless, there is a second scale besides the temperature, which is given by the dimensionful coupling constant \\(g\\) in \\(d>4\\). In \\(d=4\\), the situation is more involved, since Eq. (33) develops a simple pole in \\(\\epsilon\\) for \\(\\epsilon\\to 0\\). In order to isolate the pole and the terms of order \\(\\epsilon^{0}\\) which contain the physics, we first have to perform the \\(n\\) sum; this can be achieved with the aid of the polylogarithmic function (also known as Jonquière's function) \\[{\\rm Li}(z,q):=\\sum_{n=1}^{\\infty}\\frac{q^{n}}{n^{z}} \\tag{35}\\] and its analytical continuation for arbitrary real values of \\(z\\) [18]. We finally find for Eq. (33) in \\(d=4\\): \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{e_{l}^{2}} = -\\frac{1}{8\\pi^{2}}\\sum_{l=1}^{N_{\\rm c}^{2}-1}e_{l}^{2}\\left[\\frac{11}{12\\epsilon}-\\frac{11}{12}\\ln\\frac{T^{2}}{\\mu^{2}}+\\frac{11}{6}{\\rm Li}^{\\prime}(0,{\\rm e}^{{\\rm i}\\frac{a_{l}}{T}})+\\frac{11}{6}{\\rm Li}^{\\prime}(0,{\\rm e}^{-{\\rm i}\\frac{a_{l}}{T}})+\\frac{1}{6}+\\frac{11}{12}C-\\frac{11}{12}\\ln 4\\right],\\quad d=4, \\tag{36}\\] where the prime on Li denotes the derivative with respect to the first argument, and \\(C\\) is Euler's constant, \\(C\\simeq 0.577216\\). Our first observation is that the \\(\\epsilon\\) pole in this thermal contribution is identical to the one for the zero-temperature Lagrangian in Eq. (28). Since the latter is responsible for the usual charge and field strength renormalization leading to a field-strength-dependent coupling \\(g^{2}=g^{2}(gE/\\mu^{2})\\), the present \\(\\epsilon\\) pole analogously suggests a running of the coupling with the scale set by the temperature: \\(g^{2}=g^{2}(T^{2}/\\mu^{2})\\). Because the residues of the two poles are identical, the thermal running is governed by the same \\(\\beta\\) function. This can be viewed as a consistency check of our treatment of the infrared singularities. Furthermore, the terms \\(\\sim\\epsilon^{0}\\) depend on the ratio \\(T^{2}/\\mu^{2}\\) (even in the limit \\(a_{l}\\to 0\\)). This implies that they cannot be normalized away as in the zero-temperature case, but lead to a thermal renormalization of the two-point function. This is in perfect analogy to QED, where an equivalent modification of the two-point function appears with the prefactor (= Yang-Mills \\(\\beta\\) function) replaced by the QED \\(\\beta\\) function, and the role of \\(\\mu\\) is played by the natural scale of QED: the electron mass [19, 14]. In conclusion, it is the \\(\\ln\\frac{T^{2}}{\\mu^{2}}\\) term in Eq. (36) which leads to a breakdown of the naive scaling as outlined in Sec.
2 and allows for a separation of high- and low-temperature regimes. This could in principle facilitate a description of a phase transition within the \\(d=4\\) model. However, as we shall find in the next section, the model does not make use of this option.

## 4 Analysis of the Effective Action

In the following analysis of the previously derived effective action for arbitrary \\(d\\) and \\(N_{\\rm c}\\), for simplicity we confine ourselves to \\(N_{\\rm c}=2\\), which provides for a convenient study of all the essential features of the model. Then, the color space eigenvalues \\(\\nu_{l}\\) are simply given by \\[\\nu_{l}=-1,0,1,\\quad{\\rm for}\\quad{\\rm SU}(2). \\tag{37}\\] The results given above can be summarized in the effective Lagrangian (cf. Eqs. (5) and (6)): \\[{\\cal L}_{\\rm eff}^{T}[c]=V(c)+W(c)\\,\\partial_{i}c\\partial_{i}c, \\tag{38}\\] where we have used the relations (cf. also Eq. (4)) \\[c=\\frac{ga_{0}}{2\\pi T},\\quad{\\rm and}\\quad\\partial_{i}c=\\frac{-gE_{i}}{2\\pi T},\\quad c\\in[0,1]. \\tag{39}\\] The convenient dimensionless quantity \\(c\\) is now considered as the dynamical variable of the effective theory; for SU(2), the center symmetric point is given by \\(c=1/2\\), since center symmetry relates \\(c\\) with \\(1-c\\). If the vacuum state is characterized by \\(c=1/2\\), our model is confining, whereas a vacuum state different from \\(c=1/2\\) characterizes the deconfinement phase.

### Four Dimensions \\(d=4\\)

Beginning with the most relevant case of four spacetime dimensions, the potential can be read off from Eq. (32). Performing the \\(n\\) sum leads to a Bernoulli polynomial, \\[V(c)=-\\frac{3\\pi^{2}}{45}\\,T^{4}+\\frac{4\\pi^{2}}{3}\\,T^{4}\\,c^{2}(1-c)^{2}, \\tag{40}\\] in agreement with [6]. While the first term is simply the free energy of \\(N_{\\rm c}^{2}-1=3\\) free gluons, the second term models the shape of the potential, revealing a maximum at \\(c=1/2\\) and minima at \\(c=0,1\\), thereby characterizing the deconfinement phase (see Fig. 1(a)). However, even if the potential had displayed a minimum at \\(c=1/2\\), it would have been of no use, since the potential by itself would remain confining for arbitrarily high temperatures. There would be no comparative scale separating two different phases. A Polyakov loop potential depending on \\(c\\) and \\(T\\) only can never model the deconfinement phase transition of Yang-Mills theories!

The weight function \\(W(c)\\) can be read off from Eq. (36) in combination with the classical Lagrangian \\({\\cal L}_{\\rm cl}=E^{2}/2=\\frac{2\\pi^{2}T^{2}}{g^{2}(\\mu)}\\partial_{i}c\\partial_{i}c\\): \\[W(c) = 2\\pi^{2}T^{2}\\left\\{\\!\\frac{1}{g^{2}(\\mu)}-b_{0}\\!\\left[-\\ln\\frac{T}{\\mu}+\\frac{C}{2}+\\frac{1}{11}-\\ln 2+{\\rm Li}^{\\prime}(0,{\\rm e}^{2\\pi{\\rm i}c})+{\\rm Li}^{\\prime}(0,{\\rm e}^{-2\\pi{\\rm i}c})\\right]\\!\\right\\} = 2\\pi^{2}T^{2}b_{0}\\left[\\ln\\frac{T}{\\sqrt{\\kappa}}-\\frac{1}{4}-\\frac{1}{11}-\\frac{C}{2}+\\ln 2-{\\rm Li}^{\\prime}(0,{\\rm e}^{2\\pi{\\rm i}c})-{\\rm Li}^{\\prime}(0,{\\rm e}^{-2\\pi{\\rm i}c})\\right], \\tag{41}\\] where \\(b_{0}\\) denotes the \\(\\beta\\) function coefficient given in the first line of Eq. (25) (for \\(N_{\\rm c}=2\\)). In the second step, we have expressed the running coupling and the scale \\(\\mu\\) by the renormalization group invariant \\(\\kappa\\) defined in Eq. (29), so that \\(W(c)\\) is itself renormalization group invariant!
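As a sanity check of the resummation leading from Eq. (32) to Eq. (40), the following minimal Python sketch (our addition; the truncation nmax and the sampled values of \\(c\\) are arbitrary choices) sums the series directly for SU(2) in \\(d=4\\) and compares it with the closed form:

```python
import math

def V_sum(c, T=1.0, d=4, nmax=20000):
    # Direct n-sum of Eq. (32) for SU(2): nu_l = -1, 0, +1 and
    # a_l/T = 2*pi*c*nu_l (cf. Eqs. (37) and (39)).
    pref = -(d - 2) * math.gamma(d / 2) / math.pi ** (d / 2)
    total = 0.0
    for nu in (-1.0, 0.0, 1.0):
        total += sum(math.cos(2.0 * math.pi * c * nu * n) / n ** d
                     for n in range(1, nmax + 1))
    return pref * total * T ** d

def V_closed(c, T=1.0):
    # Closed form of Eq. (40): free-gluon term plus the Bernoulli-type polynomial.
    return (-3.0 * math.pi ** 2 / 45.0
            + 4.0 * math.pi ** 2 / 3.0 * c ** 2 * (1.0 - c) ** 2) * T ** 4

for c in (0.0, 0.1, 0.25, 0.5):
    print(f"c={c:4.2f}  n-sum={V_sum(c):+.8f}  closed form={V_closed(c):+.8f}")
```

Both columns agree up to the truncation error of the \\(n\\) sum, with the deconfining maximum of \\(V\\) at \\(c=1/2\\) clearly visible.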
In fact, lowering the temperature can turn the weight function negative for any value of \\(c\\), so that fluctuations of the Polyakov loop are energetically preferred for low \\(T\\). However, the confining value \\(c=1/2\\) always represents a local maximum of the weight function \\(W(c)\\), as is visible in Fig. 1(b). For \\(c\\to 0,1\\), the weight function diverges to \\(-\\infty\\), but at \\(c=0,1\\) it jumps to its absolute maxima. Analytically, one finds \\[W\\!\\left([c=0,1;c=1/2]\\right)=2\\pi^{2}T^{2}b_{0}\\left(\\left[\\ln 4\\pi;\\ln\\pi\\right]-\\frac{15}{44}-\\frac{C}{2}+\\ln\\frac{T}{\\sqrt{\\kappa}}\\right). \\tag{42}\\]

To conclude, although our model indicates that fluctuations of the Polyakov loop become important at low temperatures, they do not fluctuate around the confining minimum, but energetically prefer a center asymmetric ground state for \\(c\\). Hence, our model is not capable of finding a confinement phase\\({}^{7}\\). Nevertheless, it should be stressed that the present treatment of the infrared modes is part of the definition of the model, although we have tried to formulate the present version as "universally" as possible. In fact, the regularization method considered here, which belongs to the standard class of regularization techniques, guarantees scheme-independent results. But it is also possible that the infrared modes are screened by a physical mechanism which involves another scale and thereby introduces "nonuniversal" information. Such a version of the model is discussed by way of example in Sec. 5.

Footnote 7: The discontinuous behavior of the weight function for \\(c\\to 0,1\\) gives rise to speculation. Physically, such behavior is not acceptable (nor interpretable); rather, one may expect that some mechanism will lead to a wash-out of these singularities, unveiling a smooth functional form of \\(W(c)\\) for \\(c\\in[0,1]\\) (although the origin of such a mechanism is still unclear to us). Probably, this will lead to a weight function of Mexican-hat type with deconfining minima. However, with even more reservations, one might speculate upon the possibility of a smooth curve for \\(W(c)\\) which directly interpolates between the extremal values at \\(c=0,1/2,1\\) given in Eq. (42) with a confining minimum at \\(c=1/2\\). Then, the model would exhibit a confining phase for small enough temperatures, when \\(W(c)\\) becomes negative for \\(c=1/2\\). The reason for mentioning such vague speculations is to demonstrate how possible predictions could in principle arise from the model: following the reasoning of Sec. 2, the temperature of the phase transition is then given by \\(W(c=1/2)|_{T=T_{\\rm cr}}=0\\). From Eq. (42), we obtain: \\(T_{\\rm cr}/\\sqrt{\\kappa}\\simeq 0.60\\). Identifying \\(\\kappa\\) with the string tension \\(\\sigma\\) (as is the case in the leading-log model [20]), our speculative estimate is in remarkably good agreement with the lattice value [21], \\(T_{\\rm cr}/\\sqrt{\\sigma}\\simeq 0.69\\).

### Beyond Four Dimensions \\(d>4\\)

In spacetime dimensions larger than four, the situation simplifies owing to the absence of infrared problems. The Polyakov loop potential is again given by Eq. (32), which, for \\(N_{\\rm c}=2\\), reads \\[V(c)=-\\frac{(d-2)\\Gamma(d/2)\\zeta(d)}{\\pi^{d/2}}\\,T^{d}-\\frac{2}{\\pi^{d/2}}(d-2)\\Gamma(d/2)\\sum_{n=1}^{\\infty}\\frac{\\cos 2\\pi cn}{n^{d}}\\,T^{d}, \\tag{43}\\] where \\(\\zeta(d)\\) denotes Riemann's \\(\\zeta\\) function.
Equation (43) is in perfect agreement with [22], where it is demonstrated that a representation of \\(V(c)\\) in terms of Bernoulli polynomials of \\(d\\)th degree exists in \\(d=2,4,6,8,\\dots\\). We could as well choose a representation in terms of polylogarithmic functions which interpolate smoothly between the Bernoulli polynomials. In toto, the qualitative behavior of \\(V(c)\\) does not change significantly for different \\(d\\): \\(V(c=1/2)\\) is always a (deconfining) maximum (cf. Fig. 1(a)). The situation is different for the weight function \\(W(c)\\): in terms of the dimensionless coupling \\(\\bar{g}^{2}=\\mu^{d-4}g^{2}\\) and polylogarithmic functions, the contributions from Eq. (34) together with the classical Lagrangian can be represented as \\[W(c) = 2\\pi^{2}\\frac{T^{2}}{\\mu^{2}}\\,\\mu^{d-2}\\Bigg{\\{}\\frac{1}{\\bar{g}^{2}}-\\frac{T^{d-4}}{\\mu^{d-4}}\\,\\frac{\\Gamma(d/2-2)}{48\\pi^{d/2}}\\big{[}(26\\!-\\!d)-(d\\!-\\!2)(d\\!-\\!4)\\big{]}\\Big{[}{\\rm Li}(d-4,{\\rm e}^{2\\pi{\\rm i}c})+{\\rm Li}(d-4,{\\rm e}^{-2\\pi{\\rm i}c})\\Big{]}\\Bigg{\\}}. \\tag{44}\\]

Figure 1: (a) SU(2)-Polyakov loop potential \\(V(c)\\) in units of \\(T^{d}\\) for \\(d=4,7,8,9\\) (cf. Eqs. (40) and (43)). The \\(d=4,8\\) potentials correspond to Bernoulli polynomials \\(B_{4}\\) and \\(B_{8}\\). (b) SU(2) weight function \\(W(c)\\) in units of \\(\\kappa\\) in \\(d=4\\) for different values of the temperature \\(t:=T/\\sqrt{\\kappa}=0.2,0.6,1\\) (cf. Eq. (41)). The disconnected absolute maxima at \\(W(c=0,1)\\) are not depicted.

On the one hand, we again encounter the combination of polylogarithmic functions that interpolate between the Bernoulli polynomials of \\((d-4)\\)th degree for \\(d=6,8,\\dots\\), essentially maintaining their typical shape. On the other hand, there is an important sign change owing to the factor \\((26\\!-\\!d)-(d\\!-\\!2)(d\\!-\\!4)\\) at the "critical dimension" \\[d_{\\rm cr}=\\frac{1}{2}(5+\\sqrt{97})\\simeq 7.42. \\tag{45}\\] For \\(d<d_{\\rm cr}\\), \\(W(c)\\) has a maximum at \\(c=1/2\\), implying that there is no confining phase in these dimensions. But for \\(d>d_{\\rm cr}\\), the weight function exhibits an absolute minimum at the center symmetric value \\(c=1/2\\) (see Fig. 2(a)). As a consequence, \\(W(c)\\) can become negative at \\(c=1/2\\) for _increasing_ temperature, as is depicted in Fig. 2(b). This is in agreement with the fact that the dimensionless coupling grows large in the _high_-momentum limit with a UV-stable fixed point given by Eq. (26). Therefore, the model, somewhat counter-intuitively, describes a system with two different phases, a deconfined phase at low temperature and a confining strong-coupling phase at high temperature. In terms of the dimensionful coupling constant, the critical temperature where \\(W(c=1/2)|_{T=T_{\\rm cr}}=0\\) is given by \\[g^{2}T_{\\rm cr}^{d-4}=\\frac{24\\pi^{d/2}}{\\Gamma(d/2-2)\\zeta(d-4)}\\,\\frac{2^{d-5}}{(2^{d-5}-1)\\big{[}(d-2)(d-4)-(26-d)\\big{]}},\\quad d>d_{\\rm cr}. \\tag{46}\\] Because of the strong increase of the \\(\\Gamma\\) function in the denominator, the left-hand side rapidly falls off for increasing \\(d\\). Typical values are \\(g^{2}T_{\\rm cr}^{d-4}\\simeq 411.4,12.0,0.036\\) for \\(d=8,16,26\\). Therefore, the deconfined phase vanishes in the formal limit \\(d\\to\\infty\\).
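The numbers quoted in this paragraph can be reproduced directly from Eqs. (45) and (46); the following minimal sketch (our addition, assuming SciPy for the \\(\\zeta\\) function) is one way to do it:

```python
import math
from scipy.special import zeta

# Critical dimension, Eq. (45): sign change of (26 - d) - (d - 2)(d - 4).
d_cr = (5.0 + math.sqrt(97.0)) / 2.0
print(f"d_cr = {d_cr:.2f}")  # 7.42

def g2_Tcr(d):
    # Right-hand side of Eq. (46): the combination g^2 * T_cr^(d-4).
    num = 24.0 * math.pi ** (d / 2.0)
    den = math.gamma(d / 2.0 - 2.0) * zeta(d - 4.0)
    factor = 2.0 ** (d - 5) / ((2.0 ** (d - 5) - 1.0)
                               * ((d - 2.0) * (d - 4.0) - (26.0 - d)))
    return num / den * factor

for d in (8, 16, 26):
    print(d, g2_Tcr(d))  # ~411.4, ~12.0, ~0.036
```

The three printed values reproduce the estimates \\(g^{2}T_{\\rm cr}^{d-4}\\simeq 411.4,12.0,0.036\\) quoted in the text.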
Incidentally, it is interesting to observe that the discontinuities of the weight function \\(W(c)\\) for \\(c\\to 0,1\\) vanish for \\(d>5\\); there, \\(W(c)\\) runs continuously to a finite extremal value for \\(c\\to 0,1\\). Between four and five dimensions, the discontinuity persists, and \\(W(c=0,1)\\) increases for increasing \\(d\\), finally approaching plus infinity at \\(d\\to 5^{-}\\).

Figure 2: (a) SU(2) weight function \\(W(c)\\) in units of \\(\\mu\\) for \\(d=6,7,8,10\\) and fixed \\(T\\) and \\(\\bar{g}\\) (cf. Eq. (44)). Above \\(d=d_{\\rm cr}\\simeq 7.42\\), \\(c=1/2\\) represents the minimum of \\(W(c)\\). (b) The same weight function is now plotted for fixed \\(\\bar{g}\\) and \\(d=10>d_{\\rm cr}\\) for various temperature values close to \\(T_{\\rm cr}\\).

## 5 Additional Infrared Scales in \\(d=4\\)

The preceding section revealed that the \\(d=4\\) model required additional instructions on how to treat the singular infrared modes. Although we rate the procedure established above as the most general one of a "universal" character, we shall now suggest another method, involving an additional scale. In the following investigation, we pick out, by way of example, one (physically motivated) possibility of regularizing the infrared modes, and study its consequences. Let us assume that Yang-Mills theory dynamically generates a scale in the infrared which can be reformulated in terms of an effective mass\\({}^{8}\\) \\(m_{\\rm eff}\\) for the transverse fluctuating gluons\\({}^{9}\\). Although this scale may in itself depend on some parameters, we shall consider it to be constant within the limits of our investigation.

Footnote 8: This mass should not be associated with a thermal gluon mass; the latter represents a collective excitation of the thermal plasma and is a typical feature of the high-temperature domain, being proportional to \\(T\\). By contrast, the effective mass considered here shall particularly affect the low-temperature modes and be approximately constant in \\(T\\).

Footnote 9: In this way, gauge invariance with respect to the background field is maintained.

When the effective mass term is added to the inverse transverse gluon propagator, e.g., in Eq. (13), it appears in a standard way in the proper-time representation of the effective action; for example, the integrand of the thermal one-loop contribution in Eq. (30) is multiplied by \\({\\rm e}^{-m_{\\rm eff}^{2}s}\\), which damps away the infrared singularities.
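In the next step, the damped proper-time integrals reduce to modified Bessel functions via the standard representation \\(\\int_{0}^{\\infty}x^{\\nu-1}{\\rm e}^{-\\beta/x-\\gamma x}\\,dx=2(\\beta/\\gamma)^{\\nu/2}K_{\\nu}(2\\sqrt{\\beta\\gamma})\\) [16]. A minimal numerical verification of this identity (our addition; it assumes SciPy, and the test values of \\(\\nu\\), \\(\\beta\\), \\(\\gamma\\) are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

def proper_time_integral(nu, beta, gamma):
    # Left-hand side: integral over the proper-time-like variable x.
    integrand = lambda x: x ** (nu - 1.0) * np.exp(-beta / x - gamma * x)
    value, _ = quad(integrand, 0.0, np.inf)
    return value

def bessel_form(nu, beta, gamma):
    # Right-hand side: modified Bessel function of the second kind.
    return 2.0 * (beta / gamma) ** (nu / 2.0) * kv(nu, 2.0 * np.sqrt(beta * gamma))

for nu in (-1.0, 0.0, 1.0, 2.0):
    print(nu, proper_time_integral(nu, 0.5, 2.0), bessel_form(nu, 0.5, 2.0))
```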
Upon the substitution \\(u=m_{\\rm eff}^{2}s\\), we obtain \\[{\\cal L}_{\\rm eff}^{1T} = -\\frac{m_{\\rm eff}^{4}}{4\\pi^{2}}{\\rm tr}_{\\rm c}\\int\\limits_{0}^{\\infty}\\frac{du}{u^{2}}\\,{\\rm e}^{-u}\\frac{e_{l}}{m_{\\rm eff}^{2}}\\left(\\sinh\\frac{e_{l}}{m_{\\rm eff}^{2}}u+\\frac{1}{2\\sinh\\frac{e_{l}}{m_{\\rm eff}^{2}}u}\\right)\\sum_{n=1}^{\\infty}\\exp\\left(-\\frac{n^{2}}{4}\\frac{m_{\\rm eff}^{2}}{T^{2}}\\frac{e_{l}}{m_{\\rm eff}^{2}}\\coth\\frac{e_{l}}{m_{\\rm eff}^{2}}u\\right)\\cos\\frac{a_{l}}{T}n. \\tag{47}\\]

Expanding for small \\(e_{l}/m_{\\rm eff}^{2}\\) in order to arrive at a consistent derivative expansion for \\(A_{0}\\), we find to order \\(e_{l}^{0}\\) and \\(e_{l}^{2}\\) \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{0} = -\\frac{1}{\\pi^{2}}\\,m_{\\rm eff}^{2}{\\rm tr}_{\\rm c}\\sum_{n=1}^{\\infty}\\frac{T^{2}}{n^{2}}\\,K_{2}(m_{\\rm eff}n/T)\\,\\cos\\frac{a_{l}}{T}n\\equiv V(c,n^{a},m_{\\rm eff}), \\tag{48}\\] \\[{\\cal L}_{\\rm eff}^{1T}\\big{|}_{e_{l}^{2}} = -\\frac{11}{24\\pi^{2}}\\,{\\rm tr}_{\\rm c}\\,e_{l}^{2}\\sum_{n=1}^{\\infty}K_{0}(m_{\\rm eff}n/T)\\cos\\frac{a_{l}}{T}n+\\frac{1}{24\\pi^{2}}\\,{\\rm tr}_{\\rm c}\\,e_{l}^{2}\\sum_{n=1}^{\\infty}\\left(\\frac{m_{\\rm eff}}{T}n\\right)K_{1}(m_{\\rm eff}n/T)\\cos\\frac{a_{l}}{T}n, \\tag{49}\\] where we have employed a representation of the modified Bessel function \\(K_{\\nu}(x)\\) [16]. Since we are interested in a possible formation of a confinement phase, let us study Eqs. (48) and (49) in the low-temperature limit \\(T\\ll m_{\\rm eff}\\). Then it is sufficient to use the asymptotic form of the Bessel functions for large argument, \\(K_{\\nu}(x)\\to\\sqrt{\\pi/(2x)}\\,{\\rm e}^{-x}\\). Confining ourselves to the simplest case SU(2), we can deduce the form of the potential from Eq. (48): \\[V(c,m_{\\rm eff})\\simeq-\\sqrt{\\frac{2}{\\pi^{3}}}T^{4}\\left(\\frac{m_{\\rm eff}}{T}\\right)^{3/2}{\\rm e}^{-m_{\\rm eff}/T}\\left(\\cos 2\\pi c+\\frac{1}{2}\\right),\\quad T\\ll m_{\\rm eff}. \\tag{50}\\] Again, we encounter a potential with a (deconfining) maximum at \\(c=1/2\\), so that the effective mass does not induce significant changes to the potential term. Including the contribution from the classical Lagrangian, the weight function can be deduced from Eq. (49) in the same limit: \\[W(c,m_{\\rm eff}) = T^{2}\\left[\\frac{2\\pi^{2}}{g^{2}}-\\frac{1}{3}\\sqrt{\\frac{\\pi}{2}}\\left(11\\sqrt{\\frac{T}{m_{\\rm eff}}}-\\sqrt{\\frac{m_{\\rm eff}}{T}}\\right){\\rm e}^{-m_{\\rm eff}/T}\\cos 2\\pi c\\right]. \\tag{51}\\] We first observe that, since the \\(m_{\\rm eff}\\)-dependent term is exponentially small for \\(T\\ll m_{\\rm eff}\\), a small coupling \\(g^{2}\\) will always ensure that \\(W(c,m_{\\rm eff})\\) is positive, so that Polyakov loop fluctuations are suppressed and the system is in the deconfined phase. Therefore, the model predicts that confinement requires a strong coupling. Indeed, if the coupling is (very) strong, we may neglect the first term in Eq. (51), and find that \\(W(c,m_{\\rm eff})\\) develops a minimum at \\(c=1/2\\) if\\({}^{10}\\) \\[T<T_{\\rm cr},\\quad\\frac{T_{\\rm cr}}{m_{\\rm eff}}=\\frac{1}{11},\\quad\\mbox{for $g^{2}\\gg 1$}. \\tag{52}\\]

Footnote 10: Taking the Bessel functions and the \\(n\\) sum more accurately into account, the actual value of \\(T_{\\rm cr}\\) changes slightly: \\(T_{\\rm cr}/m_{\\rm eff}\\simeq 2/21\\).
The situation can be rephrased as follows: if \\(T<T_{\\rm cr}\\), \\(c=1/2\\) is the absolute minimum of \\(W(c,m_{\\rm eff})\\). But \\(W(c=1/2,m_{\\rm eff})\\) only becomes negative (thereby allowing for a confinement phase) if the coupling is sufficiently large, so that the first term of Eq. (51) can be neglected. Therefore, our main conclusion of the present section is that a different treatment of the infrared modes changes the behavior of the model significantly! Although the present version of the model exhibits the desired features, it requires more input and thus is less meaningful: we need to specify the value of \\(m_{\\rm eff}\\) and the value of \\(g^{2}\\); the latter involves yet another scale, \\(\\mu\\). Let us end this section with the comment that the introduction of a mass-like infrared cutoff as employed in Eq. (47) can also be used as an alternative regularization scheme for the infrared modes. This means that, giving up the meaning of \\(m_{\\rm eff}\\) as a physical scale, but treating it as an arbitrary infrared cutoff scale for Eq. (47), we may remove it after the calculation by taking the limit \\(m_{\\rm eff}\\to 0\\) in Eqs. (48) and (49). We exactly recover Eq. (32) (for \\(d=4\\)), and, after analytical continuation, also Eq. (36) with the association \\(m_{\\rm eff}\\sim\\mu\\). The same procedure in \\(d>4\\) dimensions also leads to results identical with those in the preceding section. It is in this sense that the treatment of the infrared modes as performed in the preceding section can be rated "universal".

## 6 Conclusions

In the present work, we have established and analyzed a dynamical model for the order parameter of the deconfinement phase transition in Yang-Mills theories: the vacuum expectation value of the Polyakov loop operator. We have calculated the effective action for this order parameter to second order in a derivative expansion, and have treated the gluonic fluctuations in one-loop approximation. As a first conclusion, we observed that the "constant-Polyakov-loop" approximation, \\(A_{0}=\\)const., as considered in the literature, is in principle incapable of describing two different phases owing to the lack of an additional scale separating a high- and low-temperature phase in \\(d=4\\). This can also be inferred from the observation that the vacuum expectation value of the trace of the energy momentum tensor for a constant quasi-abelian \\(A_{\\mu}\\) background vanishes: \\[\\langle T^{\\mu}{}_{\\mu}\\rangle=\\beta_{g^{2}}\\,F_{\\mu\\nu}F_{\\mu\\nu}=0,\\quad\\mbox{for $A_{\\mu}^{a}=n^{a}A_{\\mu}=\\mbox{const.}$} \\tag{53}\\] Therefore, a vacuum model of this type must necessarily preserve scale invariance even at finite temperature, so that the theory must remain in a single phase. In the present model, scale-breaking is induced by fluctuations of the Polyakov loop which, in a particular choice of gauge, are associated with a nonvanishing electric field. The question of whether or not these fluctuations are energetically favored can in principle be answered by the dynamics of the model. It turns out that the deconfinement phase is the generic phase in the absence of fluctuations (this holds for all \\(d\\geq 4\\)). Whether spatial Polyakov loop fluctuations drive the model into a confining phase depends on the form of the weight function \\(W(c)\\) of the kinetic term.
In four spacetime dimensions, thermal infrared singularities complicate the investigation of the weight function and require additional specifications of how to deal with these singularities. Within a regularization-independent scheme that introduces no scale other than those already present, the \\(d=4\\) model does not reveal a confinement phase; instead, fluctuations of the Polyakov loop even favor a deconfining vacuum state. By contrast, when regularizing the infrared by a physical cutoff in the form of an effective gluon mass for the transverse modes, a phase transition into a confining phase for low temperature becomes visible in the strong-coupling regime. Whether one of these scenarios is realized in Yang-Mills theory cannot, of course, be answered within a perturbative approach like that employed in the present paper. Not only does the enormous extrapolation of a one-loop calculation into the strong-coupling sector present a major problem, but, with regard to the infrared singularities, (even nonperturbatively) integrating out the gluonic fluctuations in one fell swoop seems to be inappropriate. Instead, the integration over the fluctuations should be performed step by step in order to control a possible emergence of a dynamically generated mass scale. The one-loop model at least facilitates a concrete investigation of possible scenarios, and at most displays some features in a qualitatively correct manner. An appraisal of the different scenarios requires further arguments. The first scenario of Sec. 4.1, without an effective mass, can be preferred only from a theoretical viewpoint, owing to its simplicity and universality. Though the second scenario of Sec. 5 needs more input, the appearance of an additional infrared mass scale is common to almost all conjectured confining low-energy effective theories of Yang-Mills theory; therefore, a phenomenological viewpoint supports this scenario from the beginning, and so does the final result. Nevertheless, a reliable investigation of the infrared requires nonperturbative methods. Let us finally comment on the differences between our results and those of Ref. [10], which inspired the model considered in the present work; although the representations of the effective action in the form of Eq. (6) are congruent, the meaning of the results is quite different: in [10], the fluctuations of the \\(A_{0}\\) field have not been taken into account, implying that the resulting "effective action" remains a complete quantum theory of the \\(A_{0}\\) field. The \\(A_{0}\\) ground state is then _approximated_ by the effective potential which is obtained by transforming the kinetic term to standard canonical form. By contrast, we integrated over _all_ quantum fluctuations of the \\(A_{\\mu}\\) field in the present work; therefore, the resulting effective action is the generating functional of the 1PI diagrams and governs the dynamics of the background fields in the sense of classical field theory. To conclude, it is not astonishing that the explicit results of [10], in particular for the weight function \\(W(c)\\), do not agree with ours, because they have a different origin and a different meaning.\\({}^{11}\\)

Footnote 11: From [10], it is in principle possible to arrive at our results (and get rid of the renormalization problems) by integrating over the \\(A_{0}\\) fluctuations; however, for a consistent treatment, the second weight function of the \\({\\cal O}((\\partial c)^{4})\\) term has to be known before integrating over the \\(A_{0}\\) fluctuations.
In spacetimes with more than four dimensions, the situation simplifies considerably: on the one hand, the model is infrared finite, thereby producing unambiguous results; on the other hand, there already exists another dimensionful scale given by the coupling constant. We discovered a phase transition from the (generic) deconfining to a confining phase for _increasing_ temperature for \\(d>d_{\\rm cr}\\simeq 7.42\\). This is consistent with the fact that the dimensionless coupling constant grows for increasing energies, reaching a UV-stable fixed point. Beyond perturbation theory, the latter statement has also been confirmed in the nonperturbative framework of exact renormalization group flow equations [23].

## Acknowledgments

The author wishes to thank W. Dittrich for helpful conversations and for carefully reading the manuscript. Furthermore, the author profited from insights provided by M. Engelhardt, whose useful comments on the manuscript are also gratefully acknowledged.

## References

* [1] G.K. Savvidy, Phys. Lett. B **71**, 133 (1977); S.G. Matinyan and G.K. Savvidy, Nucl. Phys. B **134**, 539 (1978).
* [2] A.M. Polyakov, Phys. Lett. B **72**, 477 (1978).
* [3] L. Susskind, Phys. Rev. D **20**, 2610 (1979).
* [4] B. Svetitsky, Phys. Rep. **132**, 1 (1986).
* [5] J. Polonyi and K. Szlachanyi, Phys. Lett. B **110**, 395 (1982); M. Mathur, preprint hep-lat/9501036 (1995).
* [6] N. Weiss, Phys. Rev. D **24**, 475 (1981).
* [7] A.O. Starinets, A.S. Vshivtsev and V.Ch. Zhukovskii, Phys. Lett. B **322**, 403 (1994).
* [8] P.N. Meisinger and M.C. Ogilvie, Phys. Lett. B **407**, 297 (1997).
* [9] N.K. Nielsen and P. Olesen, Nucl. Phys. B **144**, 376 (1978); Phys. Lett. B **79**, 304 (1978).
* [10] M. Engelhardt and H. Reinhardt, Phys. Lett. B **430**, 161 (1998).
* [11] L.F. Abbott, Nucl. Phys. B **185**, 189 (1981); W. Dittrich and M. Reuter, _Selected Topics in Gauge Theories_, Lecture Notes in Physics **244**, Springer-Verlag, Berlin (1986).
* [12] H. Gies, Phys. Rev. D **60**, 105002 (1999).
* [13] T.H. Hansson and I. Zahed, Nucl. Phys. B **292**, 725 (1987).
* [14] W. Dittrich and H. Gies, _Probing the Quantum Vacuum_, Springer Tracts in Modern Physics, Vol. 166, Springer, Heidelberg (2000).
* [15] E.S. Fradkin and A.A. Tseytlin, Nucl. Phys. B **227**, 252 (1983); Phys. Lett. B **123**, 231 (1983); R.R. Metsaev and A.A. Tseytlin, Nucl. Phys. B **298**, 109 (1988).
* [16] I.S. Gradshteyn and I.M. Ryzhik, _Tables of Integrals, Series and Products_, Academic Press (1965).
* [17] M. Le Bellac, _Thermal Field Theory_, Cambridge University Press (1996).
* [18] The polylogarithmic functions can be treated numerically, and in part algebraically, with Mathematica, Version 4.0.1.0, Wolfram Research, Champaign (1999).
* [19] P. Elmfors and B.-S. Skagerstam, Phys. Lett. B **427**, 197 (1998).
* [20] S.L. Adler and T. Piran, Phys. Lett. B **113**, 405 (1982); **117**, 91 (1982).
* [21] M. Teper, preprint hep-th/9812187 (1998).
* [22] A. Actor, Phys. Rev. D **27**, 2548 (1983).
* [23] M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994).
# Remote sensing of bubble clouds in seawater

Piotr J. Flatau, Maria Flatau, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, California 92093-0221. e-mail [email protected]

J. R. V. Zaneveld, Oregon State University, USA

Curtis D. Mobley, Sequoia Scientific, Inc., USA

In press: 2000, Quarterly Journal of the Royal Meteorological Society

## 1 Introduction

"The effect of bubbles on the color of the sea may be observed in breaking waves. Where a great many bubbles have been entrained by a breaking wave it is white. But where there are fewer of them it is blue-green or green, brighter than the sea but not as bright as the foamiest parts of the wave. Even after a wave has broken and the water is again quiescent, a pale green patch often remains, slowly fading into the surrounding sea as the bubbles dissipate. Thus the effect of bubbles on the color of the sea is similar to that of solid particles (Bohren, 1987)."

Bubbles within the water and foam on its surface (Bukata, 1995; Frouin et al., 1996; Stramski, 1994) can predominate in determining the radiative transfer properties of the sea surface at higher wind speeds. However, there is limited knowledge about the radiative transfer properties of bubble clouds, their inherent optical properties (IOP), and their global climatology. Mobley (1994) and Bukata (1995) discuss qualitatively the surface properties of bubble clouds. Frouin et al. (1996) performed spectral reflectance measurements of sea foam at the Scripps Institution of Oceanography Pier. They observed a decrease of the foam reflectance in the near-infrared and proposed that the foam reflectance cannot be decoupled from the reflectance by bubbles. Stramski (1994) concentrates on light scattering by submerged bubbles in quiescent seas and shows the scattering coefficient and the backscattering coefficient at 550 nm in comparison with the scattering and backscattering coefficients of sea water as estimated from the chlorophyll-based bio-optical models for Case 1 waters. In this exploratory paper, we report on the influence of bubble clouds generated by breaking waves on the remote sensing reflectance and calculate not only the inherent optical properties but also the apparent optical properties, using a radiative transfer model. We show that the optical effects of bubbles on remote sensing of the ocean color are significant. Furthermore, we present a global map of the volume fraction of air in water. This map, together with the parameterization of the microphysical properties, shows the significance of bubble clouds for the global albedo of incoming solar energy. By proxy, we show the influence of the bubble clouds on the remote sensing retrieval of organic and inorganic components of natural waters. It is worth mentioning that the bubble clouds coincide with the upper range of the euphotic zone and will, therefore, contribute to the dynamics of the upper-ocean boundary layer, heat distribution, and sea surface temperature (Thorpe et al., 1992). In fact, our initial motivation for this work was an observation that the asymptotic radiance distribution is established close to the ocean surface, in apparent contradiction with theoretical studies (Flatau et al., 1999). Thus, the light field must become diffuse at shallower depths than usually modeled. This leads to a search for alternative mechanisms influencing the light distribution. Thus, the importance of bubble clouds to light scattering goes beyond the remote sensing issues considered in this work.
In the next section, we discuss in more detail the microphysical and morphological properties of bubble clouds, because they have a direct bearing on their optical properties and radiative transfer.

## 2 Physical properties of bubble clouds

### Morphology of bubble clouds in natural waters

Individual bubble clouds are generated by breaking waves, persist for several minutes (Thorpe, 1995), and reach to mean depths of about \\(4H_{s}\\), where \\(H_{s}\\) is the significant wave height, but with some clouds extending to about \\(6H_{s}\\). There is evidence that at high wind speeds, separate bubble clouds near the surface coalesce, producing a stratus layer (Thorpe, 1995). Fig. 1 is based on a sonograph of Thorpe (1984). The "bubble-stratocumulus" (b-Sc) is often observed by acoustic means (Farmer and Lemon, 1984; Thorpe, 1995). The depth of the b-Sc layer is related to the wind speed and wind variability, but more specifically it is set by larger waves, such as those breaking predominantly in groups (Thorpe, 1995). The "stratus layer" description should not be taken too literally. For sufficiently high winds there will be significant concentrations throughout the upper layer, but the variability within this layer can be very high.

### Bubble cloud climatology

According to Thorpe et al. (1992), in the absence of precipitation and in wind speeds exceeding about 3 m s\\({}^{-1}\\), wave breaking generally provides the dominant source of bubbles. The wind speed \\(W_{10}\\) at 10 m above the mean sea surface level is used to parameterize the volume fraction of air in water, \\(f=V_{\\rm air}/V_{w}\\), where \\(V_{\\rm air}\\) is the volume of air, and \\(V_{w}\\) is the volume of water. This parameterization is an approximation of a more complex wind-wave relationship. Figure 2 shows the volume fraction of air in water estimated from the \\(W_{10}\\) winds for January of 1992. The volume fraction and the wind speed are assumed to follow a non-linear relationship (Walsh and Mulhearn, 1987) \\[f=f_{0}W_{10}^{4.4}+f_{1} \\tag{1}\\] The coefficients \\(f_{0}\\) and \\(f_{1}\\) were calculated at 2 m depth assuming that \\(f=10^{-8}\\) for \\(W_{10}=6.2{\\rm m\\ s^{-1}}\\) and \\(f=10^{-7}\\) for \\(W_{10}=10.5{\\rm m\\ s^{-1}}\\) (Walsh and Mulhearn, 1987). Vagle and Farmer (1992) show that the volume fraction decreases with depth, changing from about \\(10^{-6}\\) at 0.3 m to \\(10^{-7}\\) at 2.7 m. We base our parameterization on these findings and extrapolate (1) to near-sea-surface depth using an exponential fit. The monthly averages of \\(f\\) were obtained by employing 1992 daily surface winds from the NCEP/NCAR (Kalnay et al., 1996) reanalysis project, and averaging the daily volume fractions for each month. Thus, Figure 2 is based on the variability of the wind field on the scale of one day. In the northern hemisphere winter, one can observe maxima associated with the midlatitude storm tracks in the Northern Pacific. Cyclogenesis, common in western parts of the oceans during the winter, contributes to mixing and a large bubble cloud volume fraction. This can be observed to the east of the North American continent. The Intertropical Convergence Zone (ITCZ) region, with its associated deep convection, may also be a region of enhanced production of bubbles. The winds of the Southern Ocean have a strong effect on bubble formation during both summer and winter.
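The coefficients \\(f_{0}\\) and \\(f_{1}\\) are not quoted explicitly in the text; a minimal Python sketch (our addition; the function name and the sample wind speeds are our own choices) recovers them from the two calibration points above and evaluates Eq. (1):

```python
# Calibrate Eq. (1), f = f0 * W10**4.4 + f1, from the two points quoted
# in the text (at ~2 m depth; Walsh and Mulhearn, 1987):
#   f(6.2 m/s) = 1e-8  and  f(10.5 m/s) = 1e-7.
W_a, f_a = 6.2, 1.0e-8
W_b, f_b = 10.5, 1.0e-7

f0 = (f_b - f_a) / (W_b ** 4.4 - W_a ** 4.4)
f1 = f_a - f0 * W_a ** 4.4
print(f"f0 = {f0:.3e}, f1 = {f1:.3e}")  # roughly 3e-12 and 2e-10

def volume_fraction(W10):
    """Air volume fraction f = V_air/V_w for a 10-m wind speed in m/s."""
    return f0 * W10 ** 4.4 + f1

for W in (5.0, 10.0, 15.0):
    print(W, volume_fraction(W))
```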
In subtropical regions, to the west of the continents, the subsidence associated with the descending branch of the Hadley circulation is responsible for the relative minimum in \\(f\\). It should be stressed that these results are qualitative and that they can be improved by more detailed breaking wave climatology models (Kraus and Businger, 1994). The data in Fig. 2 are indicative of regions where bubble clouds are potentially important in the interpretation of remotely sensed reflectance.

Figure 1: Bubble-stratocumulus (b-Sc) layer generated by breaking waves with inhomogeneous deeper intrusions. The depth of the cloud is related to the maximum significant wave height.

Figure 2: January 1992, monthly averaged volume fraction. Volume fraction \\(\\times 10^{6}\\).

### Optical thickness

The size distribution of bubble clouds determines the optical thickness and is, therefore, one of the most critical parameters entering the theory. There are two assumptions which simplify the development here: (a) we consider light scattering in the geometric optics regime, for which the size parameter \\(x=2\\pi r/\\lambda\\), proportional to the ratio of bubble radius to wavelength, is large, and (b) we assume that there is an effective radius \\(r_{\\rm eff}\\) which determines the optical properties of the size distribution. Both (a) and (b) are quite probable. The size parameter \\(x=50\\), corresponding to a bubble radius of approximately 5 \\(\\mu{\\rm m}\\), is already in the geometric optics regime, and \\(r_{\\rm eff}=10\\mu{\\rm m}\\) will satisfy (a). For bubbles with an effective radius of \\(r_{\\rm eff}\\), the volume attenuation (equal to scattering for a non-absorbing sphere) can be expressed as \\[b=Q_{\\rm sca}\\frac{N}{V_{w}}s=2\\frac{V_{\\rm air}}{V_{w}}\\frac{N}{V_{\\rm air}}s=2f\\frac{s}{v}. \\tag{2}\\] Thus \\[b=2\\frac{f}{r_{\\rm eff}} \\tag{3}\\] where \\(s=\\pi r^{2}\\) is the cross-section of a bubble with radius \\(r\\), \\(v\\) is the volume of such a bubble, \\(N\\) is the number of bubbles in volume \\(V_{w}\\) of water, and \\(f\\) is the fraction of air in a volume of water. The effective radius is defined as \\(r_{\\rm eff}=v/s\\) (Stephens et al., 1990; King et al., 1993; Bricaud and Morel, 1986). The scattering efficiency \\(Q_{\\rm sca}\\) defines how much of the incoming light is "blocked" by a particle through scattering processes; for large size parameters, \\(Q_{\\rm sca}\\) tends to 2. This issue is discussed in detail by Bohren and Huffman (1983). The physical significance of \\(f\\) comes from the fact that it is determined by the large-scale forcing, such as the wind field. Thus, for a given synoptic or climatological setting, the mixing ratio \\(f\\) is, to some extent, pre-determined. On the other hand, the effective radius \\(r_{\\rm eff}\\) depends on processes of much smaller scale than the large scale. These processes are coagulation, coalescence, coating by organic material, saturation, buoyancy, pressure, etc. Thus, Eq. 3 defines the optical properties of bubble clouds on both the large and small scales. The expression \\(b=2f/r_{\\rm eff}\\) holds for polydispersions, and the only difference with a monodispersion is that \\(r_{\\rm eff}\\) is defined via the distribution-averaged \\(s\\) and \\(v\\).

### Effective radius of the size distribution

Numerous observations of bubble size distributions are reported in the literature based on acoustic, photographic, optical, and holographic methods (Wu, 1988a). Akulichev and Bulanov (1987) summarize results from 22 experiments using different techniques.
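As a quick plausibility check of Eq. (3) before surveying the observed spectra, the following minimal sketch (our addition) uses the near-surface volume fraction \\(f=10^{-6}\\) and the effective radius \\(r_{\\rm eff}=10\\mu\\)m adopted later in the text:

```python
# Scattering coefficient of a bubble cloud from Eq. (3): b = 2 f / r_eff.
f = 1.0e-6            # near-surface air volume fraction (dimensionless)
r_eff = 10.0e-6       # effective radius in meters
b = 2.0 * f / r_eff   # scattering coefficient in 1/m
print(f"b = {b:.2f} 1/m")        # 0.20 1/m

# A crude upper bound on the optical thickness of an 8 m deep bubble
# layer, tau = b*h, neglecting the decrease of f with depth:
h = 8.0
print(f"tau <= {b * h:.1f}")     # <= 1.6
```

A value of \\(b\\approx 0.2\\) m\\({}^{-1}\\) is comparable to the particulate scattering coefficient implied by Eq. (8) below for \\(C\\approx 0.8\\,{\\rm mg~{}m^{-3}}\\), which is why the bubble contribution to the reflectance is non-negligible.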
As the origins of bubbles are biological (within the volume and at the bottom), as well as physical (at the rough surface), we may expect large regional and temporal variations of bubble concentration between coastal and open oceanic waters and between plankton bloom and no-bloom conditions (Thorpe et al., 1992). Figure 3 presents a comparison of bubble spectra under breaking waves and a quiescent sea. Recently published observations, using laser holography near the ocean surface, have shown that the densities of 10 to 15 \\(\\mu\\)m radius bubbles can be as high as \\(10^{6}\\) (per cubic meter per micron radius increment) within 3 m of the surface of quiescent seas (O'Hern et al., 1988). These results are plotted as solid squares connected with a vertical solid line. The majority of bubbles injected into the surface layers of natural waters are unstable, either dissolving due to enhanced surface tension and hydrostatic pressures or rising to the air-water interface where the bubbles break (Johnson and Wangersky, 1987). However, bubbles with long residence times, i.e. stable microbubbles, have been observed. For example, Medwin (1977) observed nearly \\(2.5\\times 10^{6}\\) bubbles per cubic meter in the radius range \\(18-355\\)\\(\\mu\\)m for small wind speeds. One of the stabilization mechanisms (Mulhearn, 1981; Johnson and Wangersky, 1987) assumes that the surfactant material is a natural degradation product of chlorophyll, present in almost all photosynthesizing algae. Isao et al. (1990) have observed very large populations of neutrally buoyant particles with radii between \\(0.1-1\\mu\\)m. Johnson and Wangersky (1987) and Thorpe et al. (1992) proposed another stabilization mechanism based on monolayers of adsorbed particles. Numerical modeling (Thorpe et al., 1992) can be used to study the effects of water temperature, dissolved gas saturation levels, and particulate concentrations on the size distribution of subsurface bubbles. The results of such numerical models provide additional evidence for the existence of a small-size bubble fraction which is not adequately measured by acoustic or photographic techniques. The dashed line in Fig. 3 presents the mean concentration in the model steady state for a water temperature of 0°C. It can be seen that the maximum concentration is around \\(15\\mu\\)m and is 2.5 orders of magnitude larger than for 100 \\(\\mu\\)m bubbles. Other results presented (Figure 3) are the observations of Johnson and Cooke (1979) at 4 m in wind speeds of 11-13 \\(\\rm m~{}s^{-1}\\) (open squares). In situ acoustic measurements of microbubbles at sea by Medwin (1977) are plotted as solid triangles. The solid triangles joined by a solid line are concentrations at 4 m depth and 3.3 \\(\\rm m~{}s^{-1}\\) wind speed. These spectra were obtained on August 7, 1975 in Monterey Bay. The solid triangles are for midafternoon, February 10-16, 1965 at Mission Bay, San Diego, 3 m below the surface, and in 1.7-2.8 \\(\\rm m~{}s^{-1}\\) winds. The open triangles are from Baldy (1988) and include data at 30 cm depth with wind and swell and at 25 cm with wind only. They are based on extensive laboratory experiments. The solid hexagons are data from Medwin and Breitz (1989) acoustic measurements obtained in the open sea at 25 cm depth under the water surface during 12 \\(\\rm m~{}s^{-1}\\) winds under spilling breakers.
Figure 3: Comparison of bubble spectra under breaking waves and quiescent sea. The solid squares connected with the vertical solid line are from laser holographic data (O'Hern et al., 1988). Open triangles are from Baldy (1988) and include data at 30 cm depth with wind and swell, and at 25 cm with wind only. The solid hexagons are data based on Medwin and Breitz (1989) obtained in the open sea at 25 cm depth under the water surface during winds of 12 \\(\\rm m~{}s^{-1}\\). The solid triangles joined by the solid line and solid triangles are average bubble densities measured under comparable conditions at different seasons. The solid triangles joined by solid lines are concentrations at 4 m depth, 3.3 \\(\\rm m~{}s^{-1}\\) wind speed, obtained on August 7, 1975 in Monterey Bay. The solid triangles are for midafternoon, February 10-16, 1965 at Mission Bay, San Diego, 3 m below the surface, 1.7-2.8 \\(\\rm m~{}s^{-1}\\) winds. The dashed line is from a numerical model (Thorpe et al., 1992); the mean concentrations in the steady-state model are plotted for a temperature of 0°C. The open squares show the observations of Johnson and Cooke (1979) at 4 m in wind speeds of 11-13 \\(\\rm m~{}s^{-1}\\).

The literature reviewed here and encapsulated in Figure 3 is a mixture of descriptions of the effect of active wave breaking, and of stabilized microbubbles observed largely in coastal situations. Currently, it is not clear how to parameterize the stabilized bubbles. From results such as those presented in Fig. 3 in the case of "transient," open ocean bubbles, it can be estimated that the size distribution follows the power-law dependence \\(n(r)\\propto r^{-a}\\) with \\(a\\approx 4\\) (Walsh and Mulhearn, 1987; Wu, 1988a). Even though small microbubbles may not contribute to the total mass, they may be important for light scattering. Therefore, it is of interest to estimate the contribution of small bubbles to the optical thickness. Assuming that microbubbles are spherical (\\(V_{\\rm air}=4/3\\pi r^{3}N\\)), we can show that the optical thickness (\\(\\tau=bh\\)) of a layer with geometrical thickness \\(h\\) is \\[\\tau\\propto hV_{\\rm air}^{2/3}N^{1/3} \\tag{4}\\] Both \\(h\\) and \\(V_{\\rm air}\\) are assumed to be fixed. It can be seen from Eq. (4) that the contribution to the optical thickness by very small particles will be the same as that by very large particles if their number concentration increases as \\(r^{-3}\\) (or steeper). This is indeed the case (Walsh and Mulhearn, 1987; Wu, 1988a). What remains to be defined is the effective radius \\(r_{\\rm eff}\\). On the basis of measurements, estimates of the small-particle fraction, the existence of background 1 \\(\\mu\\)m microbubbles, modeling predictions, and the steep slope of microbubble size distributions, we decided to use \\(r_{\\rm eff}=10\\mu{\\rm m}\\) as a typical "radiative response radius." This choice does not exclude the existence of larger or smaller particles. The real value may be between \\(1\\mu\\)m for stabilized particles and \\(50\\mu\\)m for the open ocean and will depend on many environmental factors such as storm passage, wind speed, swell, wind variability, phytoplankton concentration, water temperature, gas saturation, and other properties. A 10-15-fold increase in the effective radius, or a similar decrease in the air volume fraction, would reduce the importance of air bubbles to a very small effect. It may be instructive to calculate the scattering coefficient \\(b\\) for a typical size distribution of bubbles in water.
In that case we have \\[b=\\int Q_{\\rm sca}\\pi r^{2}\\frac{dN(r)}{V_{w}} \\tag{5}\\] or \\(b=2fs/v\\), where \\(s=\\int\\pi r^{2}dN(r)\\) and \\(v=\\int 4/3\\pi r^{3}dN(r)\\), and \\(dN(r)\\) is the number of bubbles between \\(r\\) and \\(r+dr\\) in a volume of water \\(V_{w}\\). For a typical size distribution of bubbles in water, \\(dN(r)/dr\\propto 1/r^{4}\\), we have \\[r_{\\rm eff}=\\frac{4}{3}\\ln(r_{1}/r_{0})/\\left(\\frac{1}{r_{0}}-\\frac{1}{r_{1}}\\right) \\tag{6}\\] Consider \\(r_{1}=150\\) and \\(r_{0}=10\\) micrometers. This gives \\(r_{\\rm eff}\\sim 3.6r_{0}\\), which shows that the choice of the small-bubble cut-off is important for the bubbles' optical properties. However, the choice of this cutoff is non-trivial because the spectrum of small bubbles is not understood well at present. We close this section with a general comment about the effective radius. It is not a directly measurable quantity and, in essence, it defines how dispersed the given amount of mass is. Scattering of incoming solar radiation is sensitive to the total projected surface rather than to the total mass. For this reason the effective radius is commonly used in radiation calculations. However, it should be stressed that the effective radius is a semi-inherent optical property, because it carries information not only about the size itself, but also about the orientation of particles, their morphology, coating, size distribution, or departure from a spherical shape. In addition, estimates of the effective radius, as used in satellite remote sensing, often contain bias due to unrealistic assumptions about other optical properties such as optical thickness, leakage of photons due to horizontal transfer, wavelengths, or the technique employed in retrieval. In that sense the effective radius is also used (or abused) as a semi-apparent optical property.

### Numerical model

The numerical radiative transfer model used in this study is a slightly modified version of the Hydrolight 3.0 code (Mobley, 1994; Mobley et al., 1994). In brief, this model computes from first principles the radiance distribution within, and leaving, any plane-parallel water body. Input to the model consists of the absorbing and scattering properties of the water body, the nature of the wind-blown sea surface and of the bottom of the water column, and the sun and sky radiance incident on the sea surface. Pure sea water absorption and scattering coefficients are determined from the data of Pope and Fry (1997). Thirty-five model wavebands were specified to cover the 400-700 nm region with a typical resolution of 10 nm. The water column was specified as infinitely deep. Up to 62 depth layers, extending to 50 meters, were specified with a resolution of 0.5 m close to the surface. A clear sky was assumed, but the diffuse sky radiance was included. Three- or four-component systems were considered, consisting of pure water, particulates with or without bubble clouds, and dissolved organic matter. The spectral absorption of dissolved organic matter was defined as \\[a(\\lambda)=a(\\lambda_{0})\\exp[-0.014(\\lambda-\\lambda_{0})] \\tag{7}\\] where \\(a(\\lambda_{0})=0.1\\) m\\({}^{-1}\\), \\(\\lambda_{0}=440\\) nm (Bricaud et al., 1981). The phase function of phytoplankton was defined as an average of Petzold's clear ocean, coastal ocean, and turbid harbor cases (Mobley, 1994; Tyler, 1977).
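Returning briefly to the effective-radius algebra above, Eq. (6) can be cross-checked numerically; this sketch is our addition, with the truncation radii taken from the example in the text:

```python
import math
from scipy.integrate import quad

# Effective radius r_eff = v/s for a bubble spectrum dN/dr ~ r^-4
# truncated to [r0, r1]; radii in micrometers (cf. Eq. (6)).
r0, r1 = 10.0, 150.0

s, _ = quad(lambda r: math.pi * r ** 2 * r ** -4, r0, r1)                # total cross-section
v, _ = quad(lambda r: (4.0 / 3.0) * math.pi * r ** 3 * r ** -4, r0, r1)  # total volume

r_eff_numeric = v / s
r_eff_closed = (4.0 / 3.0) * math.log(r1 / r0) / (1.0 / r0 - 1.0 / r1)   # Eq. (6)
print(r_eff_numeric, r_eff_closed)  # both ~38.7, i.e. ~3.9 * r0

# The ~3.6*r0 quoted in the text corresponds to the further
# approximation 1/r0 - 1/r1 ~ 1/r0, giving (4/3)*ln(15)*r0 ~ 3.61*r0.
```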
The bubble cloud phase function was calculated for the real relative refractive index \\(m=0.75\\) as an average over the size parameter range \\(x=100-300\\) with a resolution \\(\\Delta x=1\\), using the code of Wiscombe (Fiedler-Ferrari et al., 1991). Figure 4 compares the Petzold and bubble phase functions.

Figure 4: Unpolarized phase function for a uniform distribution of bubbles between size parameter 100 and 300 and refractive index \\(m=3/4\\) as a function of scattering angle \\(\\theta\\). Also shown is Petzold's phase function. The bubble phase function has a peak at 70 degrees.

The main reason for the size distribution average was to remove transient spikes. This normalized phase function is scaled by \\(b\\) calculated from Eq. (3). The volume scattering and absorption coefficients for particulates were determined from the Gordon-Morel model (Mobley, 1994; Gordon and Morel, 1983; Morel, 1988) \\[b_{p}=\\frac{550}{\\lambda}0.3C^{0.62} \\tag{8}\\] \\[a^{\\rm Case1}(\\lambda)=[a_{w}(\\lambda)+0.06a_{c}^{\\star}(\\lambda)C^{0.65}][1+0.2\\exp(-0.014(\\lambda-440))] \\tag{9}\\] Here \\(a_{w}(\\lambda)\\) is the absorption coefficient of pure water, \\(a_{c}^{\\star}\\) is the chlorophyll-specific absorption coefficient, and \\(C\\) is the chlorophyll concentration in \\({\\rm mg~{}m^{-3}}\\) (Mobley, 1994). The chlorophyll concentration was set as constant (well-mixed) with depth and equal to 0.8 \\({\\rm mg~{}m^{-3}}\\) or 0.08 \\({\\rm mg~{}m^{-3}}\\). We used the Case 1 water parameterization but assumed that not all dissolved organic material is correlated with the chlorophyll concentration. In apparent contradiction, the wind speed defining the surface reflectance and transmittance functions of the wind-blown water surface was set to 0; the reason was to isolate the effect of the sub-surface bubble clouds on the remote sensing properties. However, we investigated the sensitivity of the reflectance to changes in the wind speed between 0 and 10 \\({\\rm ms^{-1}}\\), and the effect was small compared to the influence of bubbles. The azimuth direction was divided into 24 equally spaced sectors, and the zenith-nadir range was divided into 20 equally spaced sectors. The profile of the bubble cloud volume fraction was determined by the expression \\[f(z)=f_{0}(f_{1}/f_{0})^{p} \\tag{10}\\]
Two cases were computed (but only one is presented), representative of clear coastal (\\(C=0.8\\)) and oceanic (\\(C=0.08\\)) water. The remote sensed reflectance is defined as \\[R_{\\rm rs}(\\lambda)=\\frac{L_{w}(\\lambda)}{E_{d}(\\lambda)} \\tag{11}\\] where \\(E_{d}\\) is the downwelling irradiance onto the sea surface, and \\(L_{w}(\\lambda)\\) is the upwelling water-leaving radiance. The remote-sensing radiance is a measure (Mobley, 1994) of how much of the downwelling light that is incident onto the water surface is returned into the zenith direction. The remote sensing reflectances are plotted in Fig. 5. The physical importance of the remote-sensing reflectance is evident from asymptotic theories which relate \\(R_{\\rm rs}\\) to inherent optical properties (Zaneveld, 1995) \\[R_{\\rm rs}\\propto\\frac{\\beta(\\pi-\\theta_{m})}{a} \\tag{12}\\] where \\(\\beta\\) is the phase function, \\(\\theta_{m}\\) is related to the sun zenith angle, and \\(a\\) is the absorption coefficient. Thus, \\(R_{\\rm rs}\\) is approximately proportional to the probability of back- or side-scattering, and inversly proportional to the absorption of water column. Figure 5 shows the remote-sensing reflectance for the 3- and 4-component system with and without ocean bubbles but with constant pigment amount. The total single scattering albedo is strongly influenced by scattering from air bubbles. This leads to enhanced reflectance at all wavelengths. The results for \\(C=0.08\\) (not presented) show even larger sensitivity. In Fig 5 the gray rectangles indicate bands (wavelengths) which are used by the current ocean color satellite instrument (SeaWiFS). It is of interest to comment on the remote sensing of pigments and bubble cloud retrievals. Consider an algorithm based on the ratio of remote-sensing reflectance and define the ratio of remote-sensing reflectances without (Chl) and with (Chl+b) microbubbles as \\[{\\rm ratio}(\\lambda)=R_{\\rm rs}^{\\rm Chl}/R_{\\rm rs}^{\\rm Chl+b}. \\tag{13}\\] Figure 6 shows \\({\\rm ratio}(\\lambda)\\). Performance of the pigment algorithms based on the ratio of reflectances will depend on \\({\\rm ratio}(\\lambda_{1})/{\\rm ratio}(\\lambda_{2})\\). Submerged microbubble clouds seem to be wavelength-selective and even the ratio algorithms may require slight systematic correction. Given the increased sensitivity of the current generation of ocean color instruments, the absolute value of the radiances at the top of the atmosphere can be used for pigment retrievals. ## 4 Summary Our calculations indicate that the optical effects of submerged microbubbles on the remote sensing reflectance of the ocean color are significant. These results are of importance for the retrievals of pigments from the ocean color measurements and for studies of the energetics of the ocean mixed layer. We provide information on how to reduce the systematic error due to microbubbles in pigment retrieval schemes via the \\(\\mathrm{ratio}(\\lambda)\\). We also derive apparent optical property of bubbles - remote sensing reflectance - for the whole solar spectrum. This AOP is directly observable by the satellites and remote sensors. We expect that these and similar AOPs will have to be invoked in the case of hyperspectral retrievals for Case 2 waters where the signals from minerals, bubbles, chlorophyll, and dissolved organic material (CDOM) are not well correlated. New algorithms for current satellite instruments such as MODIS and SeaWIFS should employ this information. 
## 4 Summary

Our calculations indicate that the optical effects of submerged microbubbles on the remote sensing reflectance of the ocean are significant. These results are of importance for the retrieval of pigments from ocean color measurements and for studies of the energetics of the ocean mixed layer. We provide information on how to reduce the systematic error due to microbubbles in pigment retrieval schemes via \({\rm ratio}(\lambda)\). We also derive an apparent optical property of bubbles, the remote sensing reflectance, for the whole solar spectrum. This AOP is directly observable by satellites and remote sensors. We expect that these and similar AOPs will have to be invoked in the case of hyperspectral retrievals for Case 2 waters, where the signals from minerals, bubbles, chlorophyll, and dissolved organic material (CDOM) are not well correlated. New algorithms for current satellite instruments such as MODIS and SeaWiFS should employ this information. We also present a global map of the volume fraction of air in water derived from daily wind speed data. We expect that such a map can be improved by knowing the day-to-day variability of the wind-wave relationship and by better estimates of the volume fraction.

The paper is exploratory. Therefore, it is perhaps worth playing _advocatus diaboli_ and speculating about why bubble clouds may not be important, at least in current satellite ocean color retrieval practice. Here are some reasons: (1) High wind and clouds are correlated; this masks (biases) the effect of bubbles as observed from satellites. (2) Both whitecaps (Wu, 1988b; Gordon and Wang, 1994; Frouin et al., 1996) and bubble clouds are correlated via their dependence on wind speed. Therefore, our results, as well as the hypothesis of Frouin et al. (1996), indicate that the reflectance of foam has to be considered together with the reflectance due to bubble clouds. On the other hand, there are cases in which strong wind is not correlated with clouds; for example, the cross-equatorial flow during the summer monsoon in the southern Indian Ocean is strong, while the ITCZ is positioned in the northern hemisphere. It is interesting to note that stabilized, coated microbubbles are hypothesized to be correlated with phytoplankton and CDOM concentrations; a parameterization of this process is needed. The optical properties of the first several meters below the surface are difficult to measure and are often removed from data due to experimental problems such as ship shadow or wave activity. This is the region where more detailed studies are needed.

Figure 5: The solid line is the remote-sensing reflectance for the 3-component system composed of water, DOM, and particulates (no microbubbles); the dashed line is for the 4-component system (microbubbles included). The same chlorophyll concentration (0.8 \(\rm mg~m^{-3}\)) is used in both cases. Effective radius \(r_{\rm eff}=10\,\mu\)m. Hydrolight run with 62 layers, maximum depth 50 m, maximum bubble depth 8 m, 35 wavelengths between 400 and 700 nm, sun zenith angle \(50^{\circ}\). The grey rectangles indicate SeaWiFS (ocean color satellite) wavelengths.

Figure 6: The ratio of remote-sensing reflectances \({\rm ratio}(\lambda)=R_{\rm rs}^{\rm Chl}/R_{\rm rs}^{\rm Chl+b}\). Same experiment as in Fig. 5.

## Acknowledgements

P. J. Flatau was supported in part by the Office of Naval Research Young Investigator Program and the NASA SIMBIOS program. M. Flatau acknowledges a NOAA/UCAR Global Climate Change Fellowship, and J. R. V. Zaneveld acknowledges support of the Environmental Optics program of the Office of Naval Research and the Biogeochemistry program of NASA. C. D. Mobley acknowledges support of the Environmental Optics program of the Office of Naval Research, which also supported in part the development of the Hydrolight model.

## References

* Akulichev and Bulanov (1987) Akulichev, V. and V. Bulanov, 1987: The study of sound backscattering from micro-inhomogeneities in sea water. In _Progress in underwater acoustics_, Merklinger, H. M., editor. Plenum Press, New York, xv + 839.
* Baldy (1988) Baldy, S., 1988: Bubbles in the close vicinity of breaking waves: statistical characteristics of the generation and dispersion mechanism. _J. Geophys. Res._, **93**(C7), 8239-8248.
* Bohren (1987) Bohren, C. F., 1987: _Clouds in a glass of beer: simple experiments in atmospheric physics_. Wiley, New York, xv + 195. The Wiley science editions.
* Bohren and Huffman (1983) Bohren, C. F. and D. R.
Huffman, 1983: _Absorption and scattering of light by small particles_. Wiley, New York, xiv + 530 p.
* Bricaud and Morel (1986) Bricaud, A. and A. Morel, 1986: Light attenuation and scattering by phytoplankton cells: a theoretical modeling. _Appl. Opt._, **25**(4), 571-580.
* Bricaud et al. (1981) Bricaud, A., A. Morel, and L. Prieur, 1981: Absorption by dissolved organic matter of the sea (yellow substance) in the UV and visible domains. _Limnol. Oceanogr._, **26**, 43-53.
* Bukata (1995) Bukata, R. P., 1995: _Optical properties and remote sensing of inland and coastal waters_. CRC Press, Boca Raton, Fla., 362.
* Farmer and Lemon (1984) Farmer, D. M. and D. D. Lemon, 1984: The influence of bubbles on ambient noise in the ocean at high wind speeds. _J. Phys. Oceanogr._, **14**(11), 1762-1778.
* Fiedler-Ferrari et al. (1991) Fiedler-Ferrari, N., H. M. Nussenzveig, and W. J. Wiscombe, 1991: Theory of near-critical-angle scattering from a curved interface. _Phys. Rev. A_, **43**(2), 1005-1038.
* Flatau et al. (1999) Flatau, P. J., J. Piskozub, and J. R. V. Zaneveld, 1999: Asymptotic light field in the presence of a bubble-layer. _Optics Express_, **5**(5), 120-124.
* Frouin et al. (1996) Frouin, R., M. Schwindling, and P.-Y. Deschamps, 1996: Spectral reflectance of sea foam in the visible and near-infrared: In situ measurements and remote sensing implications. _J. Geophys. Res._, **101**(C6), 14361-14371.
* Gordon and Morel (1983) Gordon, H. R. and A. Y. Morel, 1983: _Remote assessment of ocean color for interpretation of satellite visible imagery: a review_. Springer-Verlag, New York, 114. Lecture notes on coastal and estuarine studies; 4.
* Gordon and Wang (1994) Gordon, H. R. and M. Wang, 1994: Influence of oceanic whitecaps on atmospheric correction of ocean-color sensors. _Appl. Opt._, **33**(33), 7754-7763.
* Isao et al. (1990) Isao, K., S. Hara, K. Terauchi, and K. Kogure, 1990: Role of sub-micrometre particles in the ocean. _Nature_, **345**(6272), 242-244.
* Johnson and Cooke (1979) Johnson, B. D. and R. C. Cooke, 1979: Bubble populations and spectra in coastal waters: a photographic approach. _J. Geophys. Res._, **84**(C7), 3761-3766.
* Johnson and Wangersky (1987) Johnson, B. D. and P. J. Wangersky, 1987: Microbubbles: stabilization by monolayers of adsorbed particles. _J. Geophys. Res._, **92**(C13), 14641-14647.
* Kalnay et al. (1996) Kalnay, E., M. Kanamitsu, R. Kistler, et al., 1996: The NCEP/NCAR 40-year reanalysis project. _Bull. Amer. Meteorol. Soc._, **77**(3), 437-471.
* King et al. (1993) King, M. D., L. F. Radke, and P. V. Hobbs, 1993: Optical properties of marine stratocumulus clouds modified by ships. _J. Geophys. Res._, **98**(D2), 2729-2739.
* Kraus and Businger (1994) Kraus, E. B. and J. A. Businger, 1994: _Atmosphere-ocean interaction_. Oxford University Press/Clarendon Press, New York/Oxford, xxii + 362 p. Oxford monographs on geology and geophysics; no. 27.
* Medwin (1977) Medwin, H., 1977: In situ acoustic measurements of microbubbles at sea. _J. Geophys. Res._, **82**(6), 971-976.
* Medwin and Breitz (1989) Medwin, H. and N. D. Breitz, 1989: Ambient and transient bubble spectral densities in quiescent seas and under spilling breakers. _J. Geophys. Res._, **94**(C9), 12751-12759.
* Mobley (1994) Mobley, C. D., 1994: _Light and water: radiative transfer in natural waters_. Academic Press, San Diego, xvii + 592.
* Mobley et al. (1994) Mobley, C. D., B. Gentili, H. R.
Gordon, et al., 1994: Comparison of numerical models for computing underwater light fields. _Appl. Opt._, **32**(36), 7484-7504.
* Morel (1988) Morel, A., 1988: Optical modeling of the upper ocean in relation to its biogenous matter content (case I waters). _J. Geophys. Res._, **93**(C9), 10749-10768.
* Mulhearn (1981) Mulhearn, P. J., 1981: Distribution of microbubbles in coastal waters. _J. Geophys. Res._, **86**(C7), 6429-6434.
* O'Hern et al. (1988) O'Hern, T. J., L. d'Agostino, and A. J. Acosta, 1988: Comparison of holographic and Coulter Counter measurements of cavitation nuclei in the ocean. _Trans. ASME, J. Fluids Eng._, **110**(2), 200-207.
* Pope and Fry (1997) Pope, R. M. and E. S. Fry, 1997: Absorption spectrum (380-700 nm) of pure water: II. Integrating cavity measurements. Submitted to _Appl. Opt._
* Stephens et al. (1990) Stephens, G. L., S.-C. Tsay, P. W. Stackhouse, Jr., and P. J. Flatau, 1990: The relevance of the microphysical and radiative properties of cirrus clouds to climate and climatic feedback. _J. Atmos. Sci._, **47**(14), 1742-1753.
* Stramski (1994) Stramski, D., 1994: Gas microbubbles: An assessment of their significance to light scattering in quiescent seas. In _Ocean optics XII: 13-15 June 1994, Bergen, Norway_, Jaffe, J. S., editor. SPIE, Bellingham, Wash., USA, 704-710. Proceedings of SPIE; v. 2258.
* Thorpe (1984) Thorpe, S. A., 1984: The effect of Langmuir circulation on the distribution of submerged bubbles caused by breaking wind waves. _J. Fluid Mech._, **14**, 151-170.
* Thorpe (1995) Thorpe, S. A., 1995: Dynamical processes of transfer at the sea surface. _Prog. Oceanogr._, **35**(4), 315-352.
* Thorpe et al. (1992) Thorpe, S. A., P. Bowyer, and D. K. Woolf, 1992: Some factors affecting the size distributions of oceanic bubbles. _J. Phys. Oceanogr._, **22**(4), 382-389.
* Tyler (1977) Tyler, J. E., 1977: _Light in the sea_. Dowden, Hutchinson and Ross, Stroudsburg, Pa., xiii + 384 p. Benchmark papers in optics; 3.
* Vagle and Farmer (1992) Vagle, S. and D. M. Farmer, 1992: The measurement of bubble-size distributions by acoustical backscatter. _J. Atmos. Ocean. Technol._, **9**(5), 630-644.
* Walsh and Mulhearn (1987) Walsh, A. L. and P. J. Mulhearn, 1987: Photographic measurements of bubble populations from breaking wind waves at sea. _J. Geophys. Res._, **92**(C13), 14553-14565.
* Wu (1988a) Wu, J., 1988a: Bubbles in the near-surface ocean: a general description. _J. Geophys. Res._, **93**(C1), 587-590.
* Wu (1988b) Wu, J., 1988b: Variations of whitecap coverage with wind stress and water temperature. _J. Phys. Oceanogr._, **18**(10), 1448-1453.
* Zaneveld (1995) Zaneveld, J. R. V., 1995: A theoretical derivation of the dependence of the remotely sensed reflectance of the ocean on the inherent optical properties. _J. Geophys. Res._, **100**(C7), 13135-13142.
We report on the influence of submerged bubble clouds on the remote sensing properties of water. We show that the optical effect of bubbles on radiative transfer and on estimates of ocean color is significant. We present a global map of the volume fraction of air in water derived from daily wind speed data. This map, together with the parameterization of the microphysical properties, shows the possible significance of bubble clouds for the albedo of incoming solar energy.

Keywords: Remote sensing reflectance, Bubble clouds, Radiative transfer
# Monte Carlo study of the scattering error of a quartz reflective absorption tube

Jacek Piskozub, Institute of Oceanology PAS, Powstancow Warszawy 55, 81-712 Sopot, Poland, [email protected]

Piotr J. Flatau, Scripps Institution of Oceanography, University of California, San Diego, La Jolla, CA 92130-0221, [email protected]

J. Ronald V. Zaneveld, College of Oceanic and Atmospheric Sciences, Oregon State University, 104 Ocean Admin Bldg, Corvallis, OR 97331-5503

## 1 Introduction

Light absorption is the essential element required for marine phytoplankton growth. The region of the upper ocean illuminated by sunlight is responsible for most of marine primary production. Absorption and scattering by phytoplankton control the light field (at least for Case 1 waters, which comprise a large majority of ocean waters) and thus affect the spectral reflectance of the ocean surface. Such spectral changes provide information about ocean color (Gordon and Morel, 1983). In all radiative transfer modeling of the marine environment the inherent optical properties (IOPs) are either the necessary input parameters or, in the case of inverse problems, the output of the calculations. This makes them important in marine optics studies. The measurement of the attenuation coefficient is relatively easy; an attenuation meter consists of a collimated light source and a collimated light detector at a known distance. The only major problem in the attenuation measurement is that photons from outside sources, or photons reflected off the instrument, may be scattered into the light beam, causing an underestimation of the attenuation coefficient. The detector also has a finite aperture, so that some of the forward-scattered photons are counted as part of the direct beam, again causing an underestimation. However, the measurement of the components of attenuation, the absorption and scattering coefficients, is inherently much more difficult, and researchers have been striving to minimize the measurement errors of these parameters for most of this century. The central idea behind any absorption measurement is to project a beam of light through an absorbing medium. If one could measure all of the unabsorbed light of the direct beam, as well as all of the scattered light, the only light lost would be the absorbed light. Thus, absorption meters tend to be arranged to collect as much of the scattered light as possible. The absorption coefficient measured in any optical device has two major sources of error, both of which are due to the fact that natural suspensions tend to scatter light as well as absorb it. First, the scattered light traverses a longer path through the absorbing medium and so is more likely to be absorbed (the path length amplification factor). Second, not all of the scattered light is collected, due to the geometry of the absorption meter (the scattering error). One of the options for measuring absorption in situ is the cylindrical reflective tube. Such a tube needs to be long enough to provide a sufficient optical path to measure the low absorption values typical of Case 1 waters below 580 nm. Any photon scattered forward off the instrument axis is, at least in theory, reflected by the tube walls until it reaches the detector. An ideal instrument of this kind should also have a reflector inside the source end to collect the backscattered photons. The first working prototypes of such a device were developed by Zaneveld and co-workers (Zaneveld and Bartz, 1984; Zaneveld et al., 1990).
One of the greatest problems in making a practical reflective tube is the reflectivity of the walls. A Monte Carlo study of the performance of a reflective tube absorption meter (Kirk, 1992) shows that the results quickly deteriorate as the reflectivity decreases from 100%. As it is virtually impossible to produce perfectly reflecting walls, especially ones that would not deteriorate with prolonged use of the instrument, the concept of a quartz glass tube surrounded by air was proposed instead (Zaneveld et al., 1990). Assuming smooth tube surfaces, all photons encountering the wall at an angle to the wall surface smaller than the critical angle, \(41^{\circ}\), must be internally reflected. Therefore, a clean quartz reflective tube should collect all photons scattered in the angular range between \(0^{\circ}\) and \(41^{\circ}\) (if multiple scattering is neglected). However, the loss of photons scattered at angles above \(41^{\circ}\) is the main theoretical source of error for quartz tube absorption meters. The Monte Carlo calculations by Kirk (1992) were conducted for a prototype absorption tube with an almost parallel light beam, resembling the WET Labs ac-9 absorption meter. The results showed that the relative error of absorption is always positive and increases linearly with the ratio of scattering to absorption. The error increases with decreasing wall reflectance and decreasing acceptance angle of the receiver. Another study, by Hakvoort and Wouts (1994), used a Lambertian light source; in that case the absorption error decreases with decreasing angle of photon acceptance. However, early prototypes of HiStar, the new WET Labs spectrophotometer, have a diverging beam limited to \(20^{\circ}\). The receiving end of the tube is illustrated in Fig. 1. It consists of a Light Shaping Diffuser (LSD) in front of a lens. The fiber transmits light to a spectrometer. Such an arrangement results in a large receiving area which, through the use of the LSD and the lens, translates into the small acceptance angle needed by the spectrometer. In this paper we take into account the reflections at both ends of the tube, which were neglected by previous studies of this kind. We also extend previous results by considering a more realistic arrangement, by introducing weighting functions that show the scattering error quantitatively as a function of angle, and by providing calculations for some cases of practical interest.

## 2 Calculation Setup

The Monte Carlo code used was adapted from the code written to determine the effects of self-shading on an in-water upwelling irradiance meter (Piskozub, 1994). This is a forward Monte Carlo algorithm, meaning that the photons are traced in the forward direction starting from the light source. An absorption event ends a photon's history. The low values of the optical depth inside the absorption tube make this approach comparatively efficient. The absorption tube studied (called henceforth the \(\alpha\)-TUBE) is a cylindrical shell of inner radius r = 0.006 m and length d = 0.23 m, which corresponds to the dimensions of the HiStar quartz reflective tube (see Fig. 1). The thickness of the quartz wall is \(\Delta\)r = 0.002 m. The indices of refraction used are 1.33 for water and 1.41 for quartz. The photons inside the \(\alpha\)-TUBE are traced along their three-dimensional trajectories.
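The \(41^{\circ}\) figure quoted above follows directly from these indices. Since \(n\sin\theta\) is conserved through the quartz layer, total internal reflection at the outer quartz-air surface is governed by the water-air pair alone. A quick check (my own sketch, not part of the original code):

```python
import math

n_water = 1.33   # n*sin(theta) is conserved through the quartz layer,
                 # so the quartz index (1.41) drops out of the reflection condition

theta_c = math.degrees(math.asin(1.0 / n_water))   # critical angle from the wall normal
print(theta_c)          # ~48.8 degrees
print(90.0 - theta_c)   # ~41.2 degrees measured from the wall surface (and the tube axis)
```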
The scattering and absorption events inside the volume of the water sample are governed by the inherent optical properties of the medium: the absorption coefficient a, the scattering coefficient b, and the scattering phase function \(\beta(\theta)\). Random numbers are used to decide whether a given photon traveling from point (x,y,z) in direction \((\theta,\varphi)\) will reach the wall or the end of the cylinder. Otherwise, the photon ends its trajectory inside the liquid medium, and the type of the event (scattering or absorption) is determined by comparing a random number to the single scattering albedo \(\omega_{0}\). All random numbers used in the code are uniformly distributed in the open interval (0, 1). We define the Cartesian coordinate system such that the axis of the tube coincides with the z-axis, and the center of the source end of the tube defines the origin. Therefore, \(\theta\) is the angle between the photon direction and the tube axis, and \(\varphi\) is the angle of the projection of the photon direction onto the x-y plane. In every scattering event the new direction of the photon is chosen using the relevant phase function. The source of photons is assumed to be a circle of radius r = 1.5 mm emitting photons into a cone of \(25^{\circ}\) half-width; this represents the fiber head. The angular distribution of photons is assumed to be Lambertian up to \(25^{\circ}\). The position of the photon entering the tube itself is calculated taking into account the distance from the fiber head to the tube entrance (0.019 m) and refraction from air to water at the mouth of the reflective tube. Such an \(\alpha\)-TUBE, as defined above, is similar to the prototype WET Labs HiStar, but it does not precisely model the detailed radiance structure surrounding the source and receiver ends of the reflective tube. We believe that those details are of minor importance in comparison to neglecting the albedo of the tube ends. We therefore introduced an albedo of the source end for photons returning to it from inside the tube, which we estimated as 0.3 (0.2 diffuse and 0.1 specular). This is a rough estimate of the combined effect of the input window and the silver-colored metal surface around the fiber head. The quartz walls are treated as ideally smooth, reflecting and refracting photons according to geometrical optics. The direction of a refracted photon is determined by Snell's law, and the probabilities of reflection and refraction by the Fresnel formulas; no polarization effects are included. The photons are also followed inside the quartz, which is assumed to be non-absorbing. All photons leaving the outer surface of the quartz tube are treated as lost. The tube is assumed to be surrounded by an ideally black medium.
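The event logic just described can be summarized in a few lines. The following is an illustrative sketch of the sampling steps, not the actual code of Piskozub (1994):

```python
import numpy as np

rng = np.random.default_rng(1)

def trace_event(a, b):
    """One interaction step of the forward Monte Carlo described above.

    Samples a free path length from the attenuation c = a + b, then uses the
    single-scattering albedo w0 = b/c to decide between scattering and absorption.
    Returns (path_length_m, 'scatter' | 'absorb').
    """
    c = a + b
    s = -np.log(rng.random()) / c                      # exponential free path
    event = 'scatter' if rng.random() < b / c else 'absorb'
    return s, event

# Example with the test IOPs used later in the paper:
print(trace_event(a=0.2, b=0.8))
```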
The receiver end of the tube is treated as a smooth surface reflecting 8% of the incident photons. This corresponds to the angular average of the reflection from the glass window and the Light Shaping Diffuser (LSD) used. It is assumed that a photon hitting the diffuser plate at an angle \(\theta\) reaches the receiver with a probability proportional to the angular diffusion distribution (or scattering angle profile) of the LSD. The Light Shaping Diffuser used in the instrument is a Physical Optics Corporation \(20^{\circ}\) LSD. We also use two other diffusers in the calculations: a \(60^{\circ}\) LSD and a \(95^{\circ}\) LSD. The angles are half-widths of the angular diffusion function; this means that, for example, a \(60^{\circ}\) LSD has a 50% transmission efficiency at an angle of \(30^{\circ}\). The angular diffusion distributions used were determined by least-squares fits to the experimental curves provided by the producer. The \(20^{\circ}\) LSD was approximated by a sum of two Gaussian functions (with all angles in degrees):

\[L_{20}(\theta)=A\exp\left[-0.5\left(\frac{\theta}{B}\right)^{2}\right]+(1-A)\exp\left[-0.5\left(\frac{\theta}{C}\right)^{2}\right] \tag{1}\]

where \(A=0.85\), \(B=8.0^{\circ}\), \(C=15.3^{\circ}\). The \(60^{\circ}\) LSD was approximated by a single Gaussian function:

\[L_{60}(\theta)=A\exp\left[-0.5\left(\frac{\theta}{B}\right)^{2}\right] \tag{2}\]

where \(A=1.0\) and \(B=24.5^{\circ}\). Finally, the \(95^{\circ}\) LSD was approximated by a (decreasing) Gompertz function (Gompertz, 1825):

\[L_{95}(\theta)=A\exp\left[-\exp\left(\frac{\theta-\theta_{0}}{B}\right)\right] \tag{3}\]

where \(A=1.0\), \(B=7.33^{\circ}\) and \(\theta_{0}=50.75^{\circ}\). The shapes of the three LSD diffusion angle profiles are presented in Figure 2.

Figure 1: Schematic representation of the reflective tube absorption meter setup discussed in the paper.

Figure 2: Light diffusing characteristics of the Light Shaping Diffusers for three different types of diffusers.

The standard Petzold scattering phase function for open turbid waters was used (Petzold, 1972), except for the calculations of the effect of the phase function shape on the measurement error, where Henyey-Greenstein phase functions (Henyey and Greenstein, 1941) were used. Each Monte Carlo calculation was performed twice: once to determine how many photons are recorded if there is no absorption and scattering (\(P_{0}\)), and once for the IOPs (absorption, scattering and phase function) of the aquatic medium being studied (P). Following Kirk (1992), we define the measured absorption \(a_{m}\) as

\[a_{m}=(1/d)\ln(P_{0}/P) \tag{4}\]

where d is the length of the cylinder. It must be noted that the first program run corresponds to calibration of the instrument in "ideal water," not in air. We use the water index of refraction for this medium so as not to influence the new photon direction in refraction events. Therefore, the results are different from what one would obtain if the tube were filled with air. Such a non-scattering and non-absorbing liquid medium does not exist. However, it is the most natural reference to use in calculations aimed at studying the calibration of the instrument, because it does not introduce any photon losses due to scattering (b=0) and does not change the behavior of photons at the quartz-liquid border.
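Equations (1)-(4) translate directly into code. The sketch below writes Eq. (3) in its decreasing form (as above) so that transmission falls from ~1 at normal incidence to 50% near the quoted half-width; the final function is the measured-absorption definition of Eq. (4):

```python
import numpy as np

def lsd20(theta):
    """20-degree LSD, Eq. (1): sum of two Gaussians (theta in degrees)."""
    A, B, C = 0.85, 8.0, 15.3
    return A * np.exp(-0.5 * (theta / B)**2) + (1 - A) * np.exp(-0.5 * (theta / C)**2)

def lsd60(theta):
    """60-degree LSD, Eq. (2): single Gaussian."""
    return np.exp(-0.5 * (theta / 24.5)**2)

def lsd95(theta):
    """95-degree LSD, Eq. (3): decreasing Gompertz function."""
    B, theta0 = 7.33, 50.75
    return np.exp(-np.exp((theta - theta0) / B))

def measured_absorption(P0, P, d=0.23):
    """Eq. (4): a_m = (1/d) * ln(P0/P), with d the tube length in meters."""
    return np.log(P0 / P) / d

# Half-width checks: transmission near 0.5 at half the nominal diffuser angle.
print(lsd60(30.0), lsd95(47.5))   # both ~0.5, consistent with the fits being approximate
```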
## 3 Results

### Semi-Analytical Considerations

In this section we present qualitative arguments related to the optical phenomena influencing the fate of photons in the absorption tube. The purpose is to provide approximate results that can be compared with the exact Monte Carlo results later; the semi-analytic results give a better understanding of the underlying physics. If, for example, the IOPs of the studied water sample are assumed to be b = 0.8 m\({}^{-1}\), a = 0.2 m\({}^{-1}\), and the Petzold San Diego Harbor (turbid water) phase function, one can expect that 4.5% (\(1-\exp(-ad)\)) of all photons will be absorbed and 16.8% (\(1-\exp(-bd)\)) will be scattered in the water volume inside the quartz tube. The phase function used determines that 92.9% of all scattering events take place in the 0-41\({}^{\circ}\) range. Assuming, for simplicity, that before scattering all photons traveled parallel to the tube axis, all those scattered photons will be reflected back into the tube on every encounter with the tube wall. Similarly, all photons backscattered into the range 139-180\({}^{\circ}\) stay in the tube on their way back to the source end; these represent 0.3% of all scattered photons. The remaining 6.8% of scattered photons are scattered into the range 41-139\({}^{\circ}\). These photons have a large probability of leaving the tube on the first encounter with the wall. The angles to the z-axis at which those photons travel make it virtually certain that they will leave the tube before reaching either end, unless they are scattered close to \(41^{\circ}\). For example, at \(\theta=45^{\circ}\) a photon scattered in the very center of the tube will hit the wall 9 times before reaching the end of the tube, making the probability of not leaving it \(<10^{-8}\). The 6.8% of scattered photons that are lost translates into 1.1% of all traced photons being lost through the walls. After the absorption and scattering losses are taken into account, 94.3% of all traced photons reach the receiver end of the tube. Thus, the assumed 8% specular albedo of the receiver end means that 7.5% of all traced photons are reflected from the receiver end of the tube. Absorption and scattering losses result in only 7.1% of all initial photons coming back to the source end after being reflected. The backscattered photons, which are attenuated over a pathlength approximately equal to the tube length, increase this number to 7.2%. Another important aspect of a quartz absorption tube is the length of the path that the photons travel in quartz instead of water. Any distance traveled in quartz decreases the measured attenuation, due to the negligible absorption coefficient of quartz in the visible range of the spectrum (Kirk, 1992): the receiver is reached by more photons because the path through the absorbing medium is shorter. In the studied tube geometry, the ratio of tube diameter to wall thickness is only 6:1, and considering that every time a photon crosses the water volume it needs to cross the wall twice (out of and back into the tube), it would seem that this effect must be overwhelming. However, due to Snell's law and the narrow angles between the direction of most photons and the wall, the average path of photons in quartz is much smaller.
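These back-of-the-envelope numbers are easy to verify. The sketch below reproduces the quoted percentages; the wall-hit count is a rough geometric estimate of my own, not a reproduction of the paper's calculation:

```python
import math

a, b, d = 0.2, 0.8, 0.23   # m^-1, m^-1, m

absorbed  = 1.0 - math.exp(-a * d)   # ~0.045 -> the 4.5% quoted above
scattered = 1.0 - math.exp(-b * d)   # ~0.168 -> the 16.8% quoted above
print(absorbed, scattered)

# A photon scattered at 45 deg in the tube centre advances roughly one tube
# diameter (2r) axially between wall hits, so over the remaining half-length:
r = 0.006
print((d / 2) / (2 * r))             # ~9.6, consistent with the "9 times" estimate
```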
### Sanity Checks of the Monte Carlo Results

Even if rough approximations of the results are possible with simple arithmetic, Monte Carlo modeling includes all optical phenomena taking place inside the tube. This can be illustrated by comparing the approximate values from the previous section to the results of a Monte Carlo run using 40 million photons with the same IOPs. The number of photons lost through the walls is 1.74% (estimated above as 1.1%). The discrepancy can be explained by the divergence of the input beam: photons entering the tube at an angle may leave it through the walls even if scattered from their original direction by less than the angle of total internal reflection, resulting in higher scattering losses. The Monte Carlo derived absorption loss is 4.09% (semi-analytically estimated as 4.5%). The reason is that part of the photon path lies in quartz instead of water. By tracing each photon individually, the Monte Carlo code makes it possible to calculate the average path traveled by the photons: 21.15 cm in water and 2.83 cm in quartz. This means that the path in water is 8% shorter than the length of the reflective tube, a value much higher than that calculated by Kirk (1992) for a parallel beam, in which only the scattered photons encounter the walls. The losses by absorption on the front end of the tube are 4.84% of all photons (estimated as 5.0%). The small difference is the net result of smaller absorption losses and greater scattering losses. The effect of the shortened optical path described above is not very grave if the way the instrument is calibrated is taken into account. All calibration techniques involve a comparison of the received signal (that is, the number of photons reaching the receiver) for the measured sample with pure water or air values. Using pure water almost completely removes the error due to non-scattered photons traveling partly inside the wall, as the effect is identical for water of any IOPs. Calibrating the instrument in air leaves some error due to the path length in quartz, because air has a different index of refraction, making the path length in quartz shorter. There is, however, a much bigger source of error in calibrating the instrument without water: the removal of total internal reflection at the quartz wall when both its sides are surrounded by the same medium.

### Variation in the Scattering-Absorption Ratio

It was shown by Kirk (1992) that, for a reflective tube absorption meter propagating an almost parallel beam, the ratio of the measured to the true value of the absorption coefficient \(a_{m}/a\) increases linearly with \(b/a\), at a rate depending on the phase function used. We decided to test whether such a relationship would hold for the relatively divergent light beam and limited acceptance angle of the \(\alpha\)-TUBE. Figure 3 shows that this is indeed the case for all studied angles of acceptance (depending on the LSD type used). The slope coefficients w of the linear fit

\[a_{m}=a+wb \tag{5}\]

are 0.194 for the \(20^{\circ}\) LSD, 0.139 for the \(60^{\circ}\) LSD, and 0.0957 for the \(95^{\circ}\) LSD. These results suggest that the scattering losses of the instrument increase with decreasing angle of acceptance of the receiver. It must be noted that the values of the coefficient w depend on the scattering phase function used. Figure 3 shows that there is a small offset in Equation 5. Its general version, \(a_{m}=a+wb+o\) (where o is the offset), can be transformed, using the attenuation \(c=a+b\), to

\[a=\frac{a_{m}-wc-o}{1-w} \tag{6}\]

which, for a given phase function and absorption tube geometry, allows the true a value to be determined if the attenuation c is measured independently. The parameters w and o may be calculated for the given absorption meter geometry if the phase function is known, at least approximately.
The values of the coefficient w for the physically important range of the Henyey-Greenstein asymmetry parameter g and the \(20^{\circ}\) LSD receiver are shown in Figure 4.

### Variation in the Scattering Phase Function

The measurement error in absorption depends on the scattering phase function of the aquatic medium (Kirk, 1992). To study the effect of the scattering phase function on the absorption measured by the given instrument setup with the \(20^{\circ}\) LSD, we performed a sequence of Monte Carlo calculations for the Henyey-Greenstein phase function (Henyey and Greenstein, 1941), varying the asymmetry parameter g:

\[\beta(\theta)=\frac{1-g^{2}}{\left(1+g^{2}-2g\cos\theta\right)^{3/2}} \tag{7}\]

where g is a parameter determining the shape of the phase function. In this paper we consider g ranging from \(g=0\) (isotropic scattering) to \(g=1\) (purely forward scattering). The results in Fig. 5 show \(a_{m}/a\) as a function of the asymmetry parameter. They were obtained by running the code for the studied tube equipped with each of the three LSD plates (\(20^{\circ}\), \(60^{\circ}\), and \(95^{\circ}\)). As expected, for g approaching 1 the \(a_{m}/a\) ratio is close to 1, as all photons are scattered almost in the direction they were headed before the scattering. For the more realistic range of g values between 0.8 and 0.9 (Mobley, 1994), the measured absorption increases as g decreases, due to the greater scattering losses of photons. The value of \(a_{m}/a<1\) for \(g=0.95\) and the \(95^{\circ}\) LSD is not a statistical error, but the effect of the slightly longer path of scattered photons in the quartz, in comparison to the non-scattering "ideal water," which results in a decrease of the measured absorption. For small g values, the \(a_{m}/a\) values (Fig. 5) approach the maximum (a+b)/a, at which all the scattered photons are lost and the measured value of absorption equals the total attenuation; in the case studied, (a+b)/a = 5. However, even for an isotropic scattering phase function (g=0) the scattering error does not reach this maximum value, because some photons are scattered at angles small enough to be recorded by the receiver. Again, the figure shows that increasing the angle of acceptance of the receiver decreases the absorption error for all realistic phase functions.

Figure 3: Ratio of the measured to the true absorption coefficient \(a_{m}/a\) as a function of the scattering-absorption ratio \(b/a\), with \(a=0.2\,m^{-1}\) and the Petzold "turbid" phase function, for three different Light Shaping Diffusers in front of the receiver lens.

Figure 4: The slope coefficient w as a function of the Henyey-Greenstein asymmetry factor g for the \(20^{\circ}\) LSD receiver geometry.

Figure 5: Ratio of the measured to the true absorption coefficient \(a_{m}/a\) as a function of the Henyey-Greenstein asymmetry parameter g; \(a=0.2\,m^{-1}\), \(b=0.8\,m^{-1}\), for three different Light Shaping Diffusers in front of the receiver lens. Solid lines are third-order polynomial fits.
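Equation (7) can be sampled exactly by inverting its cumulative distribution, which is the standard way Monte Carlo codes draw scattering angles; the paper does not spell out its sampling scheme, so this is a standard-technique sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hg_costheta(g, n):
    """Draw n values of cos(theta) from the Henyey-Greenstein phase function, Eq. (7).

    Uses the standard inverse-CDF formula; g = 0 reduces to isotropic scattering.
    """
    xi = rng.random(n)
    if abs(g) < 1e-8:
        return 2.0 * xi - 1.0
    term = (1.0 - g * g) / (1.0 - g + 2.0 * g * xi)
    return (1.0 + g * g - term * term) / (2.0 * g)

# Sanity check: the mean of cos(theta) reproduces the asymmetry parameter itself.
print(sample_hg_costheta(0.85, 200_000).mean())   # ~0.85
```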
### Variation in the Acceptance Angle

Results presented by Kirk (1992) for a reflective tube absorption meter with an almost parallel beam show a rapidly increasing value of \(a_{m}/a\) as the acceptance angle decreases below \(90^{\circ}\). On the other hand, Hakvoort and Wouts (1994) suggest that for a reflective tube with a Lambertian light source and metallic reflective walls, the measurement error increases with the acceptance angle of the receiver. The Monte Carlo code was run for several values of the receiver angle of acceptance to test whether a similar situation arises for the \(\alpha\)-TUBE. No Light Shaping Diffuser was used in this case. The results are presented in Figure 6.

Figure 6: Ratio of the measured to the true absorption coefficient \(a_{m}/a\) as a function of the acceptance angle of the receiver; \(a=0.2\,m^{-1}\), \(b=0.8\,m^{-1}\), Petzold turbid phase function.

It can be seen that for large acceptance angles (\(>40^{\circ}\)) the \(a_{m}/a\) values have a shape similar to that of the parallel beam case. However, for acceptance angles in the \(0-40^{\circ}\) range a minimum can be observed, instead of the high peak seen in the parallel beam case. The reason is that the source beam is more divergent than the angle of acceptance. For small acceptance angles the receiver accepts some scattered photons that would not be accepted had they reached the receiver without a scattering event, because their angle of incidence is reduced by the scattering event. This reduces the scattering error of the absorption measurement. Unlike the Lambertian source case, there is no major increase of \(a_{m}/a\) values for large acceptance angles (although a small increase above \(120^{\circ}\) may be discerned). It should be noted that the LSD plate used in the instrument corresponds roughly to an acceptance angle of \(20^{\circ}\). Although this angle is close to a local minimum of \(a_{m}/a\), it is still within the range of the highest levels of absorption error.

### Angular Function of Photon Loss Probability \(W(\theta)\)

Full understanding of the mechanism of scattering loss is not possible without determining the angular dependence of the scattering error of the reflective tube absorption meter. The fraction of photons scattered at a given angle that are subsequently lost can be defined as a loss function \(W(\theta)\). The angular integral of this function multiplied by the phase function is the error of the measured absorption value, \(\varepsilon=a_{m}-a\):

\[\varepsilon=\int_{0}^{\pi}W(\theta)\beta(\theta)\sin(\theta)d\theta \tag{8}\]

The advantage of the W function is that it does not depend on the shape of the phase function but only on the tube geometry. This is true only if multiple scattering is neglected, because a second scattering event influences the fate of a photon already scattered. In order to differentiate among the possible sources of the scattering loss error of the absorption value, we calculate three variants of the W function: (a) \(W_{0}(\theta)\), for scattering losses due to photons lost through the tube walls; (b) \(W_{1}(\theta)\), same as \(W_{0}(\theta)\) plus scattering losses due to additional photons absorbed by the source ("front") end of the tube; (c) \(W_{2}(\theta)\), same as \(W_{1}(\theta)\) plus scattering losses due to additional photons lost on the LSD. By "additional photons" we mean that only the difference between the photon losses for the actual medium and for the "ideal" water is taken into account.
In each of the three cases the actual value of W for a given angular sector is derived by dividing the number of photons scattered into that sector and subsequently lost by the sum of these photons and the number of photons scattered into the same sector that are subsequently recorded by the receiver. Fig. 7 presents the W functions calculated by running the code for 40 million photons with \(a=0.2\,m^{-1}\), \(b=0.8\,m^{-1}\), the Petzold turbid phase function, and the \(20^{\circ}\) LSD. In this case the number of multiply scattered photons is 7% of all scattered photons, which means that the influence of the phase function shape on the W function is of secondary importance. All three functions have a sigmoid shape for forward-scattered photons between \(0^{\circ}\) and \(90^{\circ}\). There is no sharp step at \(41^{\circ}\) and \(139^{\circ}\) because the divergence of the beam causes some photons to travel at an angle to the tube axis before scattering, which allows photons to escape through the wall even if scattered at an angle smaller than the total internal reflection angle. On the other hand, a photon scattered at more than \(41^{\circ}\) back towards the axis may survive the encounter with the wall without being refracted out of the tube. The symmetrical shape of \(W_{0}\) is caused by the fact that losses on the "front" end are not taken into account: a photon backscattered at angle \(180^{\circ}-\phi\) has the same chance of being lost through the walls as one scattered at \(\phi\). The symmetry is broken only by the assumed diffuse albedo of the light source end of the tube, because some photons leave the tube through the wall after being diffused at the source end. Adding the photon losses on the light source end of the tube (function \(W_{1}\)) changes the symmetry. Most of the backscattered photons are lost in this way, as well as some of the forward-scattered ones. The latter happens for two reasons. Some forward-scattered photons are reflected by the receiver end of the tube back towards the source end. Even more importantly, some of the forward-scattered photons are absorbed at the "front" end because they were scattered after they had been reflected at the receiver end. The reversed direction of those photons before scattering complicates the \(W(\theta)\) function, but we decided to include these scattering events in the statistics used to calculate the functions because they contribute to the total scattering losses. Receiver end losses are caused by more photons missing the receiver after reaching it at a wider angle following a scattering event. This changes the picture by, paradoxically, decreasing the losses for photons scattered in the forward direction (function \(W_{2}\)): scattering a photon by a small angle increases its probability of being accepted by the receiver, compared to the average for non-scattered photons. This is caused, again, by the source beam being wider than the acceptance angle of the receiver. The discrepancy between \(W_{1}\) and \(W_{2}\), especially at scattering angles close to \(0^{\circ}\), is also partly caused by the inclusion of photons scattered on their way back to the source end of the tube. One of its consequences is that the population of photons scattered at angles close to \(0^{\circ}\) differs from the population of non-scattered photons.
The shape of the W functions for angles greater than \(90^{\circ}\) is not important for the scattering error estimates, because only a small fraction of all the photons is backscattered. For the \(W_{2}\) function, which includes all the photon losses due to scattering, the loss of backscattered photons is close to 100%. Therefore, the best-fit approximations to \(W_{0}\), \(W_{1}\) and \(W_{2}\) were calculated for the range \(0-90^{\circ}\) only. The three functions were fitted with a 4-parameter sigmoid

\[W(x)=y_{0}+\frac{A}{(1+\exp(-(x-x_{0})/B))} \tag{9}\]

where for \(W_{0}\) we have \(A=0.935\), \(B=2.53^{\circ}\), \(x_{0}=26.72^{\circ}\), \(y_{0}=0.061\); for \(W_{1}\) we have \(A=0.841\), \(B=3.35^{\circ}\), \(x_{0}=25.75^{\circ}\), \(y_{0}=0.157\); and for \(W_{2}\) we have \(A=0.901\), \(B=2.58^{\circ}\), \(x_{0}=26.79^{\circ}\), \(y_{0}=0.095\).

Figure 7: The photon loss probability \(W(\theta)\) as a function of the scattering angle; \(a=0.2\,m^{-1}\), \(b=0.8\,m^{-1}\), Petzold turbid phase function. The three functions represent (a) \(W_{0}\), which defines losses of photons leaving the tube through the quartz walls; (b) \(W_{1}\), same as \(W_{0}\) plus losses on the light source end of the cylinder; (c) \(W_{2}\), same as \(W_{1}\) plus losses on the receiver end.
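With these fits, Eq. (8) can be evaluated numerically for any assumed phase function. The sketch below combines the \(W_{2}\) fit with a Henyey-Greenstein phase function (my choice for illustration; the tabulated Petzold function would be substituted in practice), normalized to unit solid-angle integral so the result is the fraction of scattered photons lost:

```python
import numpy as np

def W2(theta_deg):
    """Sigmoid fit of Eq. (9) with the W2 parameters quoted above (valid for 0-90 deg)."""
    A, B, x0, y0 = 0.901, 2.58, 26.79, 0.095
    return y0 + A / (1.0 + np.exp(-(theta_deg - x0) / B))

def hg_pdf(theta, g=0.85):
    """Henyey-Greenstein phase function normalized so its integral over solid angle is 1."""
    return (1 - g*g) / (4*np.pi * (1 + g*g - 2*g*np.cos(theta))**1.5)

theta = np.linspace(0.0, np.pi/2, 20001)   # the fit is quoted for 0-90 degrees only
dtheta = theta[1] - theta[0]
integrand = W2(np.degrees(theta)) * hg_pdf(theta) * np.sin(theta)

# Fraction of scattered photons lost; multiplying by b recovers epsilon = a_m - a, Eq. (8):
lost_fraction = 2 * np.pi * np.sum(integrand) * dtheta
print(lost_fraction)
```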
## 4 Conclusions

Monte Carlo calculations of the scattering error for the HiStar prototype absorption meter show that the different design of the instrument (a more divergent light beam and a limited angular view of the receiver), in comparison to other reflective tube absorption meters, does influence the error. Due to the additional scattering losses at the view-limited receiver, the absorption error is greater for all combinations of moderate optical parameters. It is possible to correct the error by careful calibration, and the results presented in this paper for the turbid water Petzold phase function are a step in this direction. Independent measurements of attenuation may improve the error correction for water samples of known (at least approximately) phase functions, as shown in the paper. However, the source of inter-instrumental discrepancy in absorption measurement that is more difficult to correct is the instruments' different responses to phase function variability. One possible solution is to use the photon loss function W. If the phase function of the studied sea water is known (or can be estimated by comparison with known phase functions of similar water samples), the W function makes it possible to calculate directly the value of the scattering error for the sample. Our calculations show that the main source of the absorption error is the view-limited receiver. Therefore, we suggest that, if technically possible (it would diminish the amount of light collected by the receiver optical fiber), a diffuser of wider angular characteristics should be used in the instrument receiver setup.

## 5 References

Bricaud A., Babin M., Morel A., and Claustre H., 1995: "Variability in the chlorophyll-specific absorption coefficients of natural phytoplankton: analysis and parameterization," J. Geophys. Res., 100, 13321-13332.

Gompertz B., 1825: "On the Nature of the Function Expressive of the Law of Human Mortality," Phil. Trans. Roy. Soc. London, 115, 513.

Gordon H. R. and Morel A., 1983: "Remote assessment of ocean color for interpretation of satellite visible imagery, a review," in Lecture notes on coastal and estuarine studies, vol. 4, Springer-Verlag, New York, 114 pp.

Hakvoort J.H.M. and Wouts R., 1994: "Monte Carlo modelling of the light field in a reflective tube type absorption meter," Proc. SPIE, 2258, Ocean Optics XII, 529-538.

Henyey L. G. and Greenstein J. L., 1941: "Diffuse radiation in the galaxy," Astrophys. J., 93, 70-83.

Kirk J.T.O., 1992: "Monte Carlo modeling of the performance of a reflective tube absorption meter," Appl. Opt., 31, 6463-6468.

Mobley C. D., 1994: Light and water: radiative transfer in natural waters, Academic Press, San Diego.

Petzold T. J., 1972: "Volume scattering functions for selected ocean waters," SIO Ref. 72-78, Scripps Institution of Oceanography, Univ. of California, San Diego.

Piskozub J., 1994: "Effects of surface waves and sea-bottom on self-shading of in-water optical instruments," in Ocean Optics XII, Proc. SPIE, 2258, 300-308.

Pope R.M. and Fry E.S., 1997: "Absorption spectrum (380-700 nm) of pure water. II. Integrating cavity measurements," Appl. Opt., 36, 8710-8723.

Zaneveld J.R.V. and Bartz R., 1984: "Beam attenuation and absorption meters," in Ocean Optics VII, M.A. Blizard, ed., Proc. SPIE, 489, 318-324.

Zaneveld J.R.V., Bartz R., and Kitchen J.C., 1990: "A reflective tube absorption meter," in Ocean Optics X, R.W. Spinrad, ed., Proc. SPIE, 1302, 124-136.
A Monte Carlo model was used to study the scattering error of an absorption meter with a divergent light beam and a limited acceptance angle of the receiver. Reflections at both ends of the tube were taken into account. Calculations of the effect of varying optical properties of the water, as well as of the receiver geometry, were performed. A weighting function showing the scattering error quantitatively as a function of angle was introduced. Some cases of practical interest are discussed.

In press: Journal of Atmospheric and Oceanic Technology, 2000.
# Decoherence-Free Subspaces for Multiple-Qubit Errors: (II) Universal, Fault-Tolerant Quantum Computation

Daniel A. Lidar,\({}^{1}\) Dave Bacon,\({}^{1,2}\) Julia Kempe\({}^{1,3,4}\) and K.B. Whaley\({}^{1}\)

Departments of Chemistry\({}^{1}\), Physics\({}^{2}\) and Mathematics\({}^{3}\), University of California, Berkeley, CA 94720; Ecole Nationale Superieure des Telecommunications, Paris, France\({}^{4}\)

November 4, 2021

## I Introduction

Methods to protect fragile quantum superpositions are of paramount importance in the quest to construct devices that can reliably process quantum information [1, 2]. Compared to their classical counterparts such devices feature spectacular advantages in both computation and communication, as discussed in a number of recent reviews [3, 4, 5]. The dominant source of the fragility of a quantum information processor (QIP) is the inevitable interaction with its environment. This coupling leads to _decoherence_: a process whereby the coherence of the QIP wavefunction is gradually destroyed. Formally, the evolution of an open system (coupled to an environment) such as a QIP can be described by a completely positive map [6], which can always be written in the explicit form known as the Kraus operator sum representation [7]:

\[\rho(t)=\sum_{d}A_{d}(t)\rho(0)A_{d}^{\dagger}(t). \tag{1}\]

Here \(\rho\) is the system density matrix, and the "Kraus operators" \(\{A_{d}\}\) are time-dependent operators acting on the system Hilbert space, constrained only by the normalization condition \(\sum_{d}A_{d}^{\dagger}A_{d}=I\) (to preserve \({\rm Tr}[\rho]\)).1 Decoherence is the situation where there are at least two Kraus operators that are inequivalent under scalar multiplication. The Kraus operators are in that case related to the different ways in which errors can afflict the quantum information contained in \(\rho\) [9]. Conversely, if there is only one Kraus operator, then from the normalization condition it must be unitary: \(A=\exp(-iHt)\) with \(H\) Hermitian, so that \(\rho\) satisfies the _closed_-system Liouville equation \(\dot{\rho}=-i[H,\rho]\), \(H\) being the system Hamiltonian. In this case there is no decoherence.

Footnote 1: As shown, e.g., in [8], the operator sum representation can be derived from a Hamiltonian model by considering the reduced dynamics of a system coupled to a bath \(B\): \(\rho(t)={\rm Tr}_{B}[U(t)(\rho(0)\otimes\rho_{B}(0))U^{\dagger}(t)]\). Here the trace is over the bath degrees of freedom, \(U=\exp(-iH_{SB}t)\) is the unitary evolution operator of the combined system-bath, and \(H_{SB}\) is their interaction Hamiltonian. One finds \(A_{d=(\mu,\nu)}=\sqrt{\nu}\,\langle\mu|U|\nu\rangle\), where \(|\mu\rangle,|\nu\rangle\) are bath states in the spectral decomposition of the bath density matrix \(\rho_{B}=\sum\nu\,|\nu\rangle\langle\nu|\).
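Equation (1) is straightforward to realize numerically. The sketch below applies a generic channel and verifies the normalization condition, using a standard two-operator dephasing channel as an example (my example, not one drawn from the paper):

```python
import numpy as np

def apply_channel(rho, kraus_ops):
    """Apply the operator-sum map rho -> sum_d A_d rho A_d^dagger of Eq. (1)."""
    out = sum(A @ rho @ A.conj().T for A in kraus_ops)
    # Trace preservation requires the normalization condition sum_d A_d^dagger A_d = I:
    completeness = sum(A.conj().T @ A for A in kraus_ops)
    assert np.allclose(completeness, np.eye(rho.shape[0]))
    return out

# Single-qubit dephasing with probability p: two inequivalent Kraus operators,
# so this channel decoheres (it destroys off-diagonal coherence of |+><+|).
p = 0.25
Z = np.diag([1.0, -1.0])
A0, A1 = np.sqrt(1 - p) * np.eye(2), np.sqrt(p) * Z
rho_plus = np.array([[0.5, 0.5], [0.5, 0.5]])
print(apply_channel(rho_plus, [A0, A1]))   # off-diagonals shrink from 0.5 to 0.25
```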
Two principal _encoding_ methods have been proposed to solve the decoherence problem: (i) Quantum Error Correcting Codes (QECCs) [10, 11, 12, 13, 14, 15, 16] (for a recent review see [17]); (ii) Decoherence-Free Subspaces (DFSs) [18, 19, 20, 21, 22, 23, 24], also known as "noiseless" or "error-avoiding" quantum codes. In both methods quantum information is protected against decoherence by encoding it into "codewords" (entangled superpositions of multiple-qubit states) with special symmetry properties. To exhibit these, it is useful to expand the Kraus operators over a fixed operator basis. For qubits a particularly useful basis is formed by the elements of the Pauli group: the group of tensor products of Pauli matrices \(\{\sigma_{k}^{\alpha_{k}}\}\), where \(\alpha=0,x,y,z\) (\(\sigma^{0}\) is the \(2\times 2\) identity matrix) and \(k=1,\ldots,K\) is the qubit index. An element of the Pauli group can be written as \(E_{a}=\otimes_{k=1}^{K}\sigma_{k}^{\alpha_{k}}\), where \(a=(\alpha_{1},\ldots,\alpha_{K})\). The \(4^{K+1}\) elements \(\{E_{a}\}\) of the Pauli group (we include the overall factors \(\pm 1,\pm i\) in this count) square, up to these phases, to the identity, are both unitary and Hermitian, either commute or anti-commute, and satisfy \({\rm Tr}[E_{a}^{\dagger}E_{b}]=2^{K}\delta_{ab}\). When the Kraus operators are expanded as

\[A_{d}(t)=\sum_{a}c_{ad}(t)E_{a}, \tag{2}\]

the operators \(\{E_{a}\}\) acquire the significance of representing the different physical errors that can corrupt the quantum information. The weight \(w(E_{a})\) is the number of non-zero \(\alpha_{k}\) in \(a\). Let us now assume a short-time expansion of the \(c_{ad}(t)\) (relative to the bath correlation time). The situation where only those \(E_{a}\) with \(w(E_{a})=1\) have non-vanishing \(c_{ad}(t)\) is called the "independent errors" model (assuming the \(c_{ad}\), which are essentially bath correlation functions [8], are statistically independent). Correlated errors correspond to the situation where some \(E_{a}\) with \(w(E_{a})>1\) have non-vanishing \(c_{ad}(t)\): two or more qubits are acted upon non-trivially with the same coefficient \(c_{ad}\). QECCs can be classified according to the maximum weight of the errors they can still correct (this is related to the notion of the "distance" of a code [17]). QECCs can generally deal at least with errors of weight 1. Barring accidental degeneracies, non-trivial DFSs, on the other hand, generally do not exist if there are errors with weight 1 [22]. To make these ideas more precise, let us briefly recall the definitions of QECCs and DFSs. A QECC is a subspace \({\cal C}={\rm Span}[\{|i\rangle\}]\) of the system Hilbert space with the symmetry property that different errors take orthogonal codewords \(|i\rangle\) and \(|j\rangle\) to orthogonal states [16]:

\[\langle i|E_{a}^{\dagger}E_{b}|j\rangle=\gamma_{ab}\delta_{ij}. \tag{3}\]

Here \(\gamma_{ab}\) are the elements of a Hermitian matrix \(\gamma\) and \(\delta_{ij}\) is the Kronecker delta. This property ensures that if an error \(E_{a}\) occurs it can be detected and subsequently reversed [16]. A large variety of QECCs have been found [17]. A particularly useful and large class, one which will occupy our attention in this paper, arises when one considers Abelian subgroups \(Q\) of the Pauli group.
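The error-basis properties quoted above are easy to check numerically. In the sketch below, string labels such as 'x0' are my own convention for the index \(a\); the check confirms the orthogonality relation \({\rm Tr}[E_{a}^{\dagger}E_{b}]=2^{K}\delta_{ab}\) for \(K=2\):

```python
import numpy as np
from itertools import product
from functools import reduce

PAULI = {
    '0': np.eye(2, dtype=complex),
    'x': np.array([[0, 1], [1, 0]], dtype=complex),
    'y': np.array([[0, -1j], [1j, 0]], dtype=complex),
    'z': np.array([[1, 0], [0, -1]], dtype=complex),
}

def pauli_string(a):
    """E_a = sigma^{alpha_1} (x) ... (x) sigma^{alpha_K} for a label like 'x0z'."""
    return reduce(np.kron, (PAULI[ch] for ch in a))

labels = [''.join(t) for t in product('0xyz', repeat=2)]    # all 16 two-qubit E_a
gram = np.array([[np.trace(pauli_string(a).conj().T @ pauli_string(b))
                  for b in labels] for a in labels])
print(np.allclose(gram, 4 * np.eye(16)))   # True: Tr[E_a^dag E_b] = 2^K delta_ab
```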
Given such an Abelian Pauli subgroup, or _stabilizer_, \(Q\) (we will use both terms interchangeably in this paper), its \(+1\) eigenspace is a QECC known as a _stabilizer code_ [15]. The set of errors \(\{E_{a}\}\) is correctable by this code if for every two errors \(E_{a},E_{b}\) there exists some \(q\in Q\) such that

\[\{E_{a}^{\dagger}E_{b},q\}=0. \tag{4}\]

This is because, under the stipulated condition, \(\langle i|E_{a}^{\dagger}E_{b}|j\rangle=\langle i|E_{a}^{\dagger}E_{b}q|j\rangle=-\langle i|qE_{a}^{\dagger}E_{b}|j\rangle=-\langle i|E_{a}^{\dagger}E_{b}|j\rangle\), so the matrix element vanishes and the QECC condition [Eq. (3)] is satisfied [15]. To correct an error \(E_{a}\) one simply applies the unitary operator \(E_{a}^{\dagger}\) to the code. Note that this involves active intervention: measurements to diagnose the error, and error reversal. DFSs can be viewed as highly "degenerate" QECCs, where degeneracy refers to the rank of \(\gamma\): DFSs are rank-1 QECCs (i.e., \(\gamma_{ab}=\gamma_{a}\gamma_{b}\)) [23, 25]. Equivalently, a DFS can be defined as the simultaneous eigenspace \(\tilde{\cal H}={\rm Span}[\{|\tilde{j}\rangle\}]\) of all Kraus operators [23]:

\[A_{d}|\tilde{j}\rangle=a_{d}|\tilde{j}\rangle \tag{5}\]

(\(\{a_{d}\}\) are the eigenvalues). Viewed in this way, DFSs have the remarkable property that they offer complete protection for quantum information without the need for any active intervention: \(\tilde{\rho}(t)=\sum_{d}A_{d}(t)\tilde{\rho}(0)A_{d}^{\dagger}(t)=\tilde{\rho}(0)\sum_{d}|a_{d}|^{2}=\tilde{\rho}(0)\), for \(\tilde{\rho}\) with support exclusively on \(\tilde{\cal H}\). Thus a DFS is a "quiet corner" of the system Hilbert space, which is completely immune to decoherence. Like stabilizer QECCs, DFSs can also be characterized as the \(+1\) eigenspace of a stabilizer, which however is generally _non-Abelian_ over the Pauli group [26, 27] (i.e., a DFS is generally a non-additive code [28]). Most work on DFSs to date has focused on a model of highly correlated errors, known as "collective decoherence". In this model the (non-Abelian) stabilizer is composed of tensor products of identical \(SU(2)\) rotations \(+\) contractions on all qubits. Here we will not concern ourselves with the collective decoherence model, and the term stabilizer will be reserved for the Abelian subgroups of the Pauli group. In a companion paper [29] (referred to from here on as "paper 1") we began a study of DFSs for non-collective errors. We derived a necessary and sufficient condition for a subspace to be decoherence-free when the Kraus operators are expanded as linear combinations over the elements of an arbitrary group. The decoherence-free states were shown to be those states that transform according to the one-dimensional irreducible representations (irreps) of this group. As above, it is natural to focus on the case where this group is the Pauli group. This is so not only because of the connection to stabilizer QECCs, but also because the Pauli group arises in the context of many-qubit systems, where it is often natural to expand the Hamiltonians in terms of tensor products of Pauli matrices. To find DFSs, therefore, we focus here on subgroups of the Pauli group.
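The anticommutation test in Eq. (4) reduces, for Pauli strings, to counting the positions where two strings differ non-trivially. A sketch with an assumed two-qubit stabilizer generator \(ZZ\) (the string labels follow the same convention as in the previous snippet):

```python
def paulis_anticommute(a, b):
    """True if the Pauli strings a, b (e.g. 'x0', 'zz') anticommute.

    Two tensor-product Pauli operators anticommute iff they carry different
    non-identity Paulis on an odd number of qubit positions.
    """
    clashes = sum(1 for p, q in zip(a, b)
                  if p != '0' and q != '0' and p != q)
    return clashes % 2 == 1

# Detectability per Eq. (4): E_a^dag E_b must anticommute with some q in Q.
# With q = 'zz' and single-qubit errors X1 = 'x0', X2 = '0x':
print(paulis_anticommute('x0', 'zz'))   # True:  X1 alone is detected by q
print(paulis_anticommute('xx', 'zz'))   # False: the product X1 X2 commutes with q,
                                        # so q alone cannot distinguish X1 from X2
```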
Note that the non-Abelian subgroups of the Pauli group do not have one-dimensional irreps [29], and hence in this case a DFS can be associated only with the Abelian subgroups (which of course have only one-dimensional irreducible representations). We can now define the error model that will concern us in this paper. Unlike the stabilizer-QECCs case, where the errors that the code can correct are those that anti-commute with the stabilizer, _in the DFS case the errors are the elements of the stabilizer itself_. We shall refer to these errors as "stabilizer-errors". The Abelian subgroups of the Pauli group cannot contain single-qubit operators, since these would generally generate the whole Pauli group.2 Hence, as errors, the elements of the subgroup represent _multiple-qubit_ couplings to the bath. As explained above, this is therefore a correlated-errors model, which is distinguished from previous work on DFSs in that it does not involve any spatial-symmetry assumptions. The physical relevance of this error-model was discussed in paper 1, and will be elaborated upon here. The DFS is not affected by these stabilizer-errors, but the rest of the Hilbert space is and may decohere under their influence. Several examples of DFSs corresponding to Abelian subgroups were given in paper 1. Our purpose in this sequel paper is to complete our study of this class of DFSs by showing how to perform universal fault-tolerant quantum computation on them. Footnote 2: The exceptions are: (i) the subgroup operators have constant \\(\\alpha=x,y\\) or \\(z\\) – the Pauli matrix index; (ii) the single-qubit operators act only on those qubits where all other operators act as identity. The central challenge in demonstrating universal fault-tolerant quantum computation on DFSs is to show how this can be done using only 1- and 2-body Hamiltonians, and a small number of measurements.3 Several previous publications have addressed the issue of universal quantum computation on DFSs, but left this challenge unanswered [22, 30, 31]. In Refs. [26, 27] we accomplished this task for the first time, in the collective decoherence model. Collective decoherence is the situation where all qubits are coupled in an identical manner to the bath, i.e., there is a strong _spatial symmetry_: qubit permutation-invariance. In this case, by using exchange operations, it is possible to implement universal quantum computation without ever leaving the DFS. The procedure is therefore naturally fault-tolerant. In the present paper we will show how to implement universal fault-tolerant quantum computation on DFSs that arise from the Pauli subgroup error model, without requiring any spatial symmetry assumption. However, it will not be possible to do so without leaving the DFS, thus exposing the states to the subgroup errors. As will be shown here, fault-tolerance is obtained by using the encoded states twice: in a dual DFS-QECC mode. This duality arises from the fact that the DFS remains a perfectly valid QECC for the errors that anticommute with the stabilizer. Footnote 3: By “small” we mean that the measurements do not have to be fast compared to the bath correlation time. If they are, then decoherence is avoided essentially by use of the quantum Zeno effect. There are several ways to achieve universal fault-tolerant quantum computation on stabilizer-QECCs; e.g., use of the sets of gates {Hadamard, \\(\\sigma_{z}^{1/2}\\), Toffoli} [32, 14], or {Hadamard, \\(\\sigma_{z}^{1/4}\\), controlled-NOT} [33].
Additional methods were provided in [34, 35]. Our construction reverts to the early ideas on the implementation of universal quantum computing: we use single-qubit \\(SU(2)\\) operations and a controlled-NOT gate [36, 37, 38, 39], except that these are _encoded_ operations, acting on codewords (not on physical qubits). In general such encoded operations involve multiple qubits, and are not naturally available. The key to our construction is a method to generate many-qubit Hamiltonians by composing operations on (at most) pairs of physical qubits. This is done by selectively turning certain interactions on and off. A difficulty is that the very first such step can transform the encoded states and take them outside of the DFS. However, by carefully choosing the interactions we turn on/off and their order, we show that the transformed states become a QECC with respect to the stabilizer-errors that the DFS was immune to. This fact is responsible for the fault-tolerance of our procedure. After the final interaction is turned off, the states return to the DFS, and are once again immune to the stabilizer-errors. The structure of the paper is as follows. In Section II we briefly review the main result of paper 1, and the connection between the DFSs considered here and stabilizer-QECCs. We then discuss in Section III the meaning of fault-tolerance in light of the error-model considered in this paper. In the following two sections we present the main new ideas and results of this paper: in Section IV we show how to generate many-qubit Hamiltonians by composing two- and single-qubit Hamiltonians, and in Section V we prove the fault-tolerance of this procedure. We use it to generate encoded \\(SU(2)\\) operations on the DFS qubits. Section VI shows how, by using similar methods, we can fault-tolerantly perform encoded CNOT operations on the encoded qubits, thus coupling blocks of qubits and completing the set of operations needed for universal computation. The final ingredient is presented in Section VII, where we show how to fault-tolerantly measure the error syndrome throughout our gate construction. While our main motivation in this paper is to study computation on DFSs in the presence of stabilizer-errors, it is also interesting to consider the implications of the techniques we develop here for the usual model of errors that anticommute with the stabilizer. We consider this question briefly in Section VIII, and show that our methods provide a new way to implement universal quantum computation which is fault tolerant with respect to error _detection_. We conclude and summarize in Section IX. ## II Connection between Pauli Subgroup DFSs and Stabilizer Codes In paper 1 we proved the following result: _Theorem 1. --_ Suppose that the Kraus operators belong to the group algebra of some group \\({\\cal G}=\\{G_{n}\\}\\), i.e., \\({\\bf A}_{d}=\\sum_{n=1}^{N}a_{d,n}G_{n}\\). If a set of states \\(\\{|\\tilde{j}\\rangle\\}\\) belongs to a given _one-dimensional_ irrep of \\({\\cal G}\\), then the DFS condition \\({\\bf A}_{d}|\\tilde{j}\\rangle=c_{d}|\\tilde{j}\\rangle\\) holds. If no assumptions are made on the bath coefficients \\(\\{a_{d,n}\\}\\), then the DFS condition \\({\\bf A}_{d}|\\tilde{j}\\rangle=c_{d}|\\tilde{j}\\rangle\\) implies that \\(|\\tilde{j}\\rangle\\) belongs to a _one-dimensional_ irrep of \\({\\cal G}\\). This theorem provides a characterization of DFSs in terms of the group-representation properties of the basis set in which the Kraus operators are expanded.
There are good physical reasons to choose the Pauli group as this basis set: as argued in paper 1, the Pauli group naturally appears as a basis in Hamiltonians involving qubits. Furthermore, using the Pauli group allows us to make a connection to the theory of stabilizer-QECCs. To see this consider the identity irrep, for which each element \\(G_{n}\\) in the group \\({\\cal G}\\) acts on a decoherence free state \\(|\\psi\\rangle\\) as \\[G_{n}|\\psi\\rangle=|\\psi\\rangle. \\tag{1}\\] Choosing \\({\\cal G}\\) from now on as a Pauli subgroup \\(Q\\), the DFS fixed by the identity irrep is a stabilizer code, where \\(Q\\) is the stabilizer group. As mentioned above, a stabilizer code is defined as the \\(+1\\) eigenspace of the Abelian group \\(Q\\).4 It is thus clear that the states fixed by \\(Q\\) play a dual role: _they are at once a DFS with respect to the stabilizer errors and a QECC with respect to the errors that anticommute with some element of \\(Q\\)._ Footnote 4: The DFSs corresponding to the other 1D irreps can also be turned into stabilizer codes by a redefinition of the subgroup, taking into account the minus signs appearing in the irrep under question. This kind of freedom is well known in the stabilizer theory of QECCs [15]. It is simple to verify that basic properties of stabilizer codes hold, e.g., that if the stabilizer group has \\(K-l\\) generators, then the code space (in this case the DFS) has dimension \\(2^{l}\\) (i.e., there are \\(l\\) encoded qubits) [15]. Indeed, the order of an Abelian group with \\(K-l\\) generators is \\(N=2^{K-l}\\), and we showed in paper 1 that the dimension of the DFS is \\(2^{K}/N=2^{l}\\). ## III The Meaning of Fault-Tolerance The observation that the Pauli subgroup DFSs are stabilizer codes allows us to employ some results from stabilizer theory, and aids in the analysis of when it is possible to perform universal fault-tolerant computation on these DFSs.5 Before delving into the analysis, however, we should clarify what we mean by fault-tolerance in the present context. The usual meaning of fault-tolerance, as it is used in the theory of QECC, is the following: an operation (gate \\(U\\)) is _not_ fault-tolerant if an error \\(E\\) that the code could fix before application of the gate has become an unfixable error (\\(UEU^{\\dagger}\\)) after application of the gate. For example, a single qubit phase error (\\(I\\otimes Z\\)) becomes a two-qubit phase error (\\(Z\\otimes Z\\)) due to the application of a CNOT gate [34]; if the code used could only correct single-qubit errors then as a result of the CNOT gate (unless it is applied transversally, i.e., not coupling physical qubits involved in representing the same encoded qubit) this code can no longer offer protection. In this scenario, therefore, the CNOT gate was not a fault-tolerant operation. Conversely, an operation _is_ fault-tolerant if the code offers the same protection against the errors that appear after application of the operation (\\(UEU^{\\dagger}\\)) as it does against the errors before the operation (\\(E\\)). Footnote 5: The reader may wonder whether it should not be possible to simply take over the results about universal fault tolerant computation from stabilizer theory, and apply them directly in the present case. However, a problem is encountered when that construction is applied to the error-model considered here, because multiple-qubit errors may propagate back as (non-perturbative) single-qubit errors due to interaction with a “bare” (non-DFS) ancilla.
We are indebted to Dr. Daniel Gottesman for pointing out this problem to us. A complementary ("Heisenberg" [40]) picture to the ("Schrodinger") description above is to consider the errors as unchanged and the code \\({\\cal C}\\), as well as the stabilizer \\(Q\\), as transformed after the application of each gate: \\({\\cal C}\\longmapsto U{\\cal C}\\) and \\(Q\\longmapsto UQU^{\\dagger}\\). Then fault-tolerance can be viewed as the requirement that the new code is capable of correcting the original errors. This point of view will be particularly useful for our purposes. In our case the original errors are the elements of the Pauli subgroup \\(Q\\) (the stabilizer), and the gates \\(U\\) will turn out not to preserve the original code. Nevertheless, we will show that to the new stabilizer \\(Q^{\\prime}=UQU^{\\dagger}\\) corresponds a QECC (the transformed code \\({\\cal C}^{\\prime}=U{\\cal C}\\)) that can correct the original errors. In this way the fault-tolerance criterion is satisfied. ## IV Encoded \\(SU(2)\\) from Hamiltonians We now begin in earnest our discussion of how to implement universal, fault-tolerant quantum computation on the Pauli-subgroup DFSs. In this section we show how arbitrary single encoded-qubit operations can be implemented fault-tolerantly. We will do so by generating the entire encoded \\(SU(2)\\) group from at most two-qubit Hamiltonians. We assume that the system Hamiltonian is of the general two-qubit form \\[H_{S}=\\sum_{i=1}^{K}\\sum_{\\alpha\\in\\{x,y,z\\}}\\omega_{i}^{\\alpha}\\sigma_{i}^{\\alpha}+\\sum_{i>j=1}^{K}\\sum_{\\alpha,\\beta\\in\\{x,y,z\\}}J_{ij}^{\\alpha\\beta}\\,\\sigma_{i}^{\\alpha}\\otimes\\sigma_{j}^{\\beta}, \\tag{4.1}\\] with controllable parameters \\(\\{\\omega_{i}^{\\alpha}\\}\\), \\(\\{J_{ij}^{\\alpha\\beta}\\}\\). ### Background Suppose we are given an error subgroup \\(Q\\) generated by the elements \\(\\{q_{i}\\}_{i=1}^{|Q|}\\). By the results of paper 1 we know how to identify the corresponding DFS, which is also a stabilizer-code with respect to the errors that anti-commute with \\(Q\\). This QECC aspect will not be needed as long as we are only interested in _storing_ information in this DFS: then the \\(Q\\)-errors will have no effect. However, here we are interested in the more ambitious goal of _computing_ in the presence of the \\(Q\\)-errors, which means that we must be able to implement logic gates. As discussed above, these gates will take the states out of the DFS and expose them to the \\(Q\\)-errors. To be able to compute we will need some basic results from the theory of fault-tolerant quantum computation using stabilizer codes, as developed primarily in Ref. [34]. Let us briefly review these results. The set of operators which commute with the stabilizer themselves form a group called the _normalizer_ of the code, \\(N(Q)\\). These elements are of interest because they are operations which preserve the DFS. Let \\(q\\in Q\\), \\(|\\psi\\rangle\\in{\\rm DFS}(Q)\\); if \\(n\\in N(Q)\\) then \\[q\\left(n|\\psi\\rangle\\right)=nq|\\psi\\rangle=n|\\psi\\rangle, \\tag{4.2}\\] so that \\(n|\\psi\\rangle\\) is in the DFS as well. Clearly, the stabilizer \\(Q\\) is in the normalizer \\(N(Q)\\) and so the only operations which act nontrivially on the subspace are those which are in the normalizer but not in the stabilizer: \\(N(Q)/Q\\).
While this means that these operations can be used to perform useful manipulations on the DFS, it also means that if they act uncontrollably, then they appear as errors that the code _cannot_ detect. As will be seen later on, these are both crucial aspects in our construction. For any Pauli-subgroup stabilizer code, the normalizer is generated by the single qubit \\(\\overline{X}_{i}\\) and \\(\\overline{Z}_{i}\\) operations, where \\(i=1,\\ldots,l\\) labels the _encoded_ qubits [34]. The bar superscript denotes that these are "encoded operations": they perform a bit-flip and a phase-flip on the encoded qubits. The gates \\(\\overline{X}_{i}\\) and \\(\\overline{Z}_{i}\\), however, are by themselves insufficient for universal quantum computation. The usual stabilizer-QECC construction deals with (typically _un_correlated) errors that anticommute with the stabilizer. In this case, in addition to generating the normalizer of the Pauli group \\(N(P_{K})\\), one other operation is needed, such as the Toffoli gate [32]. Such constructions have been covered in several recent publications [32, 33, 34, 35, 41]. However, as emphasized above, the errors here are qualitatively different: not only are they always correlated, rather than anticommuting with the stabilizer, _the errors are the stabilizer itself_. Thus the usual construction does not apply, and we introduce a different approach. We show how to perform universal fault-tolerant quantum computation using the early \\(SU(2)\\)+CNOT construction [37, 39, 42], but applied to encoded (DFS) qubits. ### A Useful Formula: Conjugation by \\(\\pi/4\\) Instead of treating \\(\\overline{X}\\) and \\(\\overline{Z}\\) as gates, as in the usual stabilizer-QECC construction, we employ them as _Hamiltonians_. Since \\(\\overline{X}\\) and \\(\\overline{Z}\\) are in the normalizer, so are \\(\\exp(i\\theta\\overline{X})\\) and \\(\\exp(i\\theta\\overline{Z})\\), and so are any other encoded \\(SU(2)\\) group (denoted \\(\\overline{SU(2)}\\)) operations obtained from them. By applying operations from \\(\\overline{SU(2)}\\) alone we ensure that the code is preserved. To obtain other \\(\\overline{SU(2)}\\) operations from \\(\\overline{X}\\) and \\(\\overline{Z}\\), we use the Euler angle construction [43], which shows that any rotation can be composed out of rotations about only two orthogonal axes: \\[\\exp[-i\\omega({\\bf n}\\cdot\\sigma)/2]=\\exp(-i\\beta\\sigma_{z}/2)\\exp(-i\\theta\\sigma_{y}/2)\\exp(-i\\alpha\\sigma_{z}/2). \\tag{4.3}\\] Here the resulting rotation is by an angle \\(\\omega\\) about the direction specified by the unit vector \\({\\bf n}\\), both of which are functions of \\(\\alpha\\), \\(\\beta\\), and \\(\\theta\\). Using Eq. (4.3) and the mapping \\(\\{\\sigma_{x},\\sigma_{y},\\sigma_{z}\\}\\longmapsto\\{\\overline{X},\\overline{Y},\\overline{Z}\\}\\), we can construct any element of \\(\\overline{SU(2)}\\). To do so, we now derive a form of the Euler angle construction which is particularly relevant to operations with Pauli matrices. Assume that \\(A\\) and \\(B\\) are both tensor products of Pauli matrices (and thus square to identity).
Then: \\[\\exp(-i\\varphi A)B\\exp(+i\\varphi A) = (I\\cos\\varphi-Ai\\sin\\varphi)B(I\\cos\\varphi+Ai\\sin\\varphi) \\tag{4.4}\\] \\[= B\\cos^{2}\\varphi+ABA\\sin^{2}\\varphi-i\\sin\\varphi\\cos\\varphi[A,B]\\] \\[= \\left\\{\\begin{array}{cc}B&\\mbox{if $[A,B]=0$}\\\\ B\\cos 2\\varphi+iBA\\sin 2\\varphi&\\mbox{if $\\{A,B\\}=0$}\\end{array}\\right.\\] For the special case of \\(\\varphi=\\pi/4\\) we define the conjugation with \\(A\\) by \\[T_{A}\\circ\\exp(i\\theta B) \\equiv \\exp(-i\\frac{\\pi}{4}A)\\exp(i\\theta B)\\exp(+i\\frac{\\pi}{4}A) \\tag{4.5}\\] \\[= \\left\\{\\begin{array}{cc}\\exp(i\\theta B)&\\mbox{if $[A,B]=0$}\\\\ \\exp[i\\theta(iBA)]&\\mbox{if $\\{A,B\\}=0$}\\end{array}\\right..\\] This can be understood geometrically as a rotation by \\(\\varphi=\\pi/4\\) about the "axis" \\(A\\), followed by a rotation by \\(\\theta\\) about \\(B\\), followed finally by a \\(-\\pi/4\\) rotation about \\(A\\), resulting overall in a rotation by \\(\\theta\\) about the "axis" \\(iBA\\). All \\(\\varphi=\\pi/4\\) rotations about a Pauli group member are elements of the normalizer of the Pauli group: they take elements in the Pauli group under conjugation to other elements of the Pauli group. Note that the "conjugation-by-\\(\\frac{\\pi}{4}A\\)" operation \\(T_{A}\\circ\\exp(i\\theta B)\\) is equivalent to multiplying \\(B\\) inside the exponent by \\(iA\\) (up to a sign, which can always be adjusted by conjugating with \\(-\\frac{\\pi}{4}A\\) instead). This is very useful, since the elements of the normalizer of any stabilizer can always be written as a tensor product of single qubit Pauli matrices, i.e., as a tensor product of single-body gates. This is exactly the structure that is suggested by Eq. (4.5), and thus it should allow us to construct \\(\\exp(i\\theta\\overline{X})\\) and \\(\\exp(i\\theta\\overline{Z})\\) for any Pauli subgroup using at most two-body interactions. The caveat, however, is that while \\(\\exp(i\\theta\\overline{X})\\) and \\(\\exp(i\\theta\\overline{Z})\\) always preserve the code (since they are in the normalizer), the operations that generate them from Hamiltonians involving at most two-body interactions may corrupt the code, as explained in Sec. III above. Let us then state the challenges ahead: _To show how the Hamiltonians \\(\\overline{X}\\) and \\(\\overline{Z}\\) can be generated using (i) at most two-body interactions, (ii) fault-tolerantly._ ### Simple Example: The Subgroup \\(Q_{4}\\) Let us pause to introduce a simple example illustrating the notion of universal computation using normalizer elements which are two-body Hamiltonians. Our example uses a group whose natural structure is such that the two-body restriction is automatically satisfied. To this end consider the subgroup \\(Q_{4}=\\{I^{\\otimes 4},X^{\\otimes 4},Y^{\\otimes 4},Z^{\\otimes 4}\\}\\), which we studied in detail in paper 1. It is generated by \\(K-l=4-2=2\\) elements (\\(X^{\\otimes 4},Z^{\\otimes 4}\\)), and therefore encodes \\(l=2\\) qubits, with states given by \\[|00\\rangle_{L} = \\frac{1}{\\sqrt{2}}\\left(|0000\\rangle+|1111\\rangle\\right)\\qquad|01\\rangle_{L}=\\frac{1}{\\sqrt{2}}\\left(|1100\\rangle+|0011\\rangle\\right)\\] \\[|10\\rangle_{L} = \\frac{1}{\\sqrt{2}}\\left(|1001\\rangle+|0110\\rangle\\right)\\qquad|11\\rangle_{L}=\\frac{1}{\\sqrt{2}}\\left(|0101\\rangle+|1010\\rangle\\right). \\tag{4.6}\\] These states are easily seen to be \\(+1\\) eigenstates of \\(Q_{4}\\).
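A direct numerical confirmation of this last statement (a sketch of ours, with hypothetical helper names) that the four codewords of Eq. (4.6) are fixed by the stabilizer generators:

```python
# Check that the Eq. (4.6) codewords are +1 eigenstates of X^{ox4} and Z^{ox4}.
import numpy as np
from functools import reduce

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Z": np.array([[1, 0], [0, -1]], dtype=complex)}
kron = lambda s: reduce(np.kron, [P[c] for c in s])

def ket(bits):
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

codewords = [ket("0000") + ket("1111"), ket("1100") + ket("0011"),
             ket("1001") + ket("0110"), ket("0101") + ket("1010")]
for v in codewords:
    v = v / np.linalg.norm(v)
    assert np.allclose(kron("XXXX") @ v, v)
    assert np.allclose(kron("ZZZZ") @ v, v)
print("all four codewords are +1 eigenstates of Q4")
```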
The normalizer in this case contains two \\(\\overline{X}_{i}\\) and \\(\\overline{Z}_{i}\\) operations, one for each encoded qubit: \\[\\overline{X}_{1} = IXXI\\qquad\\overline{Z}_{1}=ZZII\\] \\[\\overline{X}_{2} = XXII\\qquad\\overline{Z}_{2}=IZZI. \\tag{4.7}\\] Indeed, we have, for example \\(\\overline{X}_{1}|a,b\\rangle_{L}=|1-a,b\\rangle_{L}\\) and \\(\\overline{Z}_{1}|a,b\\rangle_{L}=(-1)^{a}|a,b\\rangle_{L}\\), so \\(\\overline{X}_{1}\\) and \\(\\overline{Z}_{1}\\) act, respectively, as a bit flip and a phase flip on the first encoded qubit. As easily checked, \\(\\overline{X}_{i}\\) and \\(\\overline{Z}_{i}\\) commute with \\(Q_{4}\\), so that they keep states within the DFS, as should be the case for normalizer elements. As _Hamiltonians_ \\(\\overline{X}_{i}\\) and \\(\\overline{Z}_{i}\\) are valid two-body interactions and hence can be used directly to generate the encoded \\(SU(2)\\) group on each encoded qubit. That is, \\(\\exp(i\\alpha\\overline{X}_{i})\\) and \\(\\exp(i\\beta\\overline{Z}_{i})\\) can be combined directly, with arbitrary values for the angles \\(\\alpha\\) and \\(\\beta\\), to produce any operation in \\(\\overline{SU(2)}\\) by using the Euler angle formula. For example, we can construct a rotation about the encoded \\(Y_{i}\\) axis by conjugation: \\(\\exp(i\\theta\\overline{Y}_{i})=\\exp(-i\\frac{\\pi}{4}\\overline{X}_{i})\\exp(-i\\theta\\overline{Z}_{i})\\exp(+i\\frac{\\pi}{4}\\overline{X}_{i})\\). We have, therefore, two independent encoded qubits which can be operated upon separately by encoded \\(SU(2)\\) operations. What about coupling between the encoded qubits so that the full \\(\\overline{SU(4)}\\) can be used to do computation? Note that Hamiltonians like \\(\\overline{Z}_{1}\\otimes\\overline{Z}_{2}=ZIZI\\), which are two-body on the encoded qubits, can be implemented directly since they are also two-body on the physical qubits (this is not a generic feature, however, as discussed in Section V.4 below). It is a fundamental theorem of universal quantum computation [37, 39, 42] that the ability to perform \\(SU(2)\\) on two qubits plus the ability to perform _any_ nontrivial two-body _Hamiltonian_ between these qubits is universal over the combined \\(SU(4)\\) of these two qubits. Thus we can perform universal computation on the \\(Q_{4}\\)-DFS. In this case the normalizer elements which perform the \\(\\overline{SU(4)}\\) are all two-body Hamiltonians, and there is no need to apply any new methods in order to perform fault-tolerant computation which preserves this DFS. Anticipating the discussion in Section VI, note that while we have demonstrated universal computation on a single DFS-block, we have not yet addressed how to accomplish this when we have clusters of the \\(Q_{4}\\)-DFSs. This, of course, is necessary to scale up the quantum computer under the \\(Q_{4}\\)-model of decoherence. In order to perform universal fault-tolerant computation with clusters, we must show that these can be coupled in a non-trivial manner. Methods for performing non-trivial couplings between clusters exist for any stabilizer code [34]. In particular, the \\(Q_{4}\\)-DFS is a Calderbank-Shor-Steane (CSS) code, whose clusters can be coupled by performing bit-wise parallel controlled-NOT gates between two clusters of qubits. This implements as desired an encoded controlled-NOT between these clusters.
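The claims of Eq. (4.7), together with the dimension count of Section II, can be bundled into one numerical sanity check (our own sketch; it repeats the small helpers above so as to be self-contained):

```python
# Verify the encoded operations of Eq. (4.7) on the Eq. (4.6) codewords, and the
# Section II dimension count: the projector onto the Q4-fixed subspace has rank 2^l = 4.
import numpy as np
from functools import reduce

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Y": np.array([[0, -1j], [1j, 0]]),
     "Z": np.array([[1, 0], [0, -1]], dtype=complex)}
kron = lambda s: reduce(np.kron, [P[c] for c in s])

def ket(bits):
    v = np.zeros(16, dtype=complex)
    v[int(bits, 2)] = 1.0
    return v

code = {(0, 0): (ket("0000") + ket("1111")) / np.sqrt(2),
        (0, 1): (ket("1100") + ket("0011")) / np.sqrt(2),
        (1, 0): (ket("1001") + ket("0110")) / np.sqrt(2),
        (1, 1): (ket("0101") + ket("1010")) / np.sqrt(2)}

X1bar, Z1bar = kron("IXXI"), kron("ZZII")
for (a, b), v in code.items():
    assert np.allclose(X1bar @ v, code[(1 - a, b)])  # encoded bit flip, qubit 1
    assert np.allclose(Z1bar @ v, (-1) ** a * v)     # encoded phase flip, qubit 1

proj = sum(kron(q) for q in ["IIII", "XXXX", "YYYY", "ZZZZ"]) / 4
assert np.allclose(proj @ proj, proj)                # P is a projector
assert np.isclose(np.trace(proj).real, 2**2)         # DFS dimension 2^l, l = 2
print("Eq. (4.7) verified; DFS dimension =", int(round(np.trace(proj).real)))
```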
In Section VI we will discuss what is needed to make this procedure fault-tolerant. \\(Q_{4}\\) is a special case because of the fact that the normalizer elements are all two-body interactions. In general the normalizer elements will be many-body interactions and more general techniques are needed, to which we turn next. ### Generating \\(\\overline{X}\\) and \\(\\overline{Z}\\) Using At Most Two-Body Interactions We now move on to the general case where the normalizer elements are possibly many-body Pauli operators. Our first task is to show that the "conjugation-by-\\(\\frac{\\pi}{4}A\\)" operation \\(T_{A}\\circ\\exp(i\\theta B)\\) can be used to generate any many-body Hamiltonian inside the exponent using at most two-qubit Hamiltonians. In Section V we show that this is a fault-tolerant procedure if applied correctly to a DFS. Suppose the many-body Pauli Hamiltonian \\(H\\) we want to generate is of the following general form: \\[H=\\sigma_{b}^{\\beta}\\bigotimes_{j\\in{\\cal J}}\\sigma_{j}^{\\alpha_{j}}, \\tag{4.8}\\] where \\({\\cal J}\\) is some index set and \\(b\\notin{\\cal J}\\). From Eq. (4.1) we have at our disposal a single-qubit Hamiltonian \\(\\sigma_{b}^{\\beta}\\), and a set of two-qubit Hamiltonians \\(A_{j}=\\sigma_{b}^{\\gamma_{j}}\\otimes\\sigma_{j}^{\\alpha_{j}}\\) with \\(j\\in{\\cal J}\\) and \\(\\gamma_{j}\\neq\\beta\\). We call the \\(b^{\\rm th}\\) qubit the "base qubit". \\(A_{j}\\) and \\(\\sigma_{b}^{\\beta}\\) agree on one qubit index but differ on the Pauli matrix applied to that qubit, so they anticommute: \\(\\{A_{j},\\sigma_{b}^{\\beta}\\}=0\\). Let \\({\\cal J}(i)\\) denote the \\(i^{\\rm th}\\) element in the index set \\({\\cal J}\\). If we use the "conjugation-by-\\(\\frac{\\pi}{4}A_{{\\cal J}(1)}\\)" operation about \\(\\exp(i\\theta\\sigma_{b}^{\\beta})\\) [recall Eq. (4.5)] we obtain: \\[T_{A_{{\\cal J}(1)}}\\circ\\exp(i\\theta\\sigma_{b}^{\\beta})=\\exp[i\\theta(i\\sigma_{b}^{\\gamma_{{\\cal J}(1)}}\\otimes\\sigma_{{\\cal J}(1)}^{\\alpha_{{\\cal J}(1)}})\\sigma_{b}^{\\beta}]=\\exp[\\pm i\\theta\\sigma_{b}^{\\eta_{1}}\\otimes\\sigma_{{\\cal J}(1)}^{\\alpha_{{\\cal J}(1)}}], \\tag{4.9}\\] where the sign is determined by that of \\(\\varepsilon_{\\gamma_{{\\cal J}(1)}\\beta\\eta_{1}}\\), according to the usual rule for multiplying Pauli matrices: \\[\\sigma^{\\alpha}\\sigma^{\\beta}=\\delta_{\\alpha\\beta}I+i\\varepsilon_{\\alpha\\beta\\gamma}\\sigma^{\\gamma}. \\tag{4.10}\\] Applying the remaining "conjugation-by-\\(\\frac{\\pi}{4}A_{{\\cal J}(i)}\\)" operations, \\(i=2..|{\\cal J}|\\), we obtain \\[T_{A_{{\\cal J}(|{\\cal J}|)}}\\circ\\cdots\\circ T_{A_{{\\cal J}(1)}}\\circ\\exp(i\\theta\\sigma_{b}^{\\beta})=\\exp\\left(\\pm i\\theta\\,\\sigma_{b}^{\\eta}\\otimes\\bigotimes_{j\\in{\\cal J}}\\sigma_{j}^{\\alpha_{j}}\\right). \\tag{4.11}\\] It is clear that by appropriately choosing the sequence of Pauli matrices, i.e., the \\(\\gamma_{{\\cal J}(i)}\\), we can obtain \\(\\eta=\\beta\\). Further, conjugating by \\(-\\frac{\\pi}{4}\\) (instead of \\(+\\frac{\\pi}{4}\\)) allows us to always adjust the sign in the exponent to \\(+\\). Thus the action of this gate sequence is to generate the Hamiltonian \\(H\\), as desired: \\[T_{A_{{\\cal J}(|{\\cal J}|)}}\\circ\\cdots\\circ T_{A_{{\\cal J}(1)}}\\circ\\exp(i\\theta\\sigma_{b}^{\\beta})=\\exp\\left(i\\theta H\\right). \\tag{4.12}\\] An example of this type of gate network (analyzed in detail in Section V.3) is shown in Fig. 1.
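The chain of Eqs. (4.9)-(4.12) is easy to verify numerically. The sketch below is our own (the target \\(H=\\sigma_{1}^{x}\\sigma_{2}^{y}\\sigma_{3}^{z}\\) and all names are hypothetical): it builds a three-body Hamiltonian inside the exponent from one single-qubit gate and two conjugations by two-body Hamiltonians, using a \\(-\\pi/4\\) conjugation in the last step to fix the overall sign, exactly as described above:

```python
# Build exp(i theta X1 Y2 Z3) from exp(i theta X1) by conjugation with the
# two-body Hamiltonians A_{J(1)} = Z1 Y2 and A_{J(2)} = Z1 Z3 (base qubit b = 1).
import numpy as np
from functools import reduce

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Y": np.array([[0, -1j], [1j, 0]]),
     "Z": np.array([[1, 0], [0, -1]], dtype=complex)}
kron = lambda s: reduce(np.kron, [P[c] for c in s])
pexp = lambda t, A: np.cos(t) * np.eye(A.shape[0]) + 1j * np.sin(t) * A  # exp(itA), A^2=I

def T(A, U, sign=+1):
    """Conjugation-by-(sign * pi/4)A of the gate U, as in Eq. (4.5)."""
    return pexp(-sign * np.pi / 4, A) @ U @ pexp(+sign * np.pi / 4, A)

theta = 0.42
# Basic identity of Eq. (4.5): T_A . exp(i theta B) = exp(i theta (iBA)) for {A,B}=0:
A, B = kron("ZYI"), kron("XII")
assert np.allclose(T(A, pexp(theta, B)), pexp(theta, 1j * B @ A))

U = pexp(theta, kron("XII"))       # single-qubit gate on the base qubit
U = T(kron("ZYI"), U)              # -> exp(i theta Y1 Y2)
U = T(kron("ZIZ"), U, sign=-1)     # -> exp(i theta X1 Y2 Z3), sign adjusted
assert np.allclose(U, pexp(theta, kron("XYZ")))
print("exp(i theta X1 Y2 Z3) generated from 1- and 2-body pulses")
```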
Since the elements of the normalizer of any stabilizer can always be written as a tensor product of single qubit Pauli matrices, Eq. (4.12) gives a constructive way of generating these normalizer elements as _Hamiltonians_ (i.e., appearing as arguments in the exponent). We have thus met the first challenge mentioned above: we have shown how to generate the Hamiltonians \\(\\overline{X}\\) and \\(\\overline{Z}\\) using at most 2-body interactions. More generally, Eq. (4.12) can be considered as a constructive procedure for generating desired many-body Hamiltonians from given two-body interactions. Finally, we note that it is perfectly possible to replace the central single-qubit Hamiltonian with a two-qubit one, specifically by \\(A_{{\\cal J}(1)}\\sigma_{b}^{\\beta}\\). This may be more convenient for practical applications, where control of two-body interactions may be more easily achievable (as in the case of exchange interactions in quantum dots [44]). This change would not affect our fault-tolerance analysis in the next sections. ## V Generating Encoded \\(SU(2)\\) Fault-Tolerantly for Any Abelian Pauli Subgroup We are now ready to show how to generate encoded \\(SU(2)\\) operations fault-tolerantly for any Pauli error-subgroup. Let \\(Q\\) be such a subgroup, generated by the elements \\(\\{q_{i}\\}_{i=1}^{n}\\), \\(|Q|=2^{n}\\). Recall that here these elements play the dual role of errors and of defining the DFS by fixing its elements. A new (transformed) stabilizer is obtained after each application of a gate \\(\\exp(i\\varphi_{j}A_{j})\\). To this sequence of stabilizers corresponds a sequence of stabilizer-QECCs \\({\\cal C}_{j}\\). Our strategy will be to find conditions on the Hamiltonians \\(\\{A_{j}\\}\\) such that after each gate application, the then-current QECC is able to correct the original \\(Q\\)-errors. Let \\(Q_{j}\\) [\\(N(Q_{j})\\)] denote the stabilizer [normalizer] obtained after application of the gate \\(U_{j}=\\exp(i\\varphi_{j}A_{j})\\). If \\(\\varphi_{j}\\) is an integer multiple of \\(\\pi/4\\) (as we will always assume) then there are only three mutually exclusive possibilities for the errors \\(e\\in Q\\) (we use the notations \\(e\\) and \\(q\\) for members of \\(Q\\) to emphasize the error and stabilizer element aspects, respectively): 1. \\(e\\in Q_{j}\\): The error is part of the transformed stabilizer. In this case the transformed code is immune to \\(e\\) (i.e., the transformed code is a DFS with respect to \\(e\\)), and there is no problem. 2. \\(e\\) anticommutes with some element of \\(Q_{j}\\): The error is detectable by the transformed code. 3. \\(e\\in N(Q_{j})/Q_{j}\\) (i.e., \\(e\\) commutes with \\(Q_{j}\\) but is not in it): The error infiltrated the transformed normalizer. This is a problem since the error is _un_detectable by the transformed code, and acts on it in a non-trivial manner. Suppose the errors \\(e\\in Q\\) are exclusively of type 1. or 2. Then those that are of type 2. are not only detectable but also correctable. This is so because they form a group (\\(Q\\)), and therefore any product of two errors is again either of type 1. or 2., which is exactly the error correction criterion.6 Thus the problematic case is 3., and this is the case we focus on in order to make a prudent choice of Hamiltonians \\(A_{j}\\). To simplify the notation, from now on we shall denote \\(N(Q_{j})/Q_{j}\\) simply by \\(N_{j}\\) (and by \\(N\\) when \\(Q_{j}=Q\\)), and refer to this as the normalizer (without risk of confusion).
Footnote 6: Note that this is not true for errors in the usual stabilizer-QECC case, where the errors do not close as a group under multiplication. Is there a simple criterion to check whether \\(e\\in N_{j}\\)? The answer is contained in the following theorem: _Theorem 2. --_ Given are a Pauli-subgroup of errors \\(Q\\), its normalizer \\(N\\), and a sequence of their images \\(\\{Q_{j}\\}\\) and \\(\\{N_{j}\\}\\) under conjugation by unitaries \\(\\{U_{j}\\}\\). Corresponding to \\(Q\\) is a DFS (code) \\({\\cal C}\\). A sufficient condition so that no \\(e\\in Q\\) is ever in \\(N_{j}\\) is that either (i) each \\(n_{j}\\in N_{j}\\) equals its source in \\(N\\), or (ii) for each \\(n_{j}\\in N_{j}\\) there exists \\(m\\in N\\) such that \\(\\{n_{j},m\\}=0\\). Then the transformed codes \\({\\cal C}_{j}=U_{j}{\\cal C}_{j-1}\\) (\\({\\cal C}_{1}=U_{1}{\\cal C}\\)) can always correct the original \\(Q\\)-errors. _Proof. --_ The normalizer is, by definition, the set of operations that commute with the stabilizer. Let us denote this by \\(N=Q^{\\prime}\\). What is \\(N^{\\prime}\\) (the set of operations that commute with the normalizer)? We have \\(N^{\\prime}=(Q^{\\prime})^{\\prime}\\), and claim that \\((Q^{\\prime})^{\\prime}=Q\\).7 In other words, the only operations that commute with the normalizer are those in the stabilizer. Now let \\(n_{j}\\) be the image of \\(n\\in N\\) after the \\(j^{\\rm th}\\) transformation. The observation \\(N^{\\prime}=Q\\) allows us to exclude case 3. by checking if, for every \\(n_{j}\\in N_{j}\\) (where \\(n_{j}\\neq n\\)), there exists \\(m\\in N\\) that \\(n_{j}\\) anti-commutes with. To see this, note first that if \\(n_{j}=n\\) then by definition \\(n_{j}\\) cannot be in \\(Q\\). Secondly, for an \\(n_{j}\\in N_{j}\\) that differs from its source in \\(N\\), assume that it anti-commutes with some \\(m\\in N\\). This implies that \\(n_{j}\\) is not in the commutant of \\(N\\), and is therefore not in \\(Q\\). If this is true for all \\(n_{j}\\in N_{j}\\) then we have covered the entire new normalizer \\(N_{j}\\) and not found an element of \\(Q\\) in it. This guarantees that no element of the original stabilizer \\(Q\\) becomes a member of the new normalizer \\(N_{j}\\). QED. Note that if the conditions of the theorem are satisfied then _all_ elements of the original stabilizer are excluded from the transformed normalizer. Therefore also all products of stabilizer elements are excluded (since the stabilizer is a group), so that all stabilizer-errors are both detectable and _correctable_. Below we make repeated use of the result of Theorem 2. The first application is to show how to construct two-body Hamiltonians \\(\\{A_{j}\\}\\) which can be applied in succession to produce arbitrary normalizer elements, such that at every point the theorem is satisfied. To this end we need a basic result from the theory of stabilizer codes, regarding a standard form for the normalizer. We then illustrate the general construction with the relatively simple case of CSS codes, and finally move on to general stabilizer errors.
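The key step in the proof, \\(N^{\\prime}=Q\\), can be checked symbolically for small examples. Here is a toy sketch of ours, using the two-qubit stabilizer \\(Q=\\{II,XX\\}\\) (not an example from the paper); since two Pauli strings commute iff they differ on an even number of non-identity positions, the commutant computations reduce to simple string counting:

```python
# Illustrate N' = Q (commutant of the normalizer is the stabilizer) for Q = {II, XX}.
import itertools

def anticommute(p, q):
    """Pauli strings anticommute iff they clash on an odd number of positions."""
    clashes = sum(1 for a, b in zip(p, q) if "I" not in (a, b) and a != b)
    return clashes % 2 == 1

paulis = ["".join(s) for s in itertools.product("IXYZ", repeat=2)]
Q = {"II", "XX"}
N = [p for p in paulis if all(not anticommute(p, q) for q in Q)]  # normalizer N(Q)
N_prime = {p for p in paulis if all(not anticommute(p, n) for n in N)}
assert N_prime == Q
print("N(Q) =", N, "; N' =", sorted(N_prime))
```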
### Standard Form of the Normalizer for Stabilizer Codes It is shown in [46] that, due to the fact that the normalizer is invariant under multiplication by stabilizer elements, the normalizer of every stabilizer code can be brought into the following standard form: \\[\\overline{Z}_{j} = (\\underbrace{I\\otimes\\cdots\\otimes I\\otimes Z_{j}\\otimes I\\otimes\\cdots\\otimes I}_{l})\\otimes\\underbrace{\\left(M_{Z}^{j}\\right)}_{r}\\otimes\\underbrace{\\left(I\\otimes\\cdots\\otimes I\\right)}_{K-l-r} \\tag{5.1}\\] \\[\\overline{X}_{j} = (\\underbrace{I\\otimes\\cdots\\otimes I\\otimes X_{j}\\otimes I\\otimes\\cdots\\otimes I}_{l})\\otimes\\underbrace{\\left(N_{Z}^{j}\\right)}_{r}\\otimes\\underbrace{\\left(M_{X}^{j}\\right)}_{K-l-r}. \\tag{5.2}\\] Here \\(M_{Z}^{j}=\\otimes_{n\\in{\\cal Z}_{j}}Z_{n}\\), \\(N_{Z}^{j}=\\otimes_{n^{\\prime}\\in{\\cal Z}_{j}^{\\prime}}Z_{n^{\\prime}}\\) and \\(M_{X}^{j}=\\otimes_{i\\in{\\cal X}_{j}}X_{i}\\), where \\({\\cal Z}_{j}\\), \\({\\cal Z}_{j}^{\\prime}\\) and \\({\\cal X}_{j}\\) are (possibly empty) index sets of lengths \\(r\\), \\(r\\) and \\(K-l-r\\) respectively (i.e., \\(M_{Z}^{j}\\), \\(N_{Z}^{j}\\) and \\(M_{X}^{j}\\) are tensor products of \\(I\\)'s and single qubit Pauli \\(Z\\) and \\(X\\) matrices, respectively). Recall that \\(K\\) is the number of physical qubits; \\(l\\) is the number of encoded qubits. The exact form of \\(M_{Z}^{j}\\), \\(N_{Z}^{j}\\) and \\(M_{X}^{j}\\), as well as the value of the integer \\(r\\), can be found from the stabilizer [46], but is unimportant for our purposes. We only need the result that for every pair of encoded \\(Z\\) and \\(X\\) operations, acting on the \\(j^{\\rm th}\\) encoded qubit, it is possible to express the operations in the blockwise product shown in Eqs. (5.1),(5.2). ### CSS-Stabilizer Errors on One Encoded Qubit For simplicity, let us now restrict attention to the case of a single encoded qubit in CSS codes, i.e., those codes where every \\(\\overline{Z}\\) and \\(\\overline{X}\\) can be written as a product of only \\(Z\\)'s and only \\(X\\)'s, respectively. Then, from Eqs. (5.1),(5.2) the standard form is (dropping the \\(j\\) index): \\[\\overline{Z} = Z_{1}\\otimes M_{Z}\\otimes I^{\\otimes K-l-r} \\tag{5.3}\\] \\[\\overline{X} = X_{1}\\otimes I^{\\otimes r}\\otimes M_{X}, \\tag{5.4}\\] i.e., \\(N_{Z}=I^{\\otimes r}\\). Our goal is to construct such \\(\\overline{Z}\\) and \\(\\overline{X}\\) from single- and two-body Hamiltonians. We shall do this by starting from the single-body Hamiltonians \\(Z_{1}\\) and \\(X_{1}\\), and conjugating by certain two-body Hamiltonians. The idea is to successively construct the \\(Z\\)'s in \\(M_{Z}\\) and the \\(X\\)'s in \\(M_{X}\\). We claim that the required two-body Hamiltonians have the natural form \\[A_{n} = X_{1}Z_{z_{n}}\\qquad z_{n}\\in{\\cal Z} \\tag{5.5}\\] \\[B_{i} = Z_{1}X_{x_{i}}\\qquad x_{i}\\in{\\cal X}, \\tag{5.6}\\] where \\(n=1..|{\\cal Z}|\\) and \\(i=1..|{\\cal X}|\\), i.e., \\(A_{n}\\) (\\(B_{i}\\)) has a \\(Z\\) (\\(X\\)) in the \\(n^{\\rm th}\\) (\\(i^{\\rm th}\\)) position of the index set \\({\\cal Z}\\) (\\({\\cal X}\\)).
If there is an even number of \\(Z\\)'s in \\(\\overline{Z}\\) then the last Hamiltonian should be taken as \\(A_{|{\\cal Z}|}=X_{1}\\) [since as we show below in that case in the penultimate step we have \\(Y_{1}\\otimes M_{Z}\\otimes I^{\\otimes K-l-r}\\) for \\(\\overline{Z}\\)], and similarly for the last \\(B_{i}\\).8 Note that \\([A_{n},\\overline{X}]=[B_{i},\\overline{Z}]=0\\), so that transforming \\(\\overline{Z}\\) does not affect \\(\\overline{X}\\), and _vice versa_. There are now two ways to construct \\(\\overline{Z}\\) and \\(\\overline{X}\\) fault-tolerantly: in parallel or in series. The parallel implementation has the advantage that it requires only three basic steps and thus is very efficient. Its disadvantage is that it may be hard to implement in practice because it requires simultaneous control over many qubits. #### v.2.1 Series Construction We assume throughout this discussion that we wish to generate \\(\\exp(i\\theta\\overline{Z})\\). The symmetry between \\(\\overline{Z}\\) and \\(\\overline{X}\\) in the CSS case implies that our arguments hold for \\(\\exp(i\\theta\\overline{X})\\) as well, with obvious modifications. The series construction consists of applying first the sequence of gates \\(\\{\\exp(i\\frac{\\pi}{4}A_{n})\\}_{n=1}^{|\\mathcal{Z}|}\\), then the gate \\(\\exp(i\\theta Z_{1})\\), and then the reverse sequence of gates \\(\\{\\exp(-i\\frac{\\pi}{4}A_{n})\\}_{n=|\\mathcal{Z}|}^{1}\\). An example is shown in Fig. 1. First, as an application of the general Eq. (4.12), let us prove that this procedure indeed generates \\(\\exp(i\\theta\\overline{Z})\\): \\[T_{A_{1}}\\circ T_{A_{2}}\\circ\\cdots T_{A_{|\\mathcal{Z}|}}\\circ\\exp(i\\theta Z_{1}) = \\left[\\prod_{n=1}^{|\\mathcal{Z}|}\\exp\\left(-i\\frac{\\pi}{4}A_{n}\\right)\\right]\\exp(i\\theta Z_{1})\\left[\\prod_{n=|\\mathcal{Z}|}^{1}\\exp\\left(+i\\frac{\\pi}{4}A_{n}\\right)\\right] \\tag{5.7}\\] \\[= \\exp\\left[i\\theta(i^{|\\mathcal{Z}|}\\prod_{n=1}^{|\\mathcal{Z}|}A_{n}Z_{1})\\right]\\] \\[= \\exp\\left[(-)^{|\\mathcal{Z}|}i\\theta\\overline{Z}\\right],\\] where in the first line we used the definition of the "conjugation-by-\\(\\frac{\\pi}{4}A_{n}\\)" operation, in the second the result that this operation corresponds to multiplication inside the exponent, and in the third the form in Eq. (5.3) for \\(\\overline{Z}\\). Note that the reason we have a series of conjugation-by-\\(\\frac{\\pi}{4}A_{n}\\) operations (as opposed to trivial identity operations) is that \\(\\{\\prod_{n=1}^{k-1}A_{n}Z_{1},A_{k}\\}=0\\)\\ \\(\\forall k\\leq|\\mathcal{Z}|\\). Finally, we can eliminate the minus sign [if \\(|\\mathcal{Z}|\\) is odd] by changing one of the \\(\\frac{\\pi}{4}\\)'s to \\(-\\frac{\\pi}{4}\\). Next we must demonstrate that the conditions of Theorem 2 are satisfied at each point in the corresponding circuit in order to guarantee the fault-tolerance of this implementation. Let us divide the proof into three parts, by following the transformations of the normalizer elements before and after the central \\(\\exp(i\\theta Z_{1})\\) gate, and showing that either \\(\\overline{Z}\\) or \\(\\overline{X}\\) anticommutes with the transformed normalizer at each step along the way. (i) Errors before the central gate: After application of the first \\(k\\) gates \\(\\{\\exp(i\\frac{\\pi}{4}A_{n})\\}_{n=1}^{k}\\), \\(\\overline{Z}\\) is transformed to \\(\\overline{Z}^{(k)}\\equiv\\prod_{n=1}^{k}A_{n}\\overline{Z}\\) (we neglect the unimportant factors of \\(i\\) from now on). From Eq.
(5.5) the product is: \\[\\prod_{n=1}^{k}A_{n}=\\left\\{\\begin{array}{cc}\\prod_{n=1}^{k}Z_{z_{n}}&\\text{if}\\quad k=2l\\\\ X_{1}\\prod_{n=1}^{k}Z_{z_{n}}&\\text{if}\\quad k=2l+1\\end{array}\\right.. \\tag{5.8}\\] Therefore, using the standard form: \\[\\left(\\prod_{n=1}^{2l}A_{n}\\overline{Z}\\right)\\overline{X}=\\prod_{n=1}^{2l}Z_{z_{n}}\\overline{Z}\\overline{X}=-\\overline{X}\\prod_{n=1}^{2l}Z_{z_{n}}\\overline{Z}=-\\overline{X}\\left(\\prod_{n=1}^{2l}A_{n}\\overline{Z}\\right), \\tag{5.9}\\] so that \\(\\{\\overline{Z}^{(2l)},\\overline{X}\\}=0\\). On the other hand \\[\\left(\\prod_{n=1}^{2l+1}A_{n}\\overline{Z}\\right)\\overline{Z}=X_{1}\\prod_{n=1}^{2l+1}Z_{z_{n}}\\overline{Z}\\overline{Z}=-\\overline{Z}X_{1}\\prod_{n=1}^{2l+1}Z_{z_{n}}\\overline{Z}=-\\overline{Z}\\left(\\prod_{n=1}^{2l+1}A_{n}\\overline{Z}\\right), \\tag{5.10}\\] so that \\(\\{\\overline{Z}^{(2l+1)},\\overline{Z}\\}=0\\). Thus Theorem 2 is satisfied after each gate application, with \\(\\overline{Z}\\) and \\(\\overline{X}\\) alternating in the role of the anticommuting original-normalizer element. (ii) Error immediately after the central gate: At the end of step (i) \\(\\overline{Z}\\) has been transformed to \\(Z_{1}\\). Since the central gate (\\(\\theta\\)-rotation) uses only \\(Z_{1}\\), the transformed \\(\\overline{Z}\\) does not change. Therefore it still anticommutes with the original \\(\\overline{X}\\) and satisfies the criterion of Theorem 2. For the same reason, however, \\(\\overline{X}\\) is transformed by the central gate: \\[\\overline{X}\\longmapsto\\overline{X}_{\\theta}=\\overline{X}\\cos(2\\theta)+i\\overline{X}Z_{1}\\sin(2\\theta). \\tag{5.11}\\] Thus it suffices to show that \\(\\overline{X}_{\\theta}\\) anticommutes with \\(\\overline{Z}\\), which is true since \\([\\overline{Z},Z_{1}]=0\\): \\[\\overline{X}_{\\theta}\\overline{Z}=\\overline{X}\\overline{Z}\\cos(2\\theta)+i\\overline{X}Z_{1}\\overline{Z}\\sin(2\\theta)=-\\overline{Z}\\overline{X}\\cos(2\\theta)-i\\overline{Z}\\overline{X}Z_{1}\\sin(2\\theta)=-\\overline{Z}\\overline{X}_{\\theta}. \\tag{5.12}\\] (iii) Errors after the central gate: After application of the first \\(k^{\\prime}\\) inverse gates \\(\\{\\exp(-i\\frac{\\pi}{4}A_{n})\\}_{n=|\\mathcal{Z}|}^{|\\mathcal{Z}|-k^{\\prime}+1}\\), \\(Z_{1}\\) is transformed to \\(\\overline{Z}^{\\prime(k^{\\prime})}\\equiv\\prod_{n=|\\mathcal{Z}|-k^{\\prime}+1}^{|\\mathcal{Z}|}A_{n}Z_{1}\\), which is of the same form as the operators encountered in (i); therefore the same reasoning as in (i) applies to \\(\\overline{Z}^{\\prime(k^{\\prime})}\\). As for \\(\\overline{X}\\) (which is now \\(\\overline{X}_{\\theta}\\)), the \\(\\overline{X}\\cos(2\\theta)\\) component commutes with the inverse gates \\(\\exp(-i\\frac{\\pi}{4}A_{n})\\) so that it does not change. The \\(i\\overline{X}Z_{1}\\sin(2\\theta)\\) component, however, anticommutes with the \\(A_{n}\\) of the inverse gates, and therefore it flips back and forth between \\(i\\overline{X}Z_{1}\\sin(2\\theta)\\) and \\(i\\overline{X}Y_{1}\\sin(2\\theta)\\). These terms anticommute with the original \\(\\overline{Z}\\) and \\(\\overline{Y}\\), respectively. But so does the \\(\\overline{X}\\cos(2\\theta)\\) component, so their sum anticommutes alternately with the original \\(\\overline{Z}\\) and \\(\\overline{Y}\\). We conclude that Theorem 2 is satisfied at each stage of the circuit. Therefore the series construction is indeed fault-tolerant.
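The alternating anticommutation pattern of Eqs. (5.9)-(5.10) is easy to follow symbolically. Below is a toy 3-qubit example of ours (\\(\\overline{Z}=ZZZ\\), \\(\\overline{X}=XII\\), \\(A_{1}=X_{1}Z_{2}\\), \\(A_{2}=X_{1}Z_{3}\\); not a code from the paper), tracking \\(\\overline{Z}^{(k)}\\) through the first half of the series circuit and checking Theorem 2 at each step:

```python
# Track Zbar^(k) = prod_n A_n Zbar (phases ignored) and verify that it anticommutes
# with the original Zbar for odd k and with the original Xbar for even k.
MULT = {("X", "Z"): "Y", ("Z", "X"): "Y", ("X", "Y"): "Z", ("Y", "X"): "Z",
        ("Y", "Z"): "X", ("Z", "Y"): "X"}

def mult(p, q):
    """Product of two Pauli strings, up to an overall phase."""
    return "".join(b if a == "I" else a if b == "I" else
                   "I" if a == b else MULT[(a, b)] for a, b in zip(p, q))

def anticommute(p, q):
    clashes = sum(1 for a, b in zip(p, q) if "I" not in (a, b) and a != b)
    return clashes % 2 == 1

Zbar, Xbar = "ZZZ", "XII"
A = ["XZI", "XIZ"]                      # A_1 = X1 Z2, A_2 = X1 Z3
transformed = Zbar
for k, a in enumerate(A, start=1):
    transformed = mult(a, transformed)
    partner = Zbar if k % 2 == 1 else Xbar
    assert anticommute(transformed, partner)
    print(f"Zbar^({k}) = {transformed}, anticommutes with {partner}")
```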
Of course, this fault-tolerance is achieved in practice by supplementing the circuit with error-detection and correction procedures after each gate (the parallel construction discussed next is much more economical for this reason). We discuss this issue in Section VII. #### v.2.2 Parallel Construction Since the \\(A_{n}\\) (\\(B_{i}\\)) all commute, the corresponding gates can also be implemented _in parallel_. That is, \\[U_{A} \\equiv \\prod_{n\\in\\mathcal{Z}}\\exp\\left(i\\frac{\\pi}{4}A_{n}\\right)=\\exp\\left(i\\frac{\\pi}{4}\\sum_{n\\in\\mathcal{Z}}A_{n}\\right)\\] \\[U_{B} \\equiv \\prod_{i\\in\\mathcal{X}}\\exp\\left(i\\frac{\\pi}{4}B_{i}\\right)=\\exp\\left(i\\frac{\\pi}{4}\\sum_{i\\in\\mathcal{X}}B_{i}\\right) \\tag{5.13}\\] can be used as parallel gates in our circuit (see Fig. 2 for an example). To see directly that this circuit really does implement the normalizer gate \\(\\exp(i\\theta\\overline{Z})\\) [or \\(\\exp(i\\theta\\overline{X})\\)], observe that, by definition \\(\\{A_{n},Z_{1}\\}=\\{B_{i},X_{1}\\}=0\\) for all \\(n\\) and \\(i\\). This means that conjugation of \\(Z_{1}\\) by \\(U_{A}\\) will act as multiplication by \\(\\prod_{n\\in\\mathcal{Z}}A_{n}\\) and thus transform \\(Z_{1}\\) to \\(\\overline{Z}\\) (without changing \\(X_{1}\\)). The same is true for \\(X_{1}\\) by changing \\(Z\\)'s to \\(X\\)'s and \\(U_{A}\\) to \\(U_{B}\\). Therefore \\(U_{A}Z_{1}U_{A}^{\\dagger}=\\overline{Z}\\) and \\(U_{B}X_{1}U_{B}^{\\dagger}=\\overline{X}\\), from which follows immediately by Taylor expansion that: \\[U_{A}\\exp(i\\theta Z_{1})U_{A}^{\\dagger} = \\exp(i\\theta\\overline{Z})\\] \\[U_{B}\\exp(i\\theta X_{1})U_{B}^{\\dagger} = \\exp(i\\theta\\overline{X}). \\tag{5.14}\\] This too is a fault-tolerant construction. The reason is that it corresponds to looking at the series construction just at the following three points: right before the central gate, right after the central gate, and the end. ### Example: The Subgroup \\(Q_{2X}\\) As an example with a many-body normalizer element, consider the Pauli subgroup/stabilizer generated by the errors \\(XXII\\), \\(IXXI\\), \\(IIXX\\): \\[Q_{2X}=\\{IIII,XXII,XIIX,IIXX,XIXI,IXXI,IXIX,XXXX\\}. \\tag{5.15}\\] It describes a physically interesting error-model, of bit-flip errors which act on all pairs of nearest neighbor qubits. This situation is of interest, e.g., when decoherence results from spin-rotation coupling in a dipolar Hamiltonian, typical in NMR [47]: \\[H_{I}=\\sum_{j,k}\\frac{\\gamma_{j}\\gamma_{k}}{r_{jk}^{3}}\\left[\\sigma_{j}\\cdot\\sigma_{k}-3\\left(\\sigma_{j}\\cdot\\hat{\\bf r}_{jk}\\right)\\left(\\sigma_{k}\\cdot\\hat{\\bf r}_{jk}\\right)\\right]. \\tag{5.16}\\] Here \\(\\gamma_{j}\\) is the gyromagnetic ratio of spin \\(j\\), \\(r_{jk}\\) is the distance between spins \\(j\\) and \\(k\\), and \\(\\hat{\\bf r}_{jk}\\) is the unit vector along the line connecting them. In the anisotropic case (e.g., a liquid crystal) this can be rewritten as \\[H_{I}=\\sum_{j,k}\\frac{\\gamma_{j}\\gamma_{k}}{r_{jk}^{3}}\\sum_{\\alpha,\\beta=-1}^{1}g_{jk}^{\\alpha\\beta}\\left(\\sigma_{j}^{\\alpha}\\otimes\\sigma_{k}^{\\beta}\\right)Y_{2}^{-\\alpha-\\beta}, \\tag{5.17}\\] where \\(Y_{l}^{m}\\) are the spherical harmonics and \\(g_{jk}^{\\alpha\\beta}\\) is the anisotropy tensor. When \\(g_{jk}^{\\alpha\\beta}=\\delta_{\\alpha 0}\\delta_{\\beta 0}g_{jk}\\) only the \\(\\sigma_{j}^{z}\\otimes\\sigma_{k}^{z}\\) terms remain (coupled to \\(Y_{2}^{0}\\)), which leads to decoherence described by the subgroup \\(Q_{2Z}\\) (defined similarly to \\(Q_{2X}\\)), analyzed in paper 1.
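Before working through the \\(Q_{2X}\\) example, here is a minimal numerical check of the parallel construction, Eqs. (5.13)-(5.14), on a hypothetical 3-qubit target \\(\\overline{Z}=ZZZ\\) with \\(A_{1}=X_{1}Z_{2}\\), \\(A_{2}=X_{1}Z_{3}\\) (a sketch of ours; one \\(\\pi/4\\) sign is flipped, as the text permits, so that the overall sign comes out \\(+\\)):

```python
# Parallel construction: a single pulse U_A = exp(i pi/4 (A_1 - A_2)) conjugates
# exp(i theta Z1) into exp(i theta ZZZ). The A_n commute, so this is one gate.
import numpy as np
from functools import reduce

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Z": np.array([[1, 0], [0, -1]], dtype=complex)}
kron = lambda s: reduce(np.kron, [P[c] for c in s])
pexp = lambda t, A: np.cos(t) * np.eye(A.shape[0]) + 1j * np.sin(t) * A

A1, A2, Z1, Zbar = kron("XZI"), kron("XIZ"), kron("ZII"), kron("ZZZ")
assert np.allclose(A1 @ A2, A2 @ A1)                 # the A_n commute
UA = pexp(np.pi / 4, A1) @ pexp(-np.pi / 4, A2)      # one parallel pulse
assert np.allclose(UA @ Z1 @ UA.conj().T, Zbar)      # U_A Z_1 U_A^dag = Zbar
theta = 0.81
assert np.allclose(UA @ pexp(theta, Z1) @ UA.conj().T, pexp(theta, Zbar))
print("parallel construction verified")
```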
To find the DFS under \\(Q_{2X}\\), we construct in accordance with the techniques of paper 1 the projector \\(P=\\frac{1}{8}\\sum_{i}q_{i}\\) (corresponding to the identity irrep of \\(Q_{2X}\\)), where the sum is over all \\(q_{i}\\in Q_{2X}\\). Applying this projector to an arbitrary initial state we find a 2-dimensional DFS, spanned by the states \\[\\left|0_{L}\\right\\rangle = \\left(\\left|0000\\right\\rangle+\\left|0011\\right\\rangle+\\left|0101\\right\\rangle+\\left|0110\\right\\rangle+\\left|1001\\right\\rangle+\\left|1010\\right\\rangle+\\left|1100\\right\\rangle+\\left|1111\\right\\rangle\\right)/\\sqrt{8},\\] \\[\\left|1_{L}\\right\\rangle = \\left(\\left|0001\\right\\rangle+\\left|0010\\right\\rangle+\\left|0100\\right\\rangle+\\left|0111\\right\\rangle+\\left|1000\\right\\rangle+\\left|1011\\right\\rangle+\\left|1101\\right\\rangle+\\left|1110\\right\\rangle\\right)/\\sqrt{8}. \\tag{5.18}\\] This DFS thus encodes a full qubit. Since for \\(Q_{2X}\\) there is just one encoded qubit, we expect to find just one \\(\\overline{X}\\) and one \\(\\overline{Z}\\). In the case of \\(Q_{2X}\\) it is easily verified that the normalizer is generated by \\[\\overline{X} = XIII,\\] \\[\\overline{Z} = ZZZZ. \\tag{5.19}\\] \\(\\overline{X}\\) is already a single-body Hamiltonian and therefore can be implemented directly. Let us show how \\(\\overline{Z}\\) can be implemented as a Hamiltonian using at most two-body interactions. Note that \\(Q_{2X}\\) supports a CSS code. Comparing the above expressions for \\(\\overline{Z}\\) to the standard form for CSS normalizer elements [Eq. (5.3)], we have \\(M_{Z}=Z_{2}Z_{3}Z_{4}\\) and \\(M_{X}=\\emptyset\\). Therefore, from the recipe of Eq. (5.5): \\(A_{n}=X_{1}Z_{n+1}\\) for \\(n=1..3\\), while \\(A_{4}=X_{1}\\). The series-circuit implementing \\(\\exp(i\\theta\\overline{Z})\\) thus has the form depicted in Fig. 1. The parallel version of the same circuit is shown in Fig. 2. To verify directly that these circuits indeed implement \\(\\exp(i\\theta\\overline{Z})\\) use Eq. (4.12) and choose the base qubit to be the first qubit. Then: \\[T_{XZII}\\circ T_{XIZI}\\circ T_{XIIZ}\\circ T_{XIII}\\circ\\exp(i\\theta ZIII)=\\exp(i\\theta ZZZZ). \\tag{5.20}\\] As required, this is an implementation that uses at most two-body interactions. Fig. 1 also shows the transformed \\(\\overline{Z}\\) at each point, and directly below the original normalizer element (\\(\\overline{X}\\) or \\(\\overline{Z}\\)) that this transformed normalizer element anticommutes with. This verifies that the circuit is indeed a fault-tolerant implementation of \\(\\exp(i\\theta\\overline{Z})\\) for \\(Q_{2X}\\). ### CSS-Stabilizer Errors on Multiple Encoded Qubits The CSS case of more than one encoded qubit is a simple extension of the single encoded qubit case discussed above. From Eqs. (5.1),(5.2) the standard form for a CSS code is now: \\[\\overline{Z}_{j} = Z_{j}\\otimes M_{Z}^{j}\\otimes I^{\\otimes K-l-r} \\tag{5.21}\\] \\[\\overline{X}_{j} = X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}^{j}. \\tag{5.22}\\] Operations on different encoded qubits \\(j\\),\\(j^{\\prime}\\) commute. Therefore the single encoded qubit construction still holds when the Hamiltonians are modified to read \\[A_{n}^{(j)} = X_{j}Z_{z_{n}}\\qquad z_{n}\\in{\\cal Z}_{j} \\tag{5.23}\\] \\[B_{i}^{(j)} = Z_{j}X_{x_{i}}\\qquad x_{i}\\in{\\cal X}_{j}. \\tag{5.24}\\] As is easily checked, the entire proof for the single encoded qubit case carries through when the base qubit becomes physical qubit number \\(j\\) instead of number 1.
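Both the DFS states of Eq. (5.18) and the circuit identity Eq. (5.20) can be confirmed numerically; the sketch below is ours (with the conjugation helper from Section IV repeated for self-containment):

```python
# Verify the Q_2X example: |0_L>, |1_L> are fixed by the stabilizer generators,
# and the series circuit of Eq. (5.20) implements exp(i theta ZZZZ).
import numpy as np
from functools import reduce

P = {"I": np.eye(2, dtype=complex),
     "X": np.array([[0, 1], [1, 0]], dtype=complex),
     "Z": np.array([[1, 0], [0, -1]], dtype=complex)}
kron = lambda s: reduce(np.kron, [P[c] for c in s])
pexp = lambda t, A: np.cos(t) * np.eye(A.shape[0]) + 1j * np.sin(t) * A

def T(A, U):
    """Conjugation-by-pi/4 A, Eq. (4.5)."""
    return pexp(-np.pi / 4, A) @ U @ pexp(+np.pi / 4, A)

# DFS states: uniform superpositions over even- and odd-parity basis states.
v0 = np.array([1.0 if bin(b).count("1") % 2 == 0 else 0.0 for b in range(16)]) / np.sqrt(8)
v1 = np.array([1.0 if bin(b).count("1") % 2 == 1 else 0.0 for b in range(16)]) / np.sqrt(8)
for g in ["XXII", "IXXI", "IIXX"]:
    assert np.allclose(kron(g) @ v0, v0) and np.allclose(kron(g) @ v1, v1)

# Series circuit of Eq. (5.20), innermost conjugation (T_{XIII}) applied first:
theta = 0.5
U = pexp(theta, kron("ZIII"))
for a in ["XIII", "XIIZ", "XIZI", "XZII"]:
    U = T(kron(a), U)
assert np.allclose(U, pexp(theta, kron("ZZZZ")))
print("Eq. (5.18) and Eq. (5.20) verified")
```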
This allows us to fault-tolerantly implement \\(\\overline{SU(2)}^{\\otimes l}\\) on all \\(l\\) encoded qubits. To couple encoded qubits within the same block [thus generating \\(\\overline{SU(2^{l})}\\)], one could use a standard trick from stabilizer theory [34], using an auxiliary block to swap information into and out of. This transversal operation involves applying encoded controlled-NOT operations, which we treat in Section VI below. In that Section we also show how coupling multiple encoded qubits can be achieved directly, without resorting to an auxiliary block. ### General Stabilizer Errors The entire analysis for the CSS case carries through in the general stabilizer case for the implementation of \\(\\exp(i\\theta\\overline{Z})\\), since \\(\\overline{Z}\\) remains unchanged [recall Eq. (5.1)]. However, the encoded \\(X\\) operation now includes the additional block \\(N_{Z}\\): \\(\\overline{X}=X_{1}\\otimes N_{Z}\\otimes M_{X}\\) [Eq. (5.2)]. Therefore to generate this operation we must include a new set of Hamiltonians: \\[C_{n^{\\prime}}=Z_{1}Z_{n^{\\prime}}\\qquad n^{\\prime}\\in{\\cal Z}^{\\prime}. \\tag{5.25}\\] If there is an odd number of \\(Z\\)'s in \\(N_{Z}\\) then the last Hamiltonian should be taken as \\(C_{|{\\cal Z}^{\\prime}|}=Z_{1}\\). We now need to repeat the analysis for the generation of \\(\\exp(i\\theta\\overline{X})\\). Again, there is a series and a parallel construction. Since the \\(C_{n^{\\prime}}\\) and \\(B_{i}\\) all commute, the gate \\[U_{BC}\\equiv U_{B}U_{C}=\\left[\\prod_{i\\in{\\cal X}}\\exp\\left(i\\frac{\\pi}{4}B_{i}\\right)\\right]\\left[\\prod_{n^{\\prime}\\in{\\cal Z}^{\\prime}}\\exp\\left(i\\frac{\\pi}{4}C_{n^{\\prime}}\\right)\\right]=\\exp\\left[i\\frac{\\pi}{4}\\left(\\sum_{i\\in{\\cal X}}B_{i}+\\sum_{n^{\\prime}\\in{\\cal Z}^{\\prime}}C_{n^{\\prime}}\\right)\\right] \\tag{5.26}\\] can be implemented in parallel. Conjugation of \\(\\exp(i\\theta X_{1})\\) by \\(U_{BC}\\) will yield \\(\\exp(i\\theta\\overline{X})\\) by Eq. (5.14), since \\(\\{X_{1},B_{i}\\}=\\{X_{1},C_{n^{\\prime}}\\}=0\\). It is further straightforward to check that this is a fault-tolerant implementation, since the arguments used in the case of a single encoded CSS qubit are still valid here. We are thus left to check only the series construction. Here the only new element is that we must make sure that the application of the \\(C_{n^{\\prime}}\\) Hamiltonians does not allow for undetectable errors to take place. Apart from this everything is the same as in the CSS case. Now, after application of the first \\(k\\) gates \\(\\{\\exp(i\\frac{\\pi}{4}C_{n^{\\prime}})\\}_{n^{\\prime}=1}^{k}\\), \\(\\overline{X}\\) is transformed to \\(\\overline{X}^{(k)}\\equiv\\prod_{n^{\\prime}=1}^{k}C_{n^{\\prime}}\\overline{X}\\). This product is: \\[\\prod_{n^{\\prime}=1}^{k}C_{n^{\\prime}}=\\left\\{\\begin{array}{cc}\\prod_{n^{\\prime}\\in{\\cal Z}_{k}^{{}^{\\prime}}}Z_{n^{\\prime}}&\\mbox{if}&k=2l\\\\ Z_{1}\\prod_{n^{\\prime}\\in{\\cal Z}_{k}^{{}^{\\prime}}}Z_{n^{\\prime}}&\\mbox{if}&k=2l+1\\end{array}\\right., \\tag{5.27}\\] where \\({\\cal Z}_{k}^{{}^{\\prime}}\\) are the first \\(k\\) elements of the index set \\({\\cal Z}^{\\prime}\\).
Therefore \\[\\{\\overline{X}^{(k)},\\overline{Z}\\}=\\left[(Z_{1})^{k}\\prod_{n^{\\prime}\\in{\\cal Z}_{k}^{{}^{\\prime}}}Z_{n^{\\prime}}\\overline{X}\\right]\\overline{Z}+\\overline{Z}\\left[(Z_{1})^{k}\\prod_{n^{\\prime}\\in{\\cal Z}_{k}^{{}^{\\prime}}}Z_{n^{\\prime}}\\overline{X}\\right]=\\left[(Z_{1})^{k}\\prod_{n^{\\prime}\\in{\\cal Z}_{k}^{{}^{\\prime}}}Z_{n^{\\prime}}\\right]\\{\\overline{X},\\overline{Z}\\}=0. \\tag{5.28}\\] Thus Theorem 2 is satisfied after each \\(C_{n^{\\prime}}\\)-gate application, with \\(\\overline{Z}\\) playing the role of the anticommuting original-normalizer element. This means that use of the Hamiltonians \\(C_{n^{\\prime}}\\) does not spoil the fault tolerance of the circuit. We know from the calculations in the single encoded qubit case that the rest of the circuit is also fault tolerant. Hence we can conclude at this point that our method of constructing normalizer elements is fault tolerant for any stabilizer code. ### Summary Let us recapitulate the main result of this section. Given a set of errors corresponding to some Abelian subgroup of the Pauli group (i.e., a stabilizer), there is a DFS which is immune against these errors. We have shown how to implement arbitrary encoded \\(SU(2)\\) operations on this class of DFSs. To do so, we gave an explicit construction of encoded \\(\\sigma_{x}\\) and \\(\\sigma_{z}\\) operations, which together generate encoded \\(SU(2)\\)'s for each DFS-qubit. The construction involves turning on and off a series of one- and two-body Hamiltonians for specific durations. Each such operation takes the encoded states outside of the DFS. However, our construction guarantees that the errors always remain correctable by the code formed by the transformed states. That is, these states form a QECC with respect to the Pauli subgroup errors. Therefore, our construction works by supplementing the unitary gates executing the encoded \\(\\sigma_{x}\\) and \\(\\sigma_{z}\\) operations by appropriate error correction procedures. To complete the construction, we still need to show how to execute encoded two-body gates, and how to fault-tolerantly measure the error syndrome. This is the subject of the next two sections. ## VI Encoded Controlled-NOT The unitary controlled-NOT (CNOT) operation from the first qubit ("control qubit") to the second qubit ("target qubit") can be written in the basis of \\(\\sigma_{z}\\) eigenstates as: \\[U_{\\rm{CNOT}}=\\left(\\begin{array}{cc}I&0\\\\ 0&X\\end{array}\\right) \\tag{6.1}\\] (where each entry is a \\(2\\times 2\\) matrix). Since we are working in the Heisenberg picture it is useful to consider how two-qubit operators transform under CNOT. For example, \\[X\\otimes I\\longmapsto U_{\\rm{CNOT}}\\left(X\\otimes I\\right)U^{\\dagger}_{\\rm{CNOT}}=\\left(\\begin{array}{cc}I&0\\\\ 0&X\\end{array}\\right)\\left(\\begin{array}{cc}0&I\\\\ I&0\\end{array}\\right)\\left(\\begin{array}{cc}I&0\\\\ 0&X\\end{array}\\right)=\\left(\\begin{array}{cc}0&X\\\\ X&0\\end{array}\\right). \\tag{6.2}\\] As is simple to verify, the full transformation table is: \\[X\\otimes I\\longmapsto X\\otimes X\\] \\[I\\otimes X\\longmapsto I\\otimes X\\] \\[Z\\otimes I\\longmapsto Z\\otimes I\\] \\[I\\otimes Z\\longmapsto Z\\otimes Z.
\\tag{12}\\] Since \\(U(A\\otimes B)U^{\\dagger}=U(A\\otimes I)U^{\\dagger}U(I\\otimes B)U^{\\dagger}\\), the rest of the transformations under CNOT follow simply by taking appropriate products of the above, e.g., \\(X\\otimes Z=\\left(X\\otimes I\\right)\\left(I\\otimes Z\\right)\\longmapsto\\left( X\\otimes X\\right)\\left(Z\\otimes Z\\right)=-Y\\otimes Y\\). We need to show how to fault-tolerantly construct an encoded CNOT operation for the DFS corresponding to a given Pauli subgroup of errors.

### CSS-Stabilizer Errors

It is well known that a bitwise CNOT gate between physical qubits in different blocks is an operation that preserves any CSS code, and acts as the encoded CNOT gate between the blocks encoding different qubits [34]. However, this is true only at the _conclusion_ of the operation, i.e., after all the bitwise operations have been applied. During the execution of the bitwise operations the codewords are exposed to errors. To demonstrate this, consider the transformation of the normalizer elements of a CSS code. Let \\({\\rm{CNOT}}_{j_{A},j_{B}}\\) denote the CNOT operation from control qubit \\(j\\) (in the first block \\(A\\)) to target qubit \\(j\\) (in the second block \\(B\\)). For definiteness let us consider the transformation of \\(\\overline{X}_{j}\\otimes I^{\\otimes K}\\) under bitwise CNOT's. Then, because of the standard form for \\(\\overline{X}_{j}\\), the first CNOT operation is applied from control qubit \\(j\\), and subsequent CNOT's from control qubits determined by the index set \\({\\cal X}\\), i.e., acting on pairs of physical qubits at positions \\(\\{(i_{A},i_{B})\\}_{i\\in{\\cal X}}\\). Using Eqs. (10),(11) for \\(\\overline{Z}_{j}\\) and \\(\\overline{X}_{j}\\), and the transformation table of Eq. (12), we find: \\[\\overline{X}_{j}\\otimes I^{\\otimes K} = [X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}]\\otimes[I^{\\otimes l}\\otimes I^{\\otimes r}\\otimes I^{\\otimes K-l-r}] \\tag{13}\\] \\[\\stackrel{{{\\rm{CNOT}}_{j_{A},j_{B}}}}{{\\longmapsto}} [X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}]\\otimes[X_{j}\\otimes I^{\\otimes r}\\otimes I^{\\otimes K-l-r}]\\] \\[\\stackrel{{{\\rm{CNOT}}_{i_{A},i_{B}}}}{{\\longmapsto}} [X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}]\\otimes[X_{j}\\otimes I^{\\otimes r}\\otimes X_{i_{1}}]\\] \\[\\longmapsto\\ \\cdots\\ \\longmapsto[X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}]\\otimes[X_{j}\\otimes I^{\\otimes r}\\otimes M_{X}]=\\overline{X}_{j}\\otimes\\overline{X}_{j}.\\] Similarly one can check that the rest of the transformations of Eq. (12) are satisfied at the encoded level. Therefore this calculation demonstrates that the full bitwise CNOT gate indeed acts as an _encoded_ CNOT operation, since it transforms encoded normalizer operations according to the transformation rules of CNOT, as per Eq. (12). In our context this implies that given a certain Pauli subgroup of errors, application of the full bitwise CNOT gate will implement the \\(\\overline{\\rm{CNOT}}\\) gate on the DFS in a way which keeps the codewords inside the DFS at the end of the operation. However, as in the \\(\\overline{SU(2)}\\) case, this is not true at intermediate steps, meaning that the code leaves the DFS.9 As in the \\(\\overline{SU(2)}\\) case, we must check that the original errors are still correctable at each intermediate step. Theorem 2 will still apply if error correction procedures are implemented on each block separately, after each bitwise CNOT operation (since the blocks are only coupled during the execution of the CNOT).
Therefore we need to check that for each block in which the normalizer changed, there exists an element in the original normalizer that anticommutes with the transformed normalizer. It is easy to see from Eq. (10) that \\(\\overline{X}_{j}\\) does not change in the first block, and the sequence of transformed \\(\\overline{X}_{j}\\)'s in the second block anticommutes with \\(\\overline{Z}_{j}\\) at every step. Therefore error correction is possible at each intermediate step. To complete the construction it is necessary to check that the remaining normalizer elements are appropriately transformed. Repeating the calculation of Eq. (10) it is straightforward to check that this is true, namely: \\[I^{\\otimes K}\\otimes\\overline{X}_{j} \\longmapsto I^{\\otimes K}\\otimes\\overline{X}_{j}\\] \\[\\overline{Z}_{j}\\otimes I^{\\otimes K} \\longmapsto \\overline{Z}_{j}\\otimes I^{\\otimes K}\\] \\[I^{\\otimes K}\\otimes\\overline{Z}_{j} \\longmapsto \\overline{Z}_{j}\\otimes\\overline{Z}_{j}, \\tag{11}\\] with \\(I^{\\otimes K}\\otimes\\overline{X}_{j}\\) and \\(\\overline{Z}_{j}\\otimes I^{\\otimes K}\\) invariant under the bitwise CNOT's (thus requiring no error correction), and the transformed \\(I^{\\otimes K}\\otimes\\overline{Z}_{j}\\) anticommuting at each step with the original \\(\\overline{X}_{j}\\). This completes our demonstration that a \\(\\overline{\\rm CNOT}\\) gate can be implemented fault-tolerantly using bitwise CNOT's in the CSS case.

### General Stabilizer Errors

In the non-CSS case the bitwise CNOT does not act as a \\(\\overline{\\rm CNOT}\\). One quick way to realize this is to note that since \\(X\\otimes I\\longmapsto X\\otimes X\\), by unitarity \\(X\\otimes X\\longmapsto X\\otimes I\\), but this is not the case at the encoded level: \\[\\overline{X}\\otimes\\overline{X} = [X_{1}\\otimes N_{Z}\\otimes M_{X}]\\otimes[X_{K+1}\\otimes N_{Z}\\otimes M_{X}]\\] \\[\\longmapsto [X_{1}\\otimes I^{\\otimes r}\\otimes M_{X}]\\otimes[I_{K+1}\\otimes N_{Z}\\otimes I^{\\otimes K-l-r}]\\neq\\overline{X}\\otimes I^{\\otimes K}.\\] Thus a different implementation of the \\(\\overline{\\rm CNOT}\\) is needed. Now, it is clear that if the product of stabilizers for different blocks (each encoding one qubit or more) is mapped to itself at the end of the \\(\\overline{\\rm CNOT}\\) implementation, then the stabilizer errors will not have changed, the DFS qubits will not have changed, and thus the DFS-code still offers protection against the stabilizer errors. Gottesman [34] has given such an implementation of the \\(\\overline{\\rm CNOT}\\) for arbitrary stabilizer codes. It uses transformations involving 4 blocks at a time, where two blocks serve as ancillas and are discarded after a measurement at the end of the implementation. We will not repeat this analysis here; the interested reader is referred to p. 133 of [34] for details. The faster the gate sequence implementing this \\(\\overline{\\rm CNOT}\\) is executed compared to the timescale for the errors to appear, the higher the probability that the code will not be taken outside of the DFS. However, as shown in Appendix A, the gate sequence (Fig. 2 of [34]) does not have the property we have been able to demonstrate above for all our constructions, i.e., it allows for errors to become part of the transformed normalizer. Therefore we cannot use this construction. Instead we now introduce a different construction for the \\(\\overline{\\rm CNOT}\\), in the spirit of what we have done above for the \\(\\overline{SU(2)}\\) operations.
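As a quick sanity check, the two-qubit transformation rules of Eq. (12), and the unitarity consequence \\(X\\otimes X\\longmapsto X\\otimes I\\) invoked above, can be verified directly with a few lines of linear algebra. The following is a minimal NumPy sketch of this check:

```python
import numpy as np

# Pauli matrices and the CNOT gate (control = first qubit, target = second)
I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])
CNOT = np.array([[1., 0., 0., 0.],
                 [0., 1., 0., 0.],
                 [0., 0., 0., 1.],
                 [0., 0., 1., 0.]])

def conj(U, A):
    # Heisenberg-picture transformation A -> U A U^dagger
    return U @ A @ U.conj().T

kron = np.kron
# The four rules of Eq. (12):
assert np.allclose(conj(CNOT, kron(X, I)), kron(X, X))
assert np.allclose(conj(CNOT, kron(I, X)), kron(I, X))
assert np.allclose(conj(CNOT, kron(Z, I)), kron(Z, I))
assert np.allclose(conj(CNOT, kron(I, Z)), kron(Z, Z))
# The unitarity consequence used above:
assert np.allclose(conj(CNOT, kron(X, X)), kron(X, I))
print('all CNOT conjugation rules verified')
```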
Consider two blocks \\(A\\) and \\(B\\) encoding one DFS qubit each. We already know how to implement \\(\\exp(i\\theta I_{A}\\otimes\\overline{X}_{B})\\). Suppose one can also implement \\(\\exp(i\\theta\\overline{Z}_{A}\\otimes\\overline{X}_{B})\\). Then by use of the Trotter formula \\(\\exp[i(t_{1}O_{1}+t_{2}O_{2})]=\\lim_{n\\to\\infty}\\left[\\exp\\left(i\\frac{t_{1}}{n}O_{1}\\right)\\exp\\left(i\\frac{t_{2}}{n}O_{2}\\right)\\right]^{n}\\) [48], or its short-time approximation \\[\\exp[it(O_{1}+O_{2})/n]=\\exp[itO_{1}/n]\\exp[itO_{2}/n]+O(n^{-2}) \\tag{12}\\] valid for arbitrary operators \\(O_{1}\\) and \\(O_{2}\\), we can form, to any desired accuracy, \\[\\exp[i\\theta(I_{A}\\otimes\\overline{X}_{B}-\\overline{Z}_{A}\\otimes\\overline{X}_{B})/2]=\\left(\\begin{array}{cc}I&0\\\\ 0&\\exp(i\\theta\\overline{X}_{B})\\end{array}\\right). \\tag{13}\\] For \\(\\theta=\\pi/2\\) this is the \\(\\overline{\\rm CNOT}\\) operation between the two blocks. Thus our problem reduces to showing how \\(\\exp(i\\theta\\overline{Z}_{A}\\otimes\\overline{X}_{B})\\) can be implemented fault-tolerantly for arbitrary stabilizer DFSs. Consider the circuit shown in Fig. 3. It describes the implementation of \\(\\overline{Z}\\) and \\(\\overline{X}\\) operations, as in the \\(\\overline{SU(2)}\\) case, with the difference that the single-body central gates have been replaced with a two-body gate, generated by the Hamiltonian \\(H_{AB}=Z_{1}^{A}\\otimes X_{1}^{B}\\) (here \\(A\\) and \\(B\\) are the two blocks and the subscript 1 indicates the first physical qubit in each block). By the \\(\\overline{SU(2)}\\) construction we have that \\(U_{A}Z_{1}^{A}U_{A}^{\\dagger}=\\overline{Z}_{A}\\) and \\(U_{B}X_{1}^{B}U_{B}^{\\dagger}=\\overline{X}_{B}\\) (recall Sec. V.2.2). Therefore, using the fact that for any non-singular matrix \\(M\\) the equality \\(M\\exp(H)M^{-1}=\\exp(MHM^{-1})\\) holds, the gates in Fig. 3 yield: \\[(U_{A}\\otimes U_{B})\\exp(i\\theta H_{AB})\\left(U_{A}^{\\dagger}\\otimes U_{B}^{\\dagger}\\right) = \\exp\\left[i\\theta\\left(U_{A}\\otimes U_{B}\\right)H_{AB}\\left(U_{A}^{\\dagger}\\otimes U_{B}^{\\dagger}\\right)\\right] \\tag{101}\\] \\[= \\exp\\left[i\\theta\\left(U_{A}Z_{1}^{A}U_{A}^{\\dagger}\\right)\\otimes\\left(U_{B}X_{1}^{B}U_{B}^{\\dagger}\\right)\\right]\\] \\[= \\exp\\left(i\\theta\\overline{Z}_{A}\\otimes\\overline{X}_{B}\\right),\\] as desired. It remains to verify that this is a fault-tolerant construction. The only difference compared to the \\(\\overline{SU(2)}\\) construction above is the fact that we are now using a _two_-body central Hamiltonian. It is reasonable to assume that if the system can couple the two blocks connected by this Hamiltonian, then so can the environment. Therefore instead of considering the error subgroups \\(Q_{A}\\) and \\(Q_{B}\\) separately, we must now consider the new error subgroup \\(Q_{A}\\times Q_{B}\\). But then the appropriate normalizer is \\(N_{AB}=N_{A}\\times N_{B}\\), and the sequence of transformed normalizers satisfies \\(N_{AB,j}=N_{A,j}\\times N_{B,j}\\). This makes the fault-tolerance verification task very simple: We already checked in our \\(\\overline{SU(2)}\\) discussion that Theorem 2 is satisfied for each block separately. Now, clearly both \\(N_{A}\\otimes I_{B},I_{A}\\otimes N_{B}\\in N_{AB}\\).
Therefore, since for every transformed normalizer element in \\(N_{A,j}\\)\\([N_{B,j}]\\) there is an anticommuting element in the original normalizer \\(N_{A}\\)\\([N_{B}]\\), it follows that \\(N_{A}\\otimes I_{B}\\)\\([I_{A}\\otimes N_{B}]\\) will correspondingly anticommute with the elements of \\(N_{AB,j}\\). This means that Theorem 2 is satisfied also for the combination of blocks \\(A\\) and \\(B\\), and fault-tolerance is guaranteed as in the \\(\\overline{SU(2)}\\) case. As promised in Section V.4, the construction presented here also applies to multiple qubits encoded into a single block. To see this, consider the case of two encoded qubits in the same block, and let us show that we can generate \\(\\exp(i\\theta\\overline{Z}_{1}\\otimes\\overline{Z}_{2})\\) between them. This coupling, together with single encoded-qubit operations, suffices to generate \\(\\overline{SU(2^{l})}\\) (for \\(l\\) encoded qubits in a block). Now, from the standard form we have: \\[\\overline{Z}_{1} = Z_{1}\\otimes M_{Z}^{1}\\otimes I^{\\otimes K-l-r} \\tag{102}\\] \\[\\overline{Z}_{2} = Z_{2}\\otimes M_{Z}^{2}\\otimes I^{\\otimes K-l-r}. \\tag{103}\\] Let \\(\\overline{Z}_{1}=U_{1}Z_{1}U_{1}^{\\dagger}\\) and \\(\\overline{Z}_{2}=U_{2}Z_{2}U_{2}^{\\dagger}\\). Note that \\([U_{1},Z_{2}]=[U_{2},Z_{1}]=0\\) since \\(U_{1(2)}\\) contains no \\(X_{2(1)}\\). For the same reason also \\([U_{1},U_{2}]=0\\). Therefore: \\[(U_{1}\\otimes U_{2})\\exp(i\\theta Z_{1}\\otimes Z_{2})\\left(U_{1}^{\\dagger}\\otimes U_{2}^{\\dagger}\\right) = \\exp\\left[i\\theta\\left(U_{1}\\otimes U_{2}\\right)Z_{1}\\otimes Z_{2}\\left(U_{1}^{\\dagger}\\otimes U_{2}^{\\dagger}\\right)\\right] \\tag{104}\\] \\[= \\exp\\left[i\\theta\\left(U_{1}Z_{1}U_{1}^{\\dagger}\\right)\\otimes\\left(U_{2}Z_{2}U_{2}^{\\dagger}\\right)\\right]\\] \\[= \\exp\\left(i\\theta\\overline{Z}_{1}\\otimes\\overline{Z}_{2}\\right).\\] The same idea can be used to implement \\(\\overline{\\rm CNOT}\\) between multiple qubits encoded into a single block. We have thus provided a fault-tolerant implementation of \\(\\overline{\\rm CNOT}\\) for any stabilizer DFS.

## VII Fault tolerant measurement of the error syndrome

So far we have taken for granted that error detection and correction is possible in between gate applications. We now complete our discussion by showing that it is indeed possible to do so fault-tolerantly. This requires the ability to measure the sequence of transformed stabilizer generators in a manner that does not introduce new errors in a catastrophic way. To accomplish this fault-tolerant measurement we follow, with some modifications, the usual stabilizer construction [46]. Let us recall the basics of measurement within stabilizer theory. A DFS state \\(|\\psi\\rangle\\) with stabilizer \\(Q\\) is a \\(+1\\) eigenstate of all elements of \\(Q\\). An error \\(e\\) is an operator that anticommutes with at least one element of the stabilizer \\(Q\\), say \\(q\\). If \\(|\\psi\\rangle\\) is such a DFS state then \\(qe|\\psi\\rangle=-eq|\\psi\\rangle=-e|\\psi\\rangle\\), so that \\(e|\\psi\\rangle\\) is an eigenstate of \\(q\\) with eigenvalue \\(-1\\). Therefore each generator measurement that returns the eigenvalue \\(+1\\) indicates that no error has occurred, while each \\(-1\\) result indicates an error, which can be fixed by applying the error \\(e\\) to the state. The sequence of \\(\\pm 1\\)'s that results from measuring all stabilizer generators is called the \"error syndrome\".
The identity of \\(e\\) is uniquely determined by this \"syndrome\", since the measurement process projects any linear combination of errors to an error in the Pauli group.

### CSS-Stabilizer Errors

In this case the stabilizer generators contain either products only of \\(Z\\)'s (\"\\(Z\\)-type\") or products only of \\(X\\)'s (\"\\(X\\)-type\"). Suppose we wish to measure a \\(Z\\)-type stabilizer generator. The \\(+1\\) eigenstates of such a generator are the \"even parity states\", i.e., those states containing an even number of \\(|1\\rangle\\)'s. Prepare an ancilla in the encoded \\(|0_{L}\\rangle\\) state (below we discuss how). Then for each data qubit where the given stabilizer generator has a \\(Z\\) (not an \\(I\\)) apply a controlled-\\(\\overline{X}\\) from this qubit to the ancilla. The ancilla will flip every time the data qubit was a \\(|1\\rangle\\), so measuring the ancilla at the end and finding it in \\(|0_{L}\\rangle\\) will indicate no error (even number of flips), whereas \\(|1_{L}\\rangle\\) will indicate an error (odd number of flips). Distinguishing between \\(|0_{L}\\rangle\\) and \\(|1_{L}\\rangle\\) amounts to measuring \\(\\overline{Z}\\) on the ancilla, which we can do directly by measuring \\(Z\\) on all those ancilla qubits whose \\(\\overline{Z}\\) has a \\(Z\\). Now suppose we wish to measure an \\(X\\)-type stabilizer element. The same procedure as for \\(Z\\)-type generators can be applied, with one modification: a Hadamard transform \\[R=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc}1&1\\\\ 1&-1\\end{array}\\right) \\tag{12}\\] must be applied before and after the controlled-\\(\\overline{X}\\) operation. The effect of the Hadamard transform before the controlled-\\(\\overline{X}\\) operation is to change the corresponding qubit into the \\(Z\\)-eigenbasis, whence the \\(Z\\)-type construction applies. The second Hadamard transform returns the qubit to the original basis. This construction is shown schematically in Fig. 4. Note that since \\(\\overline{X}\\) is in the normalizer, it commutes with all stabilizer errors. This means that any such error occurring on the ancilla before the \\(\\overline{X}\\) is equivalent to the same error after the \\(\\overline{X}\\), and therefore the error has no effect. In other words, neither does the ancilla ever leave the DFS under the application of \\(\\overline{X}\\), nor can an error on the ancilla propagate back to the data qubits.10 Note further that since the ancilla is at all times unentangled from the data qubits the measurement is non-destructive on the data qubits.

Footnote 10: This DFS-construction is different from that in the usual QECC-stabilizer construction, where multiple control operations to the same ancilla-qubit are not fault-tolerant because they are not transversal. There multiple CNOT's from different data qubits to the same ancilla qubit can cause errors to spread catastrophically if the ancilla qubit undergoes a phase error (recall that under CNOT, \\(I\\otimes Z\\mapsto Z\\otimes Z\\)).

What if a stabilizer error occurs on the data qubits right after the application of the Hadamard gate? This can clearly present a problem, since it may for example flip the data qubit controlling the \\(\\overline{X}\\) applied to the ancilla. One (standard) way of dealing with such errors is to repeat the measurement several times in order to improve our confidence in the result. An alternative is to use concatenated codes [35, 49, 50, 51].
This will be of use if the stabilizer error is correctable by the transformed code, i.e., if we can verify that the conditions of Theorem 2 are satisfied. Then we can use the DFS at the lowest level, and concatenate it with the QECC it transforms into under the stabilizer errors (see Ref. [23] for concatenated DFS-QECC in the collective decoherence model). Now, recall the CSS form of the normalizer elements, Eq. (10). For every Hadamard transform in the first set (i.e., before the controlled-\\(\\overline{X}\\) operations) on a qubit in a position corresponding to an \\(X\\) in an \\(X\\)-type stabilizer generator, the normalizer elements transform by having \\(X\\) and \\(Z\\) interchange in this position. In the standard form of Eq. (10), if this happens to be the first qubit then \\(\\overline{Z}\\longmapsto X\\otimes M_{Z}\\otimes I\\), which anticommutes with the original \\(\\overline{Z}\\), and \\(\\overline{X}\\longmapsto Z\\otimes I\\otimes M_{X}\\), which in turn anticommutes with the original \\(\\overline{X}\\). If the position of the \\(X\\) in the \\(X\\)-type stabilizer generator is where \\(M_{Z}\\) has a \\(Z\\), then \\(\\overline{Z}\\longmapsto Z\\otimes M_{Z}^{\\prime}\\otimes I\\), where \\(M_{Z}^{\\prime}\\) has that \\(Z\\) changed into an \\(X\\). This transformed \\(\\overline{Z}\\) anticommutes with the original \\(\\overline{X}\\). Similarly, \\(\\overline{X}\\longmapsto X\\otimes I\\otimes M_{X}^{\\prime}\\) with an \\(X\\) changed into a \\(Z\\), and this transformed \\(\\overline{X}\\) anticommutes with the original \\(\\overline{Z}\\). Thus the conditions of Theorem 2 are again satisfied. The second set of Hadamard transforms restores the original normalizer. One then proceeds to measure the next stabilizer generator. We thus see that this measurement procedure is fault-tolerant of stabilizer errors.

### General Stabilizer Errors

In the non-CSS case the stabilizer generators may contain \\(Y\\)'s as well, so our analysis above requires some modifications. The unitary operation that transforms \\(Y\\) to \\(Z\\) is \\[Q=\\frac{1}{\\sqrt{2}}\\left(\\begin{array}{cc}1&-i\\\\ 1&i\\end{array}\\right). \\tag{13}\\] It also maps \\(Z\\mapsto X\\mapsto Y\\). When this operation is applied immediately before the controlled-\\(\\overline{X}\\) to the ancilla and immediately after it for every \\(Y\\) in the stabilizer, the \\(Z\\)-type construction applies again. However, for the purpose of concatenation we need to check that the procedure is still fault-tolerant of stabilizer errors. The normalizer generators now have the form of Eqs. (11),(12). Every time a Hadamard or \\(Q\\) operation is applied, \\(Z\\mapsto X\\) in a single position in \\(\\overline{Z}\\). Similarly, \\(Z\\mapsto X\\), or \\(X\\mapsto Z\\) (if Hadamard) or \\(Y\\) (if \\(Q\\)) in a single position in \\(\\overline{X}\\). The case of the transformed \\(\\overline{Z}\\) is trivial: if \\(Z\\mapsto X\\) anywhere then the transformed \\(\\overline{Z}\\) anticommutes with the original \\(\\overline{Z}\\). Consider the transformed \\(\\overline{X}\\). The possibilities are: (i) \\(X_{1}\\mapsto Z_{1}\\) or \\(Y_{1}\\), (ii) \\(Z\\mapsto X\\) in the \\(N_{Z}\\) part, (iii) \\(X\\mapsto Z\\) or \\(Y\\) in the \\(M_{X}\\) part. In all these cases it is easily verified that the transformed \\(\\overline{X}\\) anticommutes with the original \\(\\overline{X}\\). Therefore the measurement procedure is fault-tolerant also in the non-CSS case.
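The sign conventions behind this syndrome-measurement discussion can be illustrated numerically. The following minimal NumPy sketch checks the action of the basis-change gates \\(R\\) and \\(Q\\) [Eqs. (12),(13)] and the \\(\\pm 1\\) syndrome signs, using a two-qubit Bell state as the simplest stand-in for a stabilizer codeword (an illustrative toy example, not one of the DFS codes considered here):

```python
import numpy as np

I = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.diag([1., -1.])
R = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)    # Hadamard, Eq. (12)
Q = np.array([[1., -1j], [1., 1j]]) / np.sqrt(2)    # Eq. (13)

conj = lambda U, A: U @ A @ U.conj().T
# Basis changes used in the measurement circuits:
assert np.allclose(conj(R, X), Z)                   # X <-> Z
assert np.allclose(conj(Q, Y), Z)                   # Y -> Z
assert np.allclose(conj(Q, Z), X) and np.allclose(conj(Q, X), Y)

# Syndrome signs: a Bell state is stabilized by ZZ and XX; an error
# anticommuting with a generator flips the measured eigenvalue.
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)
ZZ, XX = np.kron(Z, Z), np.kron(X, X)
err = np.kron(X, I)                                 # bit flip on qubit 1
assert np.isclose(psi @ ZZ @ psi, 1.0)
assert np.isclose((err @ psi) @ ZZ @ (err @ psi), -1.0)   # syndrome -1
assert np.isclose((err @ psi) @ XX @ (err @ psi), 1.0)    # XX unaffected
print('basis changes and syndrome signs verified')
```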
## VIII Outlook: Implications for the Independent-Errors Model

The methods we have introduced in this paper need not be restricted to stabilizer-errors. In this section we briefly touch upon the implications of our construction for universal quantum computation in the independent errors model, when stabilizer-errors are taken into account as well. We thus generalize the standard treatment of stabilizer codes [34], where stabilizer errors that may occur during the course of gate implementation are ignored. However, we are here only able to consider independent single-qubit errors, so that the inclusion of the special type of correlated many-body errors represented by the stabilizer-errors makes for a rather unrealistic error model. The main importance of the result presented here is that it suggests an alternative route to universal quantum computation that is fault tolerant with respect to error _detection_, and is highly parallelizable. We believe that this may lead to an improved threshold for fault tolerant computation in the setting of concatenated codes [51]. Let us recall the error detection and correction criteria for a stabilizer code \\(Q=\\{q_{k}\\}\\) to be able to deal with all single qubit errors: \\[\\forall i,j,\\alpha,\\beta\\ \\exists k\\ {\\rm s.t.}\\ \\{q_{k},\\sigma_{i}^{\\alpha}\\otimes\\sigma_{j}^{\\beta}\\}=0 \\tag{14}\\] Can we implement encoded \\(SU(2)\\) operations in the independent errors model similarly to what we did above for stabilizer-errors? To do so we need to make sure that the errors do not become part of the sequence of transformed normalizers. The important difference compared to the stabilizer-errors case is that now the errors are \"small\" (single-body), which means that we must avoid using a single-qubit Hamiltonian as a central gate (for it is a normalizer element which will not be distinguishable from an error). If we restrict ourselves to using two-body Hamiltonians as central gates (which we can always do; recall the comment at the end of Section IV.4), then we run into a similar problem regarding the two-body form of Eq. (14), i.e., if the central gate uses the Hamiltonian \\(\\sigma_{i}^{\\alpha}\\otimes\\sigma_{j}^{\\beta}\\) then we will not be able to correct the two errors \\(\\sigma_{i}^{\\alpha}\\) and \\(\\sigma_{j}^{\\beta}\\). However, as we now show, as long as we use a two-body central gate it is nearly always possible to satisfy the error _detection_ criterion, \\(\\forall i,\\alpha\\ \\exists k\\ {\\rm s.t.}\\ \\{q_{k},\\sigma_{i}^{\\alpha}\\}=0\\). Let us demonstrate this explicitly for Steane's 7-qubit code [12]. This is a CSS code encoding one qubit into seven, and in standard form has the normalizer: \\[\\overline{X} = X_{1}X_{5}X_{6}\\] \\[\\overline{Z} = Z_{1}Z_{3}Z_{4}. \\tag{15}\\] Consider the gate construction [derived from Eq. (10)] \\[\\exp(i\\theta\\overline{Z})=T_{X_{1}Z_{3}}\\circ\\exp(i\\theta Y_{1}Z_{4}).
\\tag{16}\\] The normalizer transforms as: \\[\\overline{X}\\mathop{\\longrightarrow}\\limits^{X_{1}Z_{3}}_{\\rm T} \\overline{X}\\mathop{\\longrightarrow}\\limits^{Y_{1}Z_{4}}_{\\rm T}\\cos(2\\theta) \\overline{X}+i\\sin(2\\theta)Z_{1}Z_{4}X_{5}X_{6}\\mathop{\\longrightarrow} \\limits^{X_{1}Z_{3}}_{\\rm T}\\cos(2\\theta)\\overline{X}+\\sin(2\\theta)\\overline{ Y}=\\overline{X}\\exp(2i\\theta\\overline{Z})\\] \\[\\overline{Z}\\mathop{\\longrightarrow}\\limits^{X_{1}Z_{3}}_{\\rm T }Y_{1}Z_{4}\\mathop{\\longrightarrow}\\limits^{Y_{1}Z_{4}}_{\\rm T}Y_{1}Z_{4} \\mathop{\\longrightarrow}\\limits^{X_{1}Z_{3}}_{\\rm T}\\overline{Z}. \\tag{17}\\] We see that at no point does a single-qubit error become part of the transformed normalizer, so that all single qubit errors are detectable. On the other hand, while we can always detect the occurrence of both the \\(Y_{1}\\) and \\(Z_{4}\\) errors, we cannot distinguish between them after the first gate has been applied (since our normalizer is \\(Y_{1}Z_{4}\\) at that point). Since we might accidentally try to reverse the error \\(Y_{1}\\) when in fact the error \\(Z_{4}\\) has taken place, this means that our construction is fault tolerant only for error detection. Similarly, the gate construction \\[\\exp(i\\theta\\overline{X})=T_{Z_{1}X_{5}}\\circ\\exp(i\\theta Y_{1}X_{6}) \\tag{18}\\]yields \\[\\overline{X}\\stackrel{{ Z_{1}X_{5}}}{{\\longmapsto}}Y_{1}X_{6} \\stackrel{{ Y_{1}X_{6}}}{{\\longmapsto}}Y_{1}X_{6}\\stackrel{{ Z_{1}X_{5}}}{{\\longmapsto}}\\overline{X}\\] \\[\\overline{Z}\\stackrel{{ Z_{1}X_{5}}}{{\\longmapsto}} \\overline{Z}\\stackrel{{ Y_{1}X_{6}}}{{\\longmapsto}}\\cos(2\\theta) \\overline{Z}+i\\sin(2\\theta)X_{1}Z_{3}Z_{4}X_{6}\\stackrel{{ Z_{1}X_{5}}}{{ \\longmapsto}}\\cos(2\\theta)\\overline{Z}+\\sin(2\\theta)\\overline{Y}=\\overline{Z} \\exp(-2i\\theta\\overline{X}). \\tag{101}\\] which also satisfies the error detection (but not correction) condition for single-qubit errors, in that no single-qubit error becomes part of the transformed stabilizer. Let us now consider the general stabilizer case. Recall once more the standard form of the normalizer, Eqs. (100),(101). Our gate construction acts by transforming one of the normalizer elements to two-body form, where it is applied as the central \\(\\theta\\)-gate, and then is transformed back to its standard form. All other normalizer elements are left unchanged until the application of the central gate, with which they anticommute. At this point each \\(\\overline{Z}\\) [\\(\\overline{X}\\)] is multiplied by \\(\\exp(-2i\\theta\\overline{X})\\) [\\(\\exp(2i\\theta\\overline{Z})\\)]. The final sequence of gates flips these normalizer elements back and forth between \\(\\exp(-2i\\theta\\overline{X})\\) and \\(\\exp(-2i\\theta\\overline{Y})\\) [\\(\\exp(2i\\theta\\overline{Z})\\) and \\(\\exp(2i\\theta\\overline{Y})\\)] (recall the analysis in Section V.2). All these operations have the effect of expanding, rather than shrinking the normalizer elements, as seen in the example of the 7-qubit code above. The ability to error-detect at each point thus translates to the question of whether any normalizer element ever becomes a single-body Hamiltonian under this sequence of transformations. It is not hard to see from the above description of the orbit of the normalizer that this can only be the case if in the standard form the normalizer contains a single-body element to begin with. 
This is certainly possible, as indeed shown in our \\(Q_{2X}\\) example (Section V.3), where \\(\\overline{X}=XIII\\). However, it is not the case for most interesting stabilizer codes, i.e., those offering protection against arbitrary single-qubit errors. Such codes must have \"large\" normalizer elements since they may not contain any single-qubit errors to begin with. We conclude that our \\(\\overline{SU(2)}\\) construction using just two-qubit Hamiltonians works for all stabilizer codes of interest, in the sense that it is fault-tolerant with respect to error detection. To complete the repertoire of universal operations the \\(\\overline{\\rm CNOT}\\) gate is still needed. The discussion given in Section VI applies here as well, with the modification that for non-CSS stabilizer codes it is once again necessary to apply two-body central gates. Fault tolerant measurement of the error syndrome can be done using the standard techniques available for stabilizer codes [34].

## IX Summary and Conclusions

In a previous paper [29] we derived conditions for the existence of a class of decoherence-free subspaces (DFSs) defined by having Abelian stabilizers over the Pauli group. In this sequel paper we addressed the problem of universal, fault-tolerant quantum computation on this class of DFSs. The errors in this model are the elements of the stabilizer, and thus are necessarily correlated. This model is complementary to the standard model of quantum computation using stabilizer quantum error correcting codes (QECCs), where the errors that are correctable by the code anticommute with the stabilizer (rather than being part of it). The correlation between errors in the present model implies no spatial symmetry in the system-bath interaction, unlike in most previous studies of computation on DFSs (which considered the \"collective decoherence\" model, and where the stabilizer is non-Abelian). Therefore our present results significantly enlarge the scope of the theory of DFSs. It turns out that even though the DFSs we considered are Pauli-group stabilizer codes, the usual universality constructions do not apply, because of the different error-model we assume. Our alternative construction of a set of universal quantum gates resorts to the early ideas about universal quantum computation, except that our operations all act on _encoded_ (DFS) qubits: we showed how to implement arbitrary single-encoded-qubit operations [the \\(\\overline{SU(2)}\\) group] and \\(\\overline{\\rm CNOT}\\) gates between pairs of encoded qubits. The challenge here was to show how to accomplish this implementation using only physically reasonable Hamiltonians, i.e., those involving no more than two-body interactions. To do so, we switched from the usual point of view treating the normalizer elements (i.e., the operations that preserve the DFS) as gates, to one where these elements are considered as many-body Hamiltonians. We then introduced a procedure whereby these Hamiltonians could be simulated using at most two-body interactions. The gate sequence implementing this simulation does not preserve the DFS except at the beginning and end. Throughout the execution of the gates the DFS states are exposed to the stabilizer-errors. However, we showed that in fact the DFS is transformed into a sequence of stabilizer codes, each of which is capable of detecting and correcting the original stabilizer-errors.
Moreover, we showed that these errors can be diagnosed in a fault-tolerant manner, i.e., without introducing new errors as a result of the associated measurements. In all, we showed how by using this type of hybrid DFS-QECC approach, universal, fault-tolerant quantum computation can be implemented. Our results have implications beyond computation on DFSs. We briefly considered here also the question of whether our techniques can be used to compute fault-tolerantly in the standard stabilizer error-model. We found the answer to be affirmative for the purpose of single-qubit error detection, but not correction. While this is interesting in its own right because of the new universality construction we introduced, it may also have important implications for the question of quantum computation using concatenated codes. The reason is that our construction is highly parallelizable, meaning that it requires a very small number of operations during which the encoded information is exposed to errors. We speculate that this can significantly reduce the threshold for fault-tolerant quantum computation. Finally, an interesting open question is whether the methods developed here are applicable to the problem of universal quantum computation on other classes of DFSs. ## X Acknowledgments This material is based upon work supported by the U.S. Army Research Office under contract/grant number DAAG55-98-1-0371, and in part by NSF CHE-9616615. We would like to thank Dr. Daniel Gottesman for very useful correspondence. Appendix A Why the 4-block implementation of \\(\\overline{\\rm CNOT}\\) is not fault-tolerant for non-CSS stabilizers The construction of the \\(\\overline{\\rm CNOT}\\) in Ref. [34] uses a series of bitwise CNOT's (along with some other operations) acting between pairs of qubits in 4 different blocks. Let us calculate the result of applying bitwise CNOT's on \\(I^{\\otimes K}\\otimes\\overline{X}\\) (i.e., on two out of the four blocks). Recall that for a non-CSS code \\(\\overline{X}=X\\otimes N_{Z}\\otimes M_{X}\\) [Eq. (5.2)]. Therefore it follows from Eq. (6.3) that \\[I^{\\otimes K}\\otimes\\overline{X}\\longmapsto\\left[I\\otimes N_{Z}\\otimes I^{ \\otimes K-1-r}\\right]\\otimes\\overline{X}, \\tag{101}\\] i.e., the \\(Z\\)'s are copied backwards into the first block. Therefore the normalizer on the first block now contains \\(I\\otimes N_{Z}\\otimes I^{\\otimes K-1-r}\\). This element obviously commutes with both the original \\(\\overline{X}\\) and \\(\\overline{Z}\\) [Eq. (5.1)], but does not equal either. Therefore it must be in the original stabilizer \\(Q\\). Turning this around, we see that an error \\(e\\in Q\\) has become part of the new normalizer \\(N_{j}(Q_{j})/Q_{j}\\) which is catastrophic since this error is now undetectable. ## References * [1] H.K. Lo, S. Popescu and T.P. Spiller, _Introduction to Quantum Computation and Information_ (World Scientific, Singapore, 1999). * [2] C. Williams and S. Clearwater, _Explorations in Quantum Computing_ (Springer-Verlag, New York, 1998). * [3] A.M. Steane, Rep. on Prog. in Phys. **61**, 117 (1998), LANL Report No. quant-ph/9708022. * [4] D. Aharonov, Quantum Computation, LANL Report No. quant-ph/9812037. * [5] R. Cleve, An Introduction to Quantum Complexity Theory, LANL Report No. quant-ph/9906111. * [6] R. Alicki and K. Lendi, _Quantum Dynamical Semigroups and Applications_, No. 286 in _Lecture Notes in Physics_ (Springer-Verlag, Berlin, 1987). * [7] K. 
Kraus, _States, Effects and Operations_, _Fundamental Notions of Quantum Theory_ (Academic, Berlin, 1983). * [8] D. Bacon, D.A. Lidar and K.B. Whaley, Phys. Rev. A **60**, 1944 (1999), LANL Report No. quant-ph/9902041. * [9] M.A. Nielsen, C.M. Caves, B. Schumacher and H. Barnum, Proc. Roy. Soc. London Ser. A **454**, 277 (1998), L Report No. quant-ph/9706064. * [10] P.W. Shor, Phys. Rev. A **52**, 2493 (1995). * [11] A.R. Calderbank and P.W. Shor, Phys. Rev. A **54**, 1098 (1996). * [12] A.M. Steane, Phys. Rev. Lett. **77**, 793 (1996). * [13] C.H. Bennett, D.P. DiVincenzo, J.A. Smolin and W.K. Wootters, Phys. Rev. A **54**, 3824 (1996). * [14] A.Yu. Kitaev, Russian Math. Surveys **52**, 1191 (1996). * [15] D. Gottesman, Phys. Rev. A **54**, 1862 (1996), LANL Report No. quant-ph/9604038. * [16] E. Knill and R. Laflamme, Phys. Rev. A **55**, 900 (1997). * [17] A.M. Steane, in _Introduction to Quantum Computation and Information_, edited by H.K. Lo, S. Popescu and T.P. Spiller (World Scientific, Singapore, 1999), p. 184. * [18] P. Zanardi and M. Rasetti, Phys. Rev. Lett. **79**, 3306 (1997), LANL Report No. quant-ph/9705044. * [19] P. Zanardi and M. Rasetti, Mod. Phys. Lett. B **11**, 1085 (1997), LANL Report No. quant-ph/9710041. * [20] L.-M Duan and G.-C. Guo, Phys. Rev. A **57**, 737 (1998). * [21] L.-M Duan and G.-C. Guo, Phys. Lett. A **243**, 265 (1998). * [22] D.A. Lidar, I.L. Chuang and K.B. Whaley, Phys. Rev. Lett. **81**, 2594 (1998), LANL Report No. quant-ph/9807004. * [23] D.A. Lidar, D. Bacon and K.B. Whaley, Phys. Rev. Lett. **82**, 4556 (1999), LANL Report No. quant-ph/9809081. * [24] E. Knill, R. Laflamme and L. Viola, Phys. Rev. Lett. **84**, 2525 (2000), LANL preprint quant-ph/9908066. * [25] L. Duan and G. Guo, Phys. Lett. A **255**, 209 (1999), LANL Report No. quant-ph/9809057. * [26] D. Bacon, J. Kempe, D.A. Lidar and K.B. Whaley, Universal Fault-Tolerant Computation on Decoherence-Free Subspaces, submitted to Phys. Rev. Lett. Available as LANL Report No. quant-ph/9909058. * [27] J. Kempe, D. Bacon, D.A. Lidar, and K.B. Whaley, Theory of Decoherence-Free, Fault-Tolerant, Universal Quantum Computation, submitted to Phys. Rev. A. Available as LANL Report No. quant-ph/0004064. * [28] E.M. Rains, R.H. Hardin, P.W. Shor and N.J.A. Sloane, Phys. Rev. Lett. **79**, 953 (1997). * [29] D.A. Lidar, D. Bacon, J. Kempe, and K.B. Whaley, Decoherence-Free Subspaces for Multiple-Qubit Errors: (I) Characterization, submitted to Phys. Rev. A. Available as LANL Report No. quant-ph/9908064. * [30] P. Zanardi, Phys. Rev. A **60**, R729 (1999), LANL Report No. quant-ph/9901047. * [31] A. Beige, D. Braun, B.Tregenna, and P.L. Knight, Quantum Computing Using Dissipation, LANL preprint quant-ph/0004043. * [32] P.W. Shor, in _Proceedings of the 37th Symposium on Foundations of Computing_ (IEEE Computer Society Press, Los Alamitos, CA, 1996), p. 56, LANL Report No. quant-ph/9605011. * [33] P. Boykin, T. Mor, M. Pulver, V. Roychowdhury, and F. Vatan, On Universal and Fault-Tolerant Quantum Computing, LANL Report No. quant-ph/9906054. * [34] D. Gottesman, Phys. Rev. A **57**, 127 (1997), LANL Report No. quant-ph/9702029. * [35] E. Knill, R. Laflamme and W. Zurek, Proc. Roy. Soc. London Ser. A **454**, 365 (1998), LANL Report No. quant-ph/9702058. * [36] D. Deutsch, A. Barenco and A. Ekert, Proc. Roy. Soc. London Ser. A **449**, 669 (1995). * [37] D.P. DiVincenzo, Phys. Rev. A **51**, 1015 (1995). * [38] T. Sleator and H. Weinfurter, Phys. Rev. Lett. **74**, 4087 (1995). * [39] S. Lloyd, Phys. Rev. 
Lett. **75**, 346 (1995). * [40] D. Gottesman, The Heisenberg Representation of Quantum Computers, LANL Report No. quant-ph/9807006. * [41] A.R. Calderbank, E.M. Rains, P.W. Shor and N.J.A. Sloane, IEEE Trans. Inf. Th. **44**, 1369 (1998), LANL Report No. quant-ph/9608006. * [42] A. Barenco, C.H. Bennett, R. Cleve, D.P. DiVincenzo, N. Margolus, P. Shor, T. Sleator, J. Smolin and H. Weinfurter, Phys. Rev. A **52**, 3457 (1995). * [43] M.E. Rose, _Elementary Theory of Angular Momentum_ (Dover, New York, 1995). * [44] D. Loss and D.P. DiVincenzo, Phys. Rev. A **57**, 120 (1998), LANL Report No. quant-ph/9701055. * [45] N.P. Landsman, Lecture Notes on C\\({}^{*}\\)-algebras, Hilbert C\\({}^{*}\\)-modules and Quantum Mechanics, LANL Report No. math-ph/9807030. * [46] D. Gottesman, Ph.D. thesis, California Institute of Technology, Pasadena, CA, 1997, LANL Report No. quant-ph/9705052. * [47] C. Slichter, _Principles of Magnetic Resonance_, No. 1 in _Springer Series in Solid-State Sciences_ (Springer, Berlin, 1996). * [48] R. Bhatia, _Matrix Analysis_, No. 169 in _Graduate Texts in Mathematics_ (Springer-Verlag, New York, 1997). * [49] D. Aharonov and M. Ben-Or, in _Proceedings of 29th Annual ACM Symposium on Theory of Computing (STOC)_ (ACM, New York, NY, 1997), p. 46, LANL Report No. quant-ph/9611025. * [50] C. Zalka, Threshold estimate for fault-tolerant quantum computing, LANL Report No. quant-ph/9612028. * [51] E. Knill, R. Laflamme and W. Zurek, Science **279**, 342 (1998).

Figure 1: Fault-tolerant circuit implementing \\(\\exp(i\\theta\\overline{Z})\\) for the \\(Q_{2X}\\) subgroup. The transformed \\(\\overline{Z}\\) is shown at each gate, and directly below the original normalizer element that it anticommutes with.

Figure 3: Fault-tolerant implementation of \\(\\exp(i\\theta\\overline{ZX})\\) needed to generate CNOT.

Figure 4: Measurement of the stabilizer element \\(XZYX\\).
Decoherence-free subspaces (DFSs) shield quantum information from errors induced by the interaction with an uncontrollable environment. Here we study a model of correlated errors forming an Abelian subgroup (stabilizer) of the Pauli group (the group of tensor products of Pauli matrices). Unlike previous studies of DFSs, this type of error does not involve any spatial symmetry assumptions on the system-environment interaction. We solve the problem of universal, fault-tolerant quantum computation on the associated class of DFSs. PACS numbers: 03.67.Lx, 03.65.Bz, 03.65.Fd, 89.70.+c
# Frequency decomposition of astrometric signature of planetary systems

Maciej Konacki 1, Department of Geological and Planetary Sciences, California Institute of Technology, 1201 E. California Blvd., Pasadena, CA 91125, USA

Andrzej J. Maciejewski 2, Toruń Centre for Astronomy, Nicolaus Copernicus University, ul. Gagarina 11, 87-100 Toruń, Poland

Alex Wolszczan 3, Department of Astronomy and Astrophysics, Penn State University, University Park, PA 16802, USA, and Toruń Centre for Astronomy, Nicolaus Copernicus University, ul. Gagarina 11, 87-100 Toruń, Poland

Footnote 1: affiliation: e-mail: [email protected]

Footnote 2: affiliation: e-mail: [email protected]

Footnote 3: affiliation: e-mail: [email protected]

## 1 Introduction

One of the most important and challenging goals of the Space Interferometry Mission (SIM, see [http://sim.jpl.nasa.gov/](http://sim.jpl.nasa.gov/)) is the astrometric detection of extrasolar planetary systems, including Earth-like planets around stars from the solar neighborhood. High precision astrometry requires not only advanced technology but also adequately elaborated methods of data analysis. Among others, it is important to develop techniques allowing reliable detection of planetary signatures and extraction of the orbital elements. The aim of this paper is to discuss some of the problems related to this subject. Specifically, we propose a method called the Frequency Decomposition (FD) to detect planets and help to obtain their orbital elements. This method has been successfully used for PSR 1257+12 timing observations (Konacki, Maciejewski & Wolszczan, 1999) and 16 Cygni B radial velocity measurements (Konacki and Maciejewski, 1999). The particular nature of the astrometric observations, however, requires some modifications of our original approach. In the paper we present a theoretical background of the method and an example of its application. The astrometric signal is a superposition of several effects of different magnitude, and a proper analysis of the observations requires at least a rough a priori knowledge of how the different effects contribute to the signal. However, these effects depend on parameters (such as the number of planets and their eccentricities) which are unknown in advance. So what we propose is a two-step analysis: (1) FD to understand the basic properties of the signal (i.e. to determine the number of planets and approximate values of their orbital elements) and (2) a least-squares fit based on a proper model and the starting values of parameters derived from the previous step to refine the parameters and obtain their uncertainties. The basic idea of FD is the following. With few exceptions (proper motion, long period planets) the processes contributing to the signal are periodic. Therefore our astrometric signal can be successfully modeled as a multiple Fourier series plus a polynomial of a certain degree (to account for the proper motion and long period planets). FD is a numerical algorithm to obtain the estimates of frequencies, amplitudes and phases of such a model (Konacki, Maciejewski & Wolszczan, 1999). Let us note that such an approach, contrary to the usual least-squares method, allows us to analyze the data without assuming any physical model. Subsequently we interpret the derived parameters. We decide how many planets are present in the system and calculate their orbital parameters (as one can derive analytical formulae expressing amplitudes and phases as functions of the orbital elements).
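As a schematic illustration of the second half of this idea: once trial frequencies are known (in practice they are located iteratively, e.g. from a periodogram of the residuals), the amplitudes, phases and polynomial coefficients follow from a linear least-squares problem. The sketch below, in Python/NumPy with an invented toy signal, is only a stand-in for the actual FD algorithm of Konacki, Maciejewski & Wolszczan (1999):

```python
import numpy as np

def fd_design_matrix(t, freqs, poly_deg):
    # Columns: polynomial terms plus cos/sin pairs at each trial frequency,
    # i.e. the 'Fourier series + polynomial' model described above.
    cols = [t**k for k in range(poly_deg + 1)]
    for f in freqs:
        cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
    return np.column_stack(cols)

def fd_fit(t, y, freqs, poly_deg=1):
    # Linear least squares for amplitudes/phases at fixed trial frequencies.
    A = fd_design_matrix(t, freqs, poly_deg)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, y - A @ coef

# Toy signal: a linear trend (proper motion) plus one periodic term.
t = np.linspace(0.0, 5.0, 200)
y = 3.0 + 0.7 * t + 0.1 * np.sin(2 * np.pi * 0.8 * t + 0.4)
coef, res = fd_fit(t, y, freqs=[0.8], poly_deg=1)
amp = np.hypot(coef[2], coef[3])     # recovered amplitude, ~0.1
print(amp, np.max(np.abs(res)))
```

With the frequencies, amplitudes and phases in hand, one then interprets them in terms of the number of planets and their orbital elements, as described above.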
This is especially useful for multiple planetary systems where deciphering the number of planets may be tricky (e.g. two planets in circular 2:1 resonant orbits may mimic one planet in an eccentric orbit, see Konacki and Maciejewski, 1999). Our approach can also be helpful while trying to determine whether we observe an astrometric displacement from a planet in a 1-yr orbit or the annual parallax, since the parallactic motion has its own specific Fourier expansion constrained by the SIM orbit. Finally, we can use these findings to perform the 'traditional' least-squares fit. We believe that such an approach allows us to make more justified hypotheses about the data and in consequence leads to reliable results. The plan of our paper is the following. In section 2 we derive a detailed model of the SIM measurements. In section 3 we investigate Fourier properties of the orbital motion. In section 4 we discuss our approach to the analysis of SIM data and finally in section 5 we perform some numerical tests to show how our method works in practice.

## 2 Modeling delays

SIM measures relative positions of stars using Michelson interferometers. A single measurement with SIM gives the projection of the direction to the star \\({\\bf s}\\) onto the interferometer baseline vector \\({\\bf B}\\). The measured quantity is the optical pathlength delay between the two arms of the interferometer (Shao & Baron, 1999) \\[d={\\bf B}\\cdot{\\bf s}+c+\\epsilon, \\tag{1}\\] where \\(c\\) is the zero point of the metrology gauge and \\(\\epsilon\\) represents the measurement uncertainty. The search for extrasolar planets is performed in the so-called narrow angle mode where delays toward two stars (called target and reference) within \\(1^{\\circ}\\) are measured and compared. For this kind of observation the measured quantity becomes the relative delay \\[D={\\bf B}\\cdot({\\bf s}_{1}-{\\bf s}_{2})+\\epsilon, \\tag{2}\\] where \\({\\bf s}_{1}\\) and \\({\\bf s}_{2}\\) are the directions to the target and reference stars, respectively. Such a narrow-angle measurement gives the angular separation between the stars and offers higher accuracy as many errors scale with the angular distance. The direction to a star \\({\\bf S}={\\bf S}(t)\\) from the Solar System Barycenter (SSB) is changing with time due to the proper motion and the presence of companions. These two effects we model in the following way \\[{\\bf S}(t)={\\bf S}_{0}+\\delta{\\bf S}_{\\mu}(t)+\\delta{\\bf S}_{\\rm c}(t), \\tag{3}\\] where \\({\\bf S}_{0}\\) is the direction toward the star at epoch \\(t_{0}\\); \\(\\delta{\\bf S}_{\\mu}(t)\\) and \\(\\delta{\\bf S}_{\\rm c}(t)\\) describe changes due to the proper and orbital motion, respectively.
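For orientation, the observables of Eqs. (1)-(2) amount to a few dot products. A minimal NumPy sketch, with an illustrative 10 m baseline and invented star coordinates, reads:

```python
import numpy as np

def unit_vector(ra, dec):
    # Unit vector toward (ra, dec) in the SSB equatorial frame, cf. Eq. (14)
    return np.array([np.cos(dec) * np.cos(ra),
                     np.cos(dec) * np.sin(ra),
                     np.sin(dec)])

def relative_delay(B, s1, s2, eps=0.0):
    # Narrow-angle observable D = B.(s1 - s2) + eps of Eq. (2)
    return B @ (s1 - s2) + eps

# Illustration: a 10 m baseline and two stars separated by about 1 degree
# (coordinates are invented; angles in radians, delay in meters).
B = 10.0 * np.array([0.0, 1.0, 0.0])
s_target = unit_vector(np.radians(10.0), np.radians(20.0))
s_ref = unit_vector(np.radians(10.5), np.radians(20.5))
print(relative_delay(B, s_target, s_ref))
```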
In order to properly calculate these changes let us assume that the SSB radius vector of each star is given by \\[{\\bf R}={\\bf R}_{0}+\\delta{\\bf R},\\qquad{\\rm where}\\quad\\|\\delta{\\bf R}\\|\\ll\\|{\\bf R}_{0}\\|, \\tag{4}\\] then up to the first order in \\(\\|\\delta{\\bf R}\\|/\\|{\\bf R}_{0}\\|\\) \\[{\\bf S}=\\frac{{\\bf R}}{\\|{\\bf R}\\|}={\\bf S}_{0}+\\delta{\\bf S}^{(1)}, \\tag{5}\\] where \\[{\\bf S}_{0}=\\frac{{\\bf R}_{0}}{\\|{\\bf R}_{0}\\|},\\qquad\\delta{\\bf S}^{(1)}=-\\frac{1}{\\|{\\bf R}_{0}\\|}{\\bf S}_{0}\\times({\\bf S}_{0}\\times\\delta{\\bf R})\\,. \\tag{6}\\] The correction \\(\\delta{\\bf S}^{(1)}\\) can be written in the form \\[\\delta{\\bf S}^{(1)}=\\frac{1}{\\|{\\bf R}_{0}\\|}\\left[\\delta{\\bf R}-{\\bf S}_{0}({\\bf S}_{0}\\cdot\\delta{\\bf R})\\right]=\\frac{1}{\\|{\\bf R}_{0}\\|}\\delta{\\bf R}_{\\perp}, \\tag{7}\\] which means that it only depends on the component of \\(\\delta{\\bf R}\\) perpendicular to \\({\\bf S}_{0}\\). It turns out that sometimes the first order correction is not sufficient. Therefore we need to derive and analyze also the second order term \\(\\delta{\\bf S}^{(2)}\\) given by \\[\\delta{\\bf S}^{(2)}={\\bf S}_{0}\\left[\\frac{3}{2}\\left({\\bf S}_{0}\\cdot\\frac{\\delta{\\bf R}}{\\|{\\bf R}_{0}\\|}\\right)^{2}-\\frac{1}{2}\\left(\\frac{\\|\\delta{\\bf R}\\|}{\\|{\\bf R}_{0}\\|}\\right)^{2}\\right]-\\frac{\\delta{\\bf R}}{\\|{\\bf R}_{0}\\|}\\left({\\bf S}_{0}\\cdot\\frac{\\delta{\\bf R}}{\\|{\\bf R}_{0}\\|}\\right) \\tag{8}\\] If we represent \\(\\delta{\\bf R}\\) as a sum of two components, perpendicular and parallel to \\({\\bf S}_{0}\\), \\[\\delta{\\bf R}=\\delta{\\bf R}_{\\perp}+\\delta{\\bf R}_{\\parallel} \\tag{9}\\] \\(\\delta{\\bf S}^{(2)}\\) can be written as \\[\\delta{\\bf S}^{(2)}=-\\frac{1}{2}\\left(\\frac{\\|\\delta{\\bf R}_{\\perp}\\|}{\\|{\\bf R}_{0}\\|}\\right)^{2}{\\bf S}_{0}-\\frac{\\|\\delta{\\bf R}_{\\parallel}\\|}{\\|{\\bf R}_{0}\\|^{2}}\\delta{\\bf R}_{\\perp} \\tag{10}\\] As we show in section 2.3, \\(\\delta{\\bf S}^{(2)}\\) is especially significant for nearby stars with large proper motions. For such stars we have \\[\\delta{\\bf R}_{\\perp}={\\bf V}_{T}\\,t,\\quad\\delta{\\bf R}_{\\parallel}={\\bf V}_{R}\\,t \\tag{11}\\] where \\({\\bf V}_{T}\\) and \\({\\bf V}_{R}\\) are, respectively, the transverse and radial velocities of the star. Thus if the star has a significant proper motion, through astrometric observations we can detect an angular displacement \\(\\delta\\theta\\) (second term in equation (10)) \\[\\delta\\theta=\\frac{\\|\\delta{\\bf R}_{\\parallel}\\|}{\\|{\\bf R}_{0}\\|}\\frac{\\|\\delta{\\bf R}_{\\perp}\\|}{\\|{\\bf R}_{0}\\|}=\\frac{\\|{\\bf V}_{T}\\|}{\\|{\\bf R}_{0}\\|}\\frac{\\|{\\bf V}_{R}\\|}{\\|{\\bf R}_{0}\\|}\\,t^{2} \\tag{12}\\] due to the radial velocity. This effect is called _perspective acceleration_. The other term in equation (10) has an interesting property. Namely, it can be shown that it does not change the angle between \\({\\bf S}(t)={\\bf S}_{0}+\\delta{\\bf S}^{(1)}(t)+\\delta{\\bf S}^{(2)}(t)\\) and \\({\\bf S}_{0}\\) (i.e. the current and initial positions of the star). In other words, if we had a direct way to measure this angle, we would not observe any effect from that term.
However, since we measure all angles through the equation (1) and model the unit vector toward the star with \\({\\bf S}(t)={\\bf S}_{0}+\\delta{\\bf S}^{(1)}(t)+\\delta{\\bf S}^{(2)}(t)\\), the term \\(-\\frac{1}{2}\\left(\\|\\delta{\\bf R}_{\\perp}\\|/\\|{\\bf R}_{0}\\|\\right)^{2}{\\bf S}_{0}\\) is necessary. Specifically it affects the length of the vector \\({\\bf S}\\) and helps to keep it normalized within the accuracy of the second order approximation. Further details concerning the second order corrections are discussed in section 2.3.

### Local frame and baseline vector orientations

In order to obtain an explicit form of the delay \\(d\\) we need to calculate a scalar product \\({\\bf B}\\cdot{\\bf S}\\). The value of this product does not depend on a chosen reference frame. Thus, depending on our needs we can express vectors on the right hand side of (3) in different ways. It is convenient to introduce a local right-handed orthonormal frame at the point \\({\\bf S}_{0}\\) on the celestial sphere (see Fig. 1). This frame is connected with the classical equatorial spherical coordinates and is defined by the unit vectors \\(\\{{\\bf e}_{\\alpha},{\\bf e}_{\\delta},{\\bf e}_{r}\\}\\). In the SSB equatorial frame the coordinates of these vectors are the following \\[{\\bf e}_{\\alpha}=(-\\sin\\alpha,\\cos\\alpha,0),\\qquad{\\bf e}_{\\delta}=(-\\sin\\delta\\cos\\alpha,-\\sin\\delta\\sin\\alpha,\\cos\\delta), \\tag{13}\\] \\[{\\bf e}_{r}={\\bf S}_{0}=(\\cos\\delta\\cos\\alpha,\\cos\\delta\\sin\\alpha,\\sin\\delta), \\tag{14}\\] where \\((\\alpha,\\delta)=(\\alpha_{0},\\delta_{0})\\) are the right ascension and declination of a star at \\(t_{0}\\). One can determine the relative position of the target and reference star using two interferometers or, as it is planned for SIM, by performing two measurements with one interferometer for two non-parallel orientations of its baseline, \\({\\bf B}_{i},\\,i=1,2\\). For each orientation, the baseline can be represented as a sum of two vectors, \\({\\bf B}_{i}^{\\parallel},{\\bf B}_{i}^{\\perp}\\), parallel and perpendicular to the initial direction toward the target star, \\({\\bf S}_{0}\\). Since we have \\({\\bf S}(t)={\\bf S}_{0}+\\delta{\\bf S}(t)\\) where \\(\\delta{\\bf S}(t)\\) is a displacement tangent to \\({\\bf S}_{0}\\), the delay can be written as \\[d={\\bf B}_{i}\\cdot{\\bf S}(t)+c+\\epsilon={\\bf B}_{i}^{\\parallel}\\cdot{\\bf S}_{0}+{\\bf B}_{i}^{\\perp}\\cdot\\delta{\\bf S}(t)+c+\\epsilon=d_{0}+\\Delta d(t)+c+\\epsilon \\tag{15}\\] where \\(d_{0}={\\bf B}_{i}^{\\parallel}\\cdot{\\bf S}_{0}\\) and \\(\\Delta d(t)={\\bf B}_{i}^{\\perp}\\cdot\\delta{\\bf S}(t)\\). Clearly, from the planet detection point of view the important term is \\(\\Delta d(t)\\). Assuming that the measurement uncertainty, \\(\\epsilon\\), is independent of the baseline orientation, the most favorable orientation of \\({\\bf B}_{i}\\) is \\({\\bf B}_{i}={\\bf B}_{i}^{\\perp}\\). For such an orientation \\(\\Delta d(t)\\) is the largest possible for a given length of the baseline. Moreover, the baseline orientations should be perpendicular. This way the covariance ellipse on the sky will be circular and there will not be any direction on the plane tangent at \\({\\bf S}_{0}\\) in which the measurements are more accurate than in others.
Therefore, for all further considerations we assume that all observations are made with two orthogonal and fixed baseline orientations \\({\\bf B}_{1}\\) and \\({\\bf B}_{2}\\) which are perpendicular to the initial direction toward the target star. Additionally, to simplify the equations we assume that \\({\\bf B}_{1}\\) is parallel to \\({\\bf e}_{\\alpha}\\) and \\({\\bf B}_{2}\\) is parallel to \\({\\bf e}_{\\delta}\\).

### Proper motion, parallax and companions

The proper motion is a projection on the sky of the motion of a star with velocity \\({\\bf V}\\); within the first order approximation only its transverse component \\({\\bf V}_{T}=V_{\\alpha}\\,{\\bf e}_{\\alpha}+V_{\\delta}\\,{\\bf e}_{\\delta}\\) is astrometrically observable. Thus using simple arguments we find that \\[\\delta{\\bf S}_{\\mu}(t)=\\pi\\,(V_{\\alpha}\\,t\\,{\\bf e}_{\\alpha}+V_{\\delta}\\,t\\,{\\bf e}_{\\delta})=\\cos\\delta\\,\\mu_{\\alpha}\\,t\\,{\\bf e}_{\\alpha}+\\mu_{\\delta}\\,t\\,{\\bf e}_{\\delta}, \\tag{16}\\] where \\[V_{\\alpha}={\\bf V}\\cdot{\\bf e}_{\\alpha},\\quad V_{\\delta}={\\bf V}\\cdot{\\bf e}_{\\delta}\\quad\\mbox{and}\\quad\\mu_{\\alpha}=\\frac{{\\rm d}\\alpha}{{\\rm d}t}(t_{0}),\\quad\\mu_{\\delta}=\\frac{{\\rm d}\\delta}{{\\rm d}t}(t_{0}), \\tag{17}\\] and \\(\\pi=1/D^{\\star}\\) where \\(D^{\\star}=\\|{\\bf R}_{0}\\|\\) is the SSB distance to the star. If the star has companions then the proper motion refers to the motion of the mass center of the system and the first order correction due to the orbital motion is given by the following equation \\[\\delta{\\bf S}_{\\rm c}(t)=\\pi\\left[R_{\\alpha}^{\\star}(t){\\bf e}_{\\alpha}+R_{\\delta}^{\\star}(t){\\bf e}_{\\delta}\\right], \\tag{18}\\] where \\({\\bf R}^{\\star}=(R_{\\alpha}^{\\star},R_{\\delta}^{\\star},R_{r}^{\\star})\\) denotes the radius vector of the star with respect to the barycenter of its system in the local frame \\(\\{{\\bf e}_{\\alpha},{\\bf e}_{\\delta},{\\bf e}_{r}\\}\\). If the interferometer is located at \\({\\bf R}_{\\rm O}(t)\\) in the SSB frame then the observed direction toward the star is \\[{\\bf s}(t)={\\bf S}(t)+{\\bf\\Pi}(t), \\tag{19}\\] where \\({\\bf\\Pi}(t)\\) is the parallactic displacement \\[{\\bf\\Pi}(t)=\\pi{\\bf S}_{0}\\times({\\bf S}_{0}\\times{\\bf R}_{\\rm O}(t)) \\tag{20}\\] obtained from the equation (6) by substituting \\(-{\\bf R}_{\\rm O}\\) for \\(\\delta{\\bf R}\\). In the local frame the parallactic displacement can be written in the following form \\[{\\bf\\Pi}(t)=\\pi\\left[\\Pi_{\\alpha}(t){\\bf e}_{\\alpha}+\\Pi_{\\delta}(t){\\bf e}_{\\delta}\\right], \\tag{21}\\] where \\[\\Pi_{\\alpha}(t)=-{\\bf R}_{\\rm O}(t)\\cdot{\\bf e}_{\\alpha}=X_{\\rm O}(t)\\sin\\alpha-Y_{\\rm O}(t)\\cos\\alpha, \\tag{22}\\] \\[\\Pi_{\\delta}(t)=-{\\bf R}_{\\rm O}(t)\\cdot{\\bf e}_{\\delta}=X_{\\rm O}(t)\\sin\\delta\\cos\\alpha+Y_{\\rm O}(t)\\sin\\delta\\sin\\alpha-Z_{\\rm O}(t)\\cos\\delta \\tag{23}\\] and \\((X_{\\rm O}(t),Y_{\\rm O}(t),Z_{\\rm O}(t))\\) are the coordinates of the SSB vector \\({\\bf R}_{\\rm O}(t)\\). The expressions for \\({\\bf S}_{\\mu}(t)\\), \\({\\bf S}_{\\rm c}(t)\\) and \\({\\bf\\Pi}(t)\\) come directly from the formulae (6), (7) and thus represent a first order approximation with respect to \\(\\pi\\).
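The local triad of Eqs. (13)-(14) and the parallactic factors of Eqs. (22)-(23) are straightforward to evaluate numerically. In the following minimal sketch an idealized circular 1 AU orbit stands in for the actual SIM ephemeris (an assumption made purely for illustration):

```python
import numpy as np

def local_frame(ra, dec):
    # Local triad {e_alpha, e_delta, e_r} of Eqs. (13)-(14)
    e_alpha = np.array([-np.sin(ra), np.cos(ra), 0.0])
    e_delta = np.array([-np.sin(dec) * np.cos(ra),
                        -np.sin(dec) * np.sin(ra),
                        np.cos(dec)])
    e_r = np.array([np.cos(dec) * np.cos(ra),
                    np.cos(dec) * np.sin(ra),
                    np.sin(dec)])
    return e_alpha, e_delta, e_r

def parallactic_factors(ra, dec, R_obs):
    # Pi_alpha, Pi_delta of Eqs. (22)-(23); R_obs is the SSB position of
    # the interferometer in AU, so that pi is the usual annual parallax.
    e_alpha, e_delta, _ = local_frame(ra, dec)
    return -R_obs @ e_alpha, -R_obs @ e_delta

# Idealized observer on a circular 1 AU orbit, phase t in years
# (a stand-in for the real SIM ephemeris).
t = 0.3
R_obs = np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t), 0.0])
print(parallactic_factors(np.radians(10.0), np.radians(20.0), R_obs))
```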
Now, using (3)-(20) and assuming that the measurements have been already corrected for aberration and gravitational lensing, we can rewrite the delay equation (1) in the form \\[d=d^{0}+d^{\\mu}\\,t+{\\bf d}^{\\pi}\\cdot{\\bf R}_{\\rm O}(t)+{\\bf d}^{\\rm c}\\cdot{\\bf R}^{\\star}(t)+c+\\epsilon, \\tag{24}\\] where \\[d^{0}={\\bf B}\\cdot{\\bf S}_{0},\\qquad d^{\\mu}=\\pi{\\bf B}\\cdot{\\bf V}_{T}={\\bf b}\\cdot\\mathbf{\\mu},\\qquad\\mathbf{\\mu}=(\\mu_{\\alpha}\\cos\\delta,\\mu_{\\delta}), \\tag{25}\\] \\[{\\bf b}=({\\bf B}\\cdot{\\bf e}_{\\alpha},{\\bf B}\\cdot{\\bf e}_{\\delta}),\\qquad{\\bf d}^{\\pi}=-\\pi\\left[{\\bf B}-({\\bf B}\\cdot{\\bf S}_{0}){\\bf S}_{0}\\right], \\tag{26}\\] \\[{\\bf d}^{\\rm c}=\\pi{\\bf b},\\qquad{\\bf R}^{\\star}(t)=(R_{\\alpha}^{\\star}(t),R_{\\delta}^{\\star}(t)). \\tag{27}\\] Using (24) we obtain a similar formula for the relative delay \\[D=D^{0}+D^{\\mu}\\,t+{\\bf D}^{\\pi}\\cdot{\\bf R}_{\\rm O}(t)+{\\bf d}^{\\rm c}\\cdot{\\bf R}^{\\star}(t)+\\epsilon, \\tag{28}\\] where \\[\\begin{split}& D^{0}={\\bf B}\\cdot({\\bf S}_{0}^{1}-{\\bf S}_{0}^{2}),\\qquad D^{\\mu}={\\bf B}\\cdot\\left(\\pi_{1}{\\bf V}_{T}^{1}-\\pi_{2}{\\bf V}_{T}^{2}\\right),\\\\ &{\\bf D}^{\\pi}=\\left[\\pi_{1}({\\bf B}\\cdot{\\bf S}_{0}^{1}){\\bf S}_{0}^{1}-\\pi_{2}({\\bf B}\\cdot{\\bf S}_{0}^{2}){\\bf S}_{0}^{2}\\right]-(\\pi_{1}-\\pi_{2}){\\bf B},\\end{split} \\tag{29}\\] where the indices \\(1,2\\) refer to the target and reference star respectively. In the above we assumed that only the target star has companions, so that \\({\\bf d}^{\\rm c}\\) refers to the local reference frame of the target star. Formula (28) plays a fundamental role in our considerations. It explicitly shows the structure of the observed signal, which consists of a dominant linear trend modulated by periodicities due to the motion of the interferometer and the companions of the target star. According to our assumptions a single observation is done for two orthogonal baseline orientations \\({\\bf B}_{1}\\) and \\({\\bf B}_{2}\\). Thus a single observation is given as a two-component vector \\({\\bf D}=(D_{1},D_{2})\\) of the relative delays. Each component \\(D_{i}\\) has the form (28) where the coefficients \\(D_{i}^{0}\\), \\(D_{i}^{\\mu}\\), \\({\\bf D}_{i}^{\\pi}\\) and \\({\\bf d}_{i}^{\\rm c}\\) are calculated with the formulae (29) and \\({\\bf B}={\\bf B}_{i}\\), \\(i=1,2\\) respectively.

### Second order corrections

The above considerations represent a first order approximation which is sufficient for most astrometric measurements. However SIM is expected to deliver unprecedented \\(1\\mu\\)as precision in the narrow-angle mode and thus it is important to understand the limitations of the model (29). This can be accomplished by analyzing higher order terms. For the baselines perpendicular to \\({\\bf S}_{0}\\), the second order corrections correspond to the term that is responsible for the actual angular displacement (see equation (12)) and we have \\[\\|\\delta{\\bf S}^{(2)}\\|=\\frac{\\|\\delta{\\bf R}_{\\parallel}\\|}{\\|{\\bf R}_{0}\\|}\\frac{\\|\\delta{\\bf R}_{\\perp}\\|}{\\|{\\bf R}_{0}\\|}\\leq\\left(\\frac{\\|\\delta{\\bf R}\\|}{\\|{\\bf R}_{0}\\|}\\right)^{2} \\tag{30}\\] They can be calculated if we put \\(\\delta{\\bf R}=-{\\bf R}_{\\rm O}(t)+{\\bf R}^{\\star}(t)+{\\bf R}_{V}(t)\\) where \\({\\bf R}_{V}(t)={\\bf V}_{T}\\,t+{\\bf V}_{R}\\,t\\).
### Second order corrections

The above considerations represent a first order approximation which is sufficient for most astrometric measurements. However, SIM is expected to deliver unprecedented \\(1\\mu\\)as precision in the narrow-angle mode, and thus it is important to understand the limitations of the model (29). This can be accomplished by analyzing higher order terms. For the baselines perpendicular to \\({\\bf S}_{0}\\), the second order corrections correspond to the term that is responsible for the actual angular displacement (see equation (12)) and we have \\[\\|\\delta{\\bf S}^{(2)}\\|=\\frac{\\|\\delta{\\bf R}_{\\|}\\|}{\\|{\\bf R}_{0}\\|}\\,\\frac{\\|\\delta{\\bf R}_{\\perp}\\|}{\\|{\\bf R}_{0}\\|}\\leq\\left(\\frac{\\|\\delta{\\bf R}\\|}{\\|{\\bf R}_{0}\\|}\\right)^{2}. \\tag{30}\\] They can be calculated if we put \\(\\delta{\\bf R}=-{\\bf R}_{\\rm O}(t)+{\\bf R}^{\\star}(t)+{\\bf R}_{V}(t)\\), where \\({\\bf R}_{V}(t)={\\bf V}_{T}\\,t+{\\bf V}_{R}\\,t\\). We obtain \\[\\begin{split}&\\frac{1}{\\pi^{2}}\\delta{\\bf S}^{(2)}=-\\|{\\bf R}_{\\rm O}^{\\|}(t)\\|\\,{\\bf R}_{\\rm O}^{\\perp}(t)+\\|{\\bf R}_{\\|}^{\\star}(t)\\|\\,{\\bf R}_{\\perp}^{\\star}(t)+\\|{\\bf R}_{V}^{\\|}(t)\\|\\,{\\bf R}_{V}^{\\perp}(t)+\\\\ &+\\,\\left(\\|{\\bf R}_{\\rm O}^{\\|}(t)\\|\\,{\\bf R}_{\\perp}^{\\star}(t)-\\|{\\bf R}_{\\|}^{\\star}(t)\\|\\,{\\bf R}_{\\rm O}^{\\perp}(t)\\right)+\\left(\\|{\\bf R}_{\\rm O}^{\\|}(t)\\|\\,{\\bf R}_{V}^{\\perp}(t)-\\|{\\bf R}_{V}^{\\|}(t)\\|\\,{\\bf R}_{\\rm O}^{\\perp}(t)\\right)+\\\\ &+\\,\\left(\\|{\\bf R}_{\\|}^{\\star}(t)\\|\\,{\\bf R}_{V}^{\\perp}(t)+\\|{\\bf R}_{V}^{\\|}(t)\\|\\,{\\bf R}_{\\perp}^{\\star}(t)\\right)\\end{split} \\tag{31}\\] and \\[\\begin{split}&\\|\\delta{\\bf S}^{(2)}\\|\\leq\\pi^{2}\\big{(}\\|{\\bf R}_{\\rm O}(t)\\|^{2}+\\|{\\bf R}^{\\star}(t)\\|^{2}+\\|{\\bf R}_{V}(t)\\|^{2}+\\\\ &-\\,2\\,{\\bf R}_{\\rm O}(t)\\cdot{\\bf R}^{\\star}(t)-2\\,{\\bf R}_{\\rm O}(t)\\cdot{\\bf R}_{V}(t)+2\\,{\\bf R}^{\\star}(t)\\cdot{\\bf R}_{V}(t)\\big{)}. \\end{split} \\tag{32}\\] As we can see, there are two types of second order corrections. The first type includes the second order corrections due to the proper motion, parallax and companions. For the proper motion it is easy to calculate its exact value \\[\\Delta S_{\\mu}=\\frac{1}{4}\\frac{\\|{\\bf V}_{T}\\|}{\\|{\\bf R}_{0}\\|}\\frac{\\|{\\bf V}_{R}\\|}{\\|{\\bf R}_{0}\\|}\\,\\Delta T^{2}=\\frac{\\pi^{2}}{4}\\,V_{T}\\,V_{R}\\,\\Delta T^{2}, \\tag{33}\\] where \\(\\Delta T\\) is the time span of the mission (and we assumed that \\(t_{0}\\) is at half of \\(\\Delta T\\)), \\(\\|{\\bf V}_{T}\\|=V_{T}\\) and \\(\\|{\\bf V}_{R}\\|=V_{R}\\). In order to learn about the magnitude of this correction, we calculated its value for the sample of 150 stars from the Hipparcos catalogue with the largest proper motion (see the Internet location [http://astro.estec.esa.nl/SA-general/Projects/Hipparcos/hipparcos.html](http://astro.estec.esa.nl/SA-general/Projects/Hipparcos/hipparcos.html)). The results are shown in Fig. 3. As one can see, \\(\\Delta S_{\\mu}\\) is indeed significant for such stars and without any doubt has to be included in the model. For the remaining corrections, due to the motion of the interferometer (i.e. the second order parallactic correction) and the presence of a companion, we have the following upper limits \\[\\Delta\\Pi^{(2)}\\leq\\pi^{2}\\,\\|{\\bf R}_{\\rm O}(t)\\|^{2}, \\tag{34}\\] \\[\\Delta\\Pi_{\\rm c}\\leq 2\\,\\pi^{2}\\,\\|{\\bf R}_{\\rm O}(t)\\|\\,\\|{\\bf R}^{\\star}(t)\\|, \\tag{35}\\] \\[\\Delta\\Psi_{\\rm c}\\leq 2\\,\\pi^{2}\\,\\|{\\bf R}_{V}(t)\\|\\,\\|{\\bf R}^{\\star}(t)\\|, \\tag{36}\\] where \\(\\Delta\\Pi^{(2)}\\) denotes the pure second order parallactic term and \\(\\Delta\\Pi_{\\rm c}\\), \\(\\Delta\\Psi_{\\rm c}\\) denote the mixed parallax-companion and velocity-companion terms, which for nearby stars with companions have to be taken into account (see Fig. 4). We also find that \\[\\Delta\\Pi_{\\rm c}\\approx 9\\times 10^{-3}\\,\\frac{m_{M_{JUP}}P_{yr}^{2/3}}{d_{pc}^{2}M_{M_{\\odot}}^{2/3}(1-e)}\\,\\mu as\\quad\\mbox{for planetary companions} \\tag{37}\\] \\[\\Delta\\Pi_{\\rm c}\\approx 9.7\\,\\frac{m_{M_{\\odot}}P_{yr}^{2/3}}{d_{pc}^{2}M_{M_{\\odot}}^{2/3}(1+m_{M_{\\odot}}/M_{M_{\\odot}})^{2/3}(1-e)}\\,\\mu as\\quad\\mbox{for stellar companions}\\] and \\[\\Delta\\Psi_{\\rm c}\\approx 0.1\\,V_{100}\\Delta T_{yr}\\,\\frac{m_{M_{JUP}}P_{yr}^{2/3}}{d_{pc}^{2}M_{M_{\\odot}}^{2/3}(1-e)}\\,\\mu as\\quad\\mbox{for planetary companions} \\tag{38}\\] \\[\\Delta\\Psi_{\\rm c}\\approx 102.3\\,V_{100}\\Delta T_{yr}\\,\\frac{m_{M_{\\odot}}P_{yr}^{2/3}}{d_{pc}^{2}M_{M_{\\odot}}^{2/3}(1+m_{M_{\\odot}}/M_{M_{\\odot}})^{2/3}(1-e)}\\,\\mu as\\quad\\mbox{for stellar companions}\\] where \\(V_{100}\\) is the velocity of the star in hundreds of km/s and \\(\\Delta T_{yr}\\) is the time span of the mission in years.
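For orientation, the leading second order term (33) can be evaluated directly. The sketch below is our own illustration, with \\(\\upsilon\\) And-like values taken from Table 1 further down; the result is well above the \\(1\\mu as\\) precision target.

```python
import numpy as np

AU_KM = 1.495978707e8            # kilometers per AU
PC_AU = 206264.806               # AU per parsec
YR_S = 3.15576e7                 # seconds per Julian year
RAD_UAS = 206264.806 * 1e6       # microarcseconds per radian

def delta_S_mu_uas(V_T, V_R, d_pc, dT_yr):
    # second-order proper-motion term of Eq. (33), in microarcseconds;
    # V_T, V_R in km/s, distance in pc, mission time span in years
    D_km = d_pc * PC_AU * AU_KM
    return 0.25 * (V_T * dT_yr * YR_S / D_km) * (V_R * dT_yr * YR_S / D_km) * RAD_UAS

# upsilon And-like values (cf. Table 1): about 22 microarcseconds over 10 yr
print(delta_S_mu_uas(26.7, 27.7, 13.47, 10.0))
```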
Finally, let us briefly discuss the magnitude of the third order terms. Obviously, they will be detectable only for \\(\\delta{\\bf R}\\) due to the proper motion. Thus we can estimate that the resulting angular displacement \\(\\delta S^{(3)}\\) is \\[\\delta S^{(3)}\\sim\\left(\\frac{\\|\\delta{\\bf R}\\|}{\\|{\\bf R}_{0}\\|}\\right)^{3}=\\frac{\\pi^{3}}{8}V^{3}\\,\\Delta T^{3}=\\frac{\\pi^{3}}{8}(V_{R}^{2}+V_{T}^{2})^{3/2}\\,\\Delta T^{3}. \\tag{39}\\] Its value for the sample of stars from Figs. 3-4 is presented in Fig. 5. As one can see, in a few cases this third order term can be larger than \\(1\\mu as\\). The above analysis clearly shows that a variety of second order effects, and possibly in a few cases third order effects, will be detectable with SIM. Although throughout the rest of this paper we use only the first order model presented in sections 2.1-2.2 to simplify our considerations, in real applications a correct model must include higher order effects. They can be easily derived given the theoretical background presented in section 2.

## 3 Orbital motion

Let us assume that the motion of \\(N\\) planets and their star is described in the barycentric system. From the definition of such a system, we have the following relation for the radius vector of the parent star \\[{\\bf R^{\\star}}=-\\frac{1}{M_{\\star}}\\sum_{j=1}^{N}m_{j}{\\bf R}_{j}, \\tag{40}\\] where \\({\\bf R}_{j}\\) are the radius vectors of the planets and \\(M_{\\star}\\), \\(m_{j}\\) are the mass of the star and of the \\(j\\)-th planet, respectively. In the first approximation, the motion of the planets can be described by means of the following equations \\[\\frac{{\\rm d}^{2}{\\bf R}_{j}}{{\\rm d}t^{2}}=-\\mu_{j}\\frac{{\\bf R}_{j}}{||{\\bf R}_{j}||^{3}},\\qquad j=1,\\ldots,N, \\tag{41}\\] where \\[\\mu_{j}=\\frac{GM_{\\star}}{(1+m_{j}/M_{\\star})^{2}},\\] and the motion of the star can be obtained from the equation (40).

### Elliptic motion and its expansion

Solutions \\({\\bf R}_{j}(t)\\) of the equations (41) belong to the family of Keplerian orbits, among which elliptic orbits are of particular interest for further analysis. Therefore, let us recall their basic properties. The radius vector \\({\\bf R}_{j}={\\bf R}(t)\\) of a planet moving in an elliptic orbit is given by \\[{\\bf R}(t)={\\bf P}\\,a(\\cos E(t)-e)+{\\bf Q}\\,a\\sqrt{1-e^{2}}\\sin E(t), \\tag{42}\\] where \\[{\\bf P}={\\bf l}\\,\\cos\\omega+{\\bf m}\\,\\sin\\omega,\\qquad{\\bf Q}=-{\\bf l}\\,\\sin\\omega+{\\bf m}\\cos\\omega,\\] \\[{\\bf l}=\\begin{bmatrix}\\cos\\Omega\\\\ \\sin\\Omega\\\\ 0\\end{bmatrix},\\qquad{\\bf m}=\\begin{bmatrix}-\\cos i\\sin\\Omega\\\\ \\cos i\\cos\\Omega\\\\ \\sin i\\end{bmatrix}.\\] The eccentric anomaly \\(E=E(t)\\) is an implicit function of time through the Kepler equation \\[E-e\\sin E={\\cal M}, \\tag{43}\\] where \\({\\cal M}\\) is the mean anomaly \\[{\\cal M}=n(t-T_{\\rm p}),\\qquad n=\\frac{2\\pi}{P}, \\tag{44}\\] and \\(P\\) is the orbital period of the planet. The remaining parameters \\(a,e,\\omega,\\Omega,T_{p}\\) are the standard Keplerian elements: semi-major axis, eccentricity, longitude of pericenter, longitude of ascending node and time of pericenter.
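The Kepler equation (43) is transcendental, but for \\(e<1\\) a Newton iteration converges rapidly, after which Eq. (42) gives the position directly. The sketch below is our own illustration of this standard computation (angles in radians; times in any consistent unit), not code from the paper.

```python
import numpy as np

def kepler_E(M, e, tol=1e-12, itmax=50):
    # solve Kepler's equation E - e*sin(E) = M by Newton iteration
    E = M + e * np.sin(M)                 # good starting guess for e < 1
    for _ in range(itmax):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if np.all(np.abs(dE) < tol):
            break
    return E

def orbit_R(t, a, e, i, omega, Omega, Tp, P):
    # position vector of Eq. (42) at times t (array), elements as in the text
    l = np.array([np.cos(Omega), np.sin(Omega), 0.0])
    m = np.array([-np.cos(i) * np.sin(Omega), np.cos(i) * np.cos(Omega), np.sin(i)])
    Pv = l * np.cos(omega) + m * np.sin(omega)
    Qv = -l * np.sin(omega) + m * np.cos(omega)
    M = 2.0 * np.pi / P * (np.atleast_1d(t) - Tp)
    E = kepler_E(M, e)
    return a * (np.outer(np.cos(E) - e, Pv)
                + np.outer(np.sqrt(1.0 - e**2) * np.sin(E), Qv))

# example: one orbit of a planet-II-like companion (cf. Table 1)
t = np.linspace(0.0, 1266.6, 4)
print(orbit_R(t, a=2.5, e=0.41, i=np.radians(45.0),
              omega=np.radians(247.7), Omega=np.radians(60.0),
              Tp=0.0, P=1266.6))
```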
The functions \\(\\cos E\\) and \\(\\sin E\\) are periodic with respect to \\({\\cal M}\\) and can be expanded in the Fourier series \\[\\begin{array}{l}\\cos E=-\\frac{1}{2}e+\\sum_{k\\in{\\cal Z}_{0}}\\frac{1}{k}J_{k-1}(ke)\\cos(k{\\cal M}),\\\\ \\sin E=\\sum_{k\\in{\\cal Z}_{0}}\\frac{1}{k}J_{k-1}(ke)\\sin(k{\\cal M}),\\end{array} \\tag{45}\\] where \\(J_{n}(z)\\) is a Bessel function of the first kind of order \\(n\\) and argument \\(z\\); \\({\\cal Z}_{0}\\) denotes the set of all positive and negative integers excluding zero, and \\(e\\in[0,1)\\). Thus, using the equations (42) and (45), we obtain \\[\\widehat{\\bf R}(t)=\\widehat{\\bf R}^{0}+{\\bf A}\\sum_{k\\in{\\cal Z}_{0}}\\frac{1}{k}J_{k-1}(ke)\\cos(k{\\cal M})+{\\bf B}\\sum_{k\\in{\\cal Z}_{0}}\\frac{1}{k}J_{k-1}(ke)\\sin(k{\\cal M}), \\tag{46}\\] where \\[\\widehat{\\bf R}(t)=\\frac{1}{a}{\\bf R},\\quad\\widehat{\\bf R}^{0}=-\\frac{3}{2}{\\bf P}\\,e,\\qquad{\\bf A}={\\bf P},\\qquad{\\bf B}={\\bf Q}\\sqrt{1-e^{2}}.\\] It can be written in the following complex form \\[\\widehat{\\bf R}(t)=\\widehat{\\bf R}^{0}+\\sum_{k\\in{\\cal Z}_{0}}\\mathbf{\\Theta}_{k}\\,{\\rm e}^{{\\rm i}k{\\cal M}}, \\tag{47}\\] where \\[\\mathbf{\\Theta}_{k}=\\frac{1}{2k}\\left(F_{-}(k,e){\\bf A}-{\\rm i}F_{+}(k,e){\\bf B}\\right) \\tag{48}\\] and \\[F_{\\pm}(k,e)=J_{k-1}(ke)\\pm J_{k+1}(ke). \\tag{49}\\] Finally, using (44) and (47), we obtain the Fourier expansion of \\(\\widehat{\\bf R}(t)\\) \\[\\widehat{\\bf R}(t)=\\widehat{\\bf R}^{0}+\\sum_{k\\in{\\cal Z}_{0}}\\mathbf{\\Lambda}_{k}\\,{\\rm e}^{{\\rm i}knt},\\quad\\mbox{where}\\quad\\mathbf{\\Lambda}_{k}=\\mathbf{\\Theta}_{k}{\\rm e}^{-{\\rm i}knT_{\\rm p}}. \\tag{50}\\] Let us define the following quantity \\[{\\cal A}_{k}^{l}=\\frac{\\left|\\Lambda_{k+1}^{l}\\right|}{\\left|\\Lambda_{k}^{l}\\right|},\\quad\\mbox{for}\\quad l=1,2,3,\\quad\\mbox{and}\\quad k>0, \\tag{51}\\] i.e. the ratio of the amplitudes of two successive harmonics, where \\(\\Lambda_{j}^{i}\\) is the \\(i\\)-th component of the vector \\(\\mathbf{\\Lambda}_{j}\\). From the properties of Bessel functions we have \\[{\\cal A}_{k}^{l}(e)=\\frac{k}{k+1}\\sqrt{\\frac{e^{2}(A^{l})^{2}[J_{k+1}^{\\prime}((k+1)e)]^{2}+(B^{l})^{2}\\left[J_{k+1}((k+1)e)\\right]^{2}}{e^{2}(A^{l})^{2}\\left[J_{k}^{\\prime}(ke)\\right]^{2}+(B^{l})^{2}\\left[J_{k}(ke)\\right]^{2}}}, \\tag{52}\\] where \\(J_{n}^{\\prime}(z)\\) indicates the derivative of the Bessel function \\(J_{n}(z)\\) with respect to \\(z\\). It can be proved that for all \\(e\\in(0,1)\\), \\(l\\in\\{1,2,3\\}\\) and \\(k>0\\) we have \\({\\cal A}_{k}^{l}(e)<1\\). This means that the expansion of \\(\\widehat{\\bf R}(t)\\) has an important property: the moduli of successive harmonics of each coordinate of \\(\\widehat{\\bf R}(t)\\) decrease strictly monotonically with \\(k\\).
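This monotone decrease is easy to verify numerically with scipy's Bessel routines. The short sketch below is our own check, not code from the paper; it evaluates \\(\\mathbf{\\Theta}_{k}\\) of Eq. (48) and the amplitude ratios of Eq. (51) for a moderately eccentric planar orbit.

```python
import numpy as np
from scipy.special import jv

def F_pm(k, e):
    # F_+(k,e) and F_-(k,e) of Eq. (49)
    return jv(k - 1, k * e) + jv(k + 1, k * e), jv(k - 1, k * e) - jv(k + 1, k * e)

def Theta(k, e, A, B):
    # complex coefficient of Eq. (48) for harmonic k != 0
    Fp, Fm = F_pm(k, e)
    return (Fm * A - 1j * Fp * B) / (2.0 * k)

# amplitude ratios of Eq. (51) for an orbit with P and Q along the axes
e = 0.41
A = np.array([1.0, 0.0])                         # first two components of P
B = np.sqrt(1.0 - e**2) * np.array([0.0, 1.0])   # components of Q*sqrt(1-e^2)
amps = np.array([np.abs(Theta(k, e, A, B)) for k in range(1, 9)])
print(np.all(amps[1:] / amps[:-1] < 1.0))        # True: strict monotone decrease
```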
### Real expansion

Given the equations from the previous section, it is possible to derive the real expansion for every component of the vector \\(\\widehat{\\mathbf{R}}(t)=(\\widehat{R}_{1}(t),\\widehat{R}_{2}(t),\\widehat{R}_{3}(t))\\). Namely, we can express this vector in the form \\[\\widehat{\\mathbf{R}}(t)=\\widehat{\\mathbf{R}}^{0}+\\sum_{k=1}^{\\infty}\\left(\\mathbf{C}^{k}\\cos(knt)+\\mathbf{S}^{k}\\sin(knt)\\right), \\tag{53}\\] which is more convenient in numerical applications. Using (47), (48) and (50) we find \\[\\begin{split}\\mathbf{C}^{k}&=\\frac{1}{k}\\left[\\mathbf{P}F_{-}(k,e)\\cos(knT_{\\mathrm{p}})-\\mathbf{Q}\\sqrt{1-e^{2}}F_{+}(k,e)\\sin(knT_{\\mathrm{p}})\\right],\\\\ \\mathbf{S}^{k}&=\\frac{1}{k}\\left[\\mathbf{P}F_{-}(k,e)\\sin(knT_{\\mathrm{p}})+\\mathbf{Q}\\sqrt{1-e^{2}}F_{+}(k,e)\\cos(knT_{\\mathrm{p}})\\right].\\end{split} \\tag{54}\\] From the above formulae it immediately follows that the amplitudes of successive harmonics are given by \\[(D_{l}^{k})^{2}=(C_{l}^{k})^{2}+(S_{l}^{k})^{2}=\\frac{1}{k^{2}}\\left[P_{l}^{2}F_{-}(k,e)^{2}+Q_{l}^{2}(1-e^{2})F_{+}(k,e)^{2}\\right],\\qquad l=1,2,3. \\tag{55}\\] In applications it is convenient to have these expressions in an explicit form \\[\\begin{split}(D_{1}^{k})^{2}&=\\frac{1}{k^{2}}\\left[F_{-}(k,e)^{2}\\left(1-\\sin^{2}i\\sin^{2}\\Omega\\right)+\\ F(k,e)\\left(\\cos\\Omega\\sin\\omega+\\cos i\\sin\\Omega\\cos\\omega\\right)^{2}\\right]\\\\ (D_{2}^{k})^{2}&=\\frac{1}{k^{2}}\\left[F_{-}(k,e)^{2}\\left(1-\\sin^{2}i\\cos^{2}\\Omega\\right)+\\ F(k,e)\\left(\\sin\\Omega\\sin\\omega-\\cos i\\cos\\Omega\\cos\\omega\\right)^{2}\\right]\\\\ (D_{3}^{k})^{2}&=\\frac{1}{k^{2}}\\left[F_{-}(k,e)^{2}+\\ F(k,e)\\cos^{2}\\omega\\right]\\sin^{2}i,\\end{split} \\tag{56}\\] where \\[F(k,e)=(1-e^{2})F_{+}(k,e)^{2}-F_{-}(k,e)^{2}.\\]

### Approximate formulae for small and moderate eccentricities

Since small and moderate eccentricities are more probable, it is useful to have approximations of the expressions from the previous section. Namely, using known expansions for Bessel functions we obtain the following formulae \\[F_{\\pm}(k,e)=\\frac{1}{(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}+\\mathcal{O}(e^{k+1}),\\quad\\sqrt{1-e^{2}}F_{+}(k,e)=\\frac{1}{(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}+\\mathcal{O}(e^{k+1}). \\tag{57}\\] Subsequently \\[\\begin{split}\\mathbf{C}^{k}&=\\frac{1}{k(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}[\\mathbf{l}\\cos\\tilde{\\omega}_{k}+\\mathbf{m}\\sin\\tilde{\\omega}_{k}]+\\mathcal{O}(e^{k+1}),\\\\ \\mathbf{S}^{k}&=\\frac{1}{k(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}[-\\mathbf{l}\\sin\\tilde{\\omega}_{k}+\\mathbf{m}\\cos\\tilde{\\omega}_{k}]+\\mathcal{O}(e^{k+1}),\\end{split} \\tag{58}\\] where \\(\\tilde{\\omega}_{k}=\\omega-knT_{\\rm p}\\). Finally, we obtain the expansions for the amplitudes \\[\\begin{split}(D_{1}^{k})^{2}&=\\left[\\frac{1}{k(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}\\right]^{2}\\left(1-\\sin^{2}i\\sin^{2}\\Omega\\right)+{\\cal O}(e^{2k}),\\\\ (D_{2}^{k})^{2}&=\\left[\\frac{1}{k(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}\\right]^{2}\\left(1-\\sin^{2}i\\cos^{2}\\Omega\\right)+{\\cal O}(e^{2k}),\\\\ (D_{3}^{k})^{2}&=\\left[\\frac{1}{k(k-1)!}\\left(\\frac{ke}{2}\\right)^{k-1}\\right]^{2}\\sin^{2}i+{\\cal O}(e^{2k}).\\end{split} \\tag{59}\\] From the above we can obtain the harmonic expansion for a circular orbit. Namely, we find that \\(\\widehat{\\bf R}^{0}={\\bf 0}\\) and \\({\\bf C}^{k}={\\bf S}^{k}={\\bf 0}\\) for \\(k>1\\), while for \\(k=1\\) \\[\\begin{split}{\\bf C}^{1}&=\\left[{\\bf l}\\cos\\tilde{\\omega}+{\\bf m}\\sin\\tilde{\\omega}\\right],\\\\ {\\bf S}^{1}&=\\left[-{\\bf l}\\sin\\tilde{\\omega}+{\\bf m}\\cos\\tilde{\\omega}\\right],\\end{split} \\tag{60}\\] where \\(\\tilde{\\omega}=-nT_{\\rm p}\\). These equations will be especially useful for deriving orbital elements from the coefficients \\({\\bf C}^{k},{\\bf S}^{k}\\) obtained through the analysis of observations. We discuss this issue in the next section.
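As a numerical cross-check of (53)-(54), one can sum the truncated series and compare it with the exact Keplerian position. The sketch below is our own illustration (planar test orbit; coefficients via scipy's Bessel functions); the agreement improves rapidly with the number of harmonics kept, as the monotone decrease above suggests.

```python
import numpy as np
from scipy.special import jv

def CS_coefficients(k, e, Pv, Qv, n, Tp):
    # C^k and S^k of Eq. (54) for harmonic k
    Fp = jv(k - 1, k * e) + jv(k + 1, k * e)
    Fm = jv(k - 1, k * e) - jv(k + 1, k * e)
    c, s = np.cos(k * n * Tp), np.sin(k * n * Tp)
    Ck = (Pv * Fm * c - Qv * np.sqrt(1.0 - e**2) * Fp * s) / k
    Sk = (Pv * Fm * s + Qv * np.sqrt(1.0 - e**2) * Fp * c) / k
    return Ck, Sk

def Rhat_series(t, e, Pv, Qv, n, Tp, kmax=25):
    # truncated Fourier series (53) for the rescaled orbit R/a
    R = np.outer(np.ones_like(t), -1.5 * e * Pv)   # constant term R^0
    for k in range(1, kmax + 1):
        Ck, Sk = CS_coefficients(k, e, Pv, Qv, n, Tp)
        R += np.outer(np.cos(k * n * t), Ck) + np.outer(np.sin(k * n * t), Sk)
    return R

# compare with the exact Keplerian position at one epoch
e, n, Tp = 0.41, 2.0 * np.pi / 1266.6, 0.0
Pv, Qv = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
t = np.array([300.0])
E = n * (t[0] - Tp)                       # Newton iteration for Eq. (43)
for _ in range(50):
    E -= (E - e * np.sin(E) - n * (t[0] - Tp)) / (1.0 - e * np.cos(E))
exact = (np.cos(E) - e) * Pv + np.sqrt(1.0 - e**2) * np.sin(E) * Qv
print(np.max(np.abs(Rhat_series(t, e, Pv, Qv, n, Tp)[0] - exact)))  # small
```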
## 4 Data analysis

For the tests we assume the following SIM observing scenario. At the moments \\(t_{i}\\) and \\(t_{i+1}\\) the relative delay between the target and reference star is measured for two orthogonal baseline orientations \\({\\bf B}_{1}\\) and \\({\\bf B}_{2}\\). Such a measurement gives a two-dimensional delay vector \\({\\bf D}_{i}=(D_{1}(t_{i}),D_{2}(t_{i+1}))\\) and is repeated \\(N\\) times over the time span of the mission, \\(\\Delta T\\). As a result we obtain a two-dimensional time series \\({\\cal D}=\\{{\\bf D}_{i},i=1,\\ldots,N\\}\\). The goal of the data analysis is to detect planetary signatures in \\({\\cal D}\\) and derive the orbital parameters of the planets. We solve this problem in two steps. First we perform Frequency Decomposition (FD) of the time series \\({\\cal D}\\). The aim of this step is to understand the basic properties of \\({\\cal D}\\), i.e. to determine the number of planets and estimate their orbital parameters. The second step is the least-squares analysis based on a specific physical model established in the previous step. Its aim is to obtain accurate values of the orbital elements and their uncertainties. These two steps are described in the following sections.

### Harmonic model

From the theoretical considerations of sections 2 and 3 it follows that the relative delays can be modeled by means of the following expression \\[\\begin{array}{l}\\mathbf{D}=\\widehat{\\mathbf{D}}^{0}+\\widehat{\\mathbf{D}}^{\\mu}\\,t+\\sum_{k=1}^{\\infty}\\left[\\widehat{\\mathbf{C}}^{\\pi,k}\\cos(n_{\\mathrm{O}}kt)+\\widehat{\\mathbf{S}}^{\\pi,k}\\sin(n_{\\mathrm{O}}kt)\\right]+\\\\ \\\\ \\hskip 14.226378pt+\\sum_{j=1}^{N}\\sum_{k=1}^{\\infty}\\left[\\widehat{\\mathbf{C}}^{j,k}\\cos(n_{j}kt)+\\widehat{\\mathbf{S}}^{j,k}\\sin(n_{j}kt)\\right],\\end{array} \\tag{61}\\] where \\(N\\) denotes the number of planets, and \\(n_{\\mathrm{O}}\\) and \\(n_{j}\\) denote the mean motion of SIM and of the \\(j\\)-th planet, respectively. This equation follows directly from the fact that the motion of the interferometer and the motion of the planets can be expanded into Fourier series. Consequently, the above formula is used to describe \\({\\cal D}\\), and a special numerical algorithm is used to obtain the parameters \\[\\begin{array}{l}\\widehat{\\mathbf{D}}^{0}=\\begin{bmatrix}\\widehat{D}_{1}^{0}\\\\ \\widehat{D}_{2}^{0}\\end{bmatrix},\\quad\\widehat{\\mathbf{D}}^{\\mu}=\\begin{bmatrix}\\widehat{D}_{1}^{\\mu}\\\\ \\widehat{D}_{2}^{\\mu}\\end{bmatrix},\\quad\\widehat{\\mathbf{C}}^{\\pi,k}=\\begin{bmatrix}\\widehat{C}_{1}^{\\pi,k}\\\\ \\widehat{C}_{2}^{\\pi,k}\\end{bmatrix},\\quad\\widehat{\\mathbf{S}}^{\\pi,k}=\\begin{bmatrix}\\widehat{S}_{1}^{\\pi,k}\\\\ \\widehat{S}_{2}^{\\pi,k}\\end{bmatrix},\\end{array} \\tag{62}\\] \\[\\begin{array}{l}\\widehat{\\mathbf{C}}^{j,k}=\\begin{bmatrix}\\widehat{C}_{1}^{j,k}\\\\ \\widehat{C}_{2}^{j,k}\\end{bmatrix},\\qquad\\widehat{\\mathbf{S}}^{j,k}=\\begin{bmatrix}\\widehat{S}_{1}^{j,k}\\\\ \\widehat{S}_{2}^{j,k}\\end{bmatrix},\\quad n_{j},\\quad n_{O},\\qquad j=1,\\ldots,N,\\quad k=1,\\ldots,K_{j}. \\tag{63}\\] This algorithm has been described in great detail in Konacki, Maciejewski & Wolszczan (1999). Let us only note here that in practice, due to the limited accuracy of the measurements and the fact that the amplitudes of subsequent harmonics decrease monotonically, the expansions of (61) are finite, and the (finite) number of harmonics \\(K_{j}\\) depends mainly on the orbital eccentricities and measurement errors.
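Once candidate basic frequencies are known (e.g. from a periodogram), fitting the coefficients of (61) at fixed frequencies is a linear least-squares problem. The sketch below is a schematic illustration of this step for one delay component with our own toy data; it is not the actual algorithm of Konacki, Maciejewski & Wolszczan (1999).

```python
import numpy as np

def harmonic_design_matrix(t, freqs, K):
    # columns: 1, t, then cos(k*n_j*t), sin(k*n_j*t) for each base
    # frequency n_j and harmonic k = 1..K (cf. Eq. (61))
    cols = [np.ones_like(t), t]
    for n in freqs:
        for k in range(1, K + 1):
            cols += [np.cos(k * n * t), np.sin(k * n * t)]
    return np.stack(cols, axis=1)

def fit_harmonics(t, D, freqs, K):
    # linear least-squares fit of offset, trend and harmonic coefficients
    X = harmonic_design_matrix(t, freqs, K)
    coef, *_ = np.linalg.lstsq(X, D, rcond=None)
    return coef

# toy usage: trend plus one 'planet' at period 1266.6 d with two harmonics
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 3652.5, 200))
n1 = 2.0 * np.pi / 1266.6
D = 5.0 + 1e-3 * t + 3e-8 * np.cos(n1 * t) + 1e-8 * np.sin(2 * n1 * t)
D += rng.normal(0.0, 5e-11, t.size)
print(fit_harmonics(t, D, [n1], K=2)[:2])   # recovered offset and trend
```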
Using our algorithm we can determine the number of planets \\(N\\), the number of detectable harmonics \\(K_{j}\\), and the basic frequencies and coefficients of (61). In fact, we can assume that the mean motion of SIM \\(n_{\\mathrm{O}}\\), as well as the other elements of its orbit, are known. In other words, \\(\\mathbf{R}_{O}(t)\\) is known and we can use the following more constrained version of the formula (61) \\[\\mathbf{D}=\\widehat{\\mathbf{D}}^{0}+\\widehat{\\mathbf{D}}^{\\mu}\\,t+\\widehat{\\mathbb{D}}^{\\pi}\\cdot\\mathbf{R}_{\\mathrm{O}}(t)+\\sum_{j=1}^{N}\\sum_{k=1}^{K_{j}}\\left[\\widehat{\\mathbf{C}}^{j,k}\\cos(n_{j}kt)+\\widehat{\\mathbf{S}}^{j,k}\\sin(n_{j}kt)\\right]. \\tag{64}\\] This way, instead of the several parameters \\(\\widehat{\\mathbf{C}}^{\\pi,k},\\widehat{\\mathbf{S}}^{\\pi,k},n_{O}\\) we have six parameters, since \\[\\widehat{\\mathbb{D}}^{\\pi}=\\begin{bmatrix}\\widehat{D}_{11}^{\\pi},\\widehat{D}_{12}^{\\pi},\\widehat{D}_{13}^{\\pi}\\\\ \\widehat{D}_{21}^{\\pi},\\widehat{D}_{22}^{\\pi},\\widehat{D}_{23}^{\\pi}\\end{bmatrix}. \\tag{65}\\] In order to have a better understanding of the parameters of (64), let us express them by means of the quantities introduced in section 2. Using (28), (29) and (50) we find that \\[\\widehat{\\mathbf{D}}^{0}=\\begin{bmatrix}\\widehat{D}_{1}^{0}\\\\ \\widehat{D}_{2}^{0}\\end{bmatrix}=\\begin{bmatrix}D_{1}^{0}\\\\ D_{2}^{0}\\end{bmatrix}-\\pi_{1}\\sum_{j=1}^{N}\\frac{m_{j}}{M_{\\star}}a_{j}\\begin{bmatrix}B_{1}\\,\\widehat{R}_{1}^{0,j}\\\\ B_{2}\\,\\widehat{R}_{2}^{0,j}\\end{bmatrix},\\] \\[\\widehat{\\mathbf{D}}^{\\mu}=\\begin{bmatrix}\\widehat{D}_{1}^{\\mu}\\\\ \\widehat{D}_{2}^{\\mu}\\end{bmatrix}=\\begin{bmatrix}D_{1}^{\\mu}\\\\ D_{2}^{\\mu}\\end{bmatrix},\\qquad\\widehat{\\mathbb{D}}^{\\pi}=\\begin{bmatrix}\\widehat{D}_{11}^{\\pi},\\widehat{D}_{12}^{\\pi},\\widehat{D}_{13}^{\\pi}\\\\ \\widehat{D}_{21}^{\\pi},\\widehat{D}_{22}^{\\pi},\\widehat{D}_{23}^{\\pi}\\end{bmatrix}=\\begin{bmatrix}\\mathbf{D}_{1}^{\\pi}\\\\ \\mathbf{D}_{2}^{\\pi}\\end{bmatrix},\\] where \\(D_{i}^{0}\\), \\(D_{i}^{\\mu}\\) and \\(\\mathbf{D}_{i}^{\\pi}\\) are the quantities defined by (29) and calculated for \\(\\mathbf{B}=\\mathbf{B}_{i}\\), \\(i=1,2\\); \\(\\widehat{\\mathbf{R}}^{0,j}\\) denotes \\(\\widehat{\\mathbf{R}}^{0}\\) in the expansion (53) for the orbit of the \\(j\\)-th planet and \\(a_{j}\\) is the semi-major axis of the \\(j\\)-th planet. The coordinates of \\(\\widehat{\\mathbf{R}}^{j}=(\\widehat{R}_{1}^{j},\\widehat{R}_{2}^{j},\\widehat{R}_{3}^{j})\\) are expressed in the local frame \\(\\{\\mathbf{e}_{\\alpha},\\mathbf{e}_{\\delta},\\mathbf{e}_{r}\\}\\). This way \\[\\begin{split}\\mathbf{d}^{c}\\cdot\\widehat{\\mathbf{R}}^{j}&=\\pi_{1}(\\mathbf{B}_{1}\\cdot\\mathbf{e}_{\\alpha})\\widehat{R}_{1}^{j}=\\pi_{1}B_{1}\\widehat{R}_{1}^{j},\\quad\\text{for }\\mathbf{B}_{1},\\\\ \\mathbf{d}^{c}\\cdot\\widehat{\\mathbf{R}}^{j}&=\\pi_{1}(\\mathbf{B}_{2}\\cdot\\mathbf{e}_{\\delta})\\widehat{R}_{2}^{j}=\\pi_{1}B_{2}\\widehat{R}_{2}^{j},\\quad\\text{for }\\mathbf{B}_{2},\\end{split} \\tag{66}\\] since \\(\\mathbf{B}_{1}=B_{1}\\,\\mathbf{e}_{\\alpha}\\) and \\(\\mathbf{B}_{2}=B_{2}\\,\\mathbf{e}_{\\delta}\\), where \\(B_{i}\\) is the length of the baseline vector \\(\\mathbf{B}_{i}\\).
Similarly we have \\[\\widehat{\\mathbf{C}}^{j,k}=-\\pi_{1}\\frac{m_{j}}{M_{\\star}}a_{j}\\begin{bmatrix}B_{1}\\,C_{1}^{j,k}\\\\ B_{2}\\,C_{2}^{j,k}\\end{bmatrix},\\qquad\\widehat{\\mathbf{S}}^{j,k}=-\\pi_{1}\\frac{m_{j}}{M_{\\star}}a_{j}\\begin{bmatrix}B_{1}\\,S_{1}^{j,k}\\\\ B_{2}\\,S_{2}^{j,k}\\end{bmatrix}, \\tag{67}\\] where \\(\\mathbf{C}^{j,k}=(C_{1}^{j,k},C_{2}^{j,k},C_{3}^{j,k})\\) and \\(\\mathbf{S}^{j,k}=(S_{1}^{j,k},S_{2}^{j,k},S_{3}^{j,k})\\) are the \\(\\mathbf{C}^{k}\\) and \\(\\mathbf{S}^{k}\\) coefficients of the expansion (53) for the \\(j\\)-th planet, expressed in the local frame. Now let us assume that after performing FD we obtained the parameters \\[\\widehat{\\mathbf{D}}^{0},\\quad\\widehat{\\mathbf{D}}^{\\mu},\\quad\\widehat{\\mathbb{D}}^{\\pi},\\quad\\widehat{\\mathbf{C}}^{j,k},\\quad\\widehat{\\mathbf{S}}^{j,k},\\quad j=1,\\ldots,N,\\quad k=1,\\ldots,K_{j}, \\tag{68}\\] where \\(N\\) is the number of planets (i.e. the number of basic frequencies detected) and \\(K_{j}\\) is the number of detected harmonics for each planet. The first question is whether we can derive the canonical parameters like \\(\\alpha,\\delta,\\mu_{\\alpha},\\mu_{\\delta},\\pi\\) for the target and reference star from \\(\\widehat{\\mathbf{D}}^{0},\\widehat{\\mathbf{D}}^{\\mu},\\widehat{\\mathbb{D}}^{\\pi}\\). Unfortunately, this is not possible, at least not without additional assumptions; it is a direct consequence of the relative measurements we perform. Thus we can only derive \\((\\mathbf{S}_{0}^{1}-\\mathbf{S}_{0}^{2})\\) as well as the differential proper motion and the differential parallax. In fact, it is possible to choose a reference star such that the differential parallactic displacement has an amplitude close to zero. It suffices to have a reference star with a parallax similar to that of the target star since, by assumption, these two stars are close to each other on the sky and their parallactic displacements are very similar. This way we can remove a strong parallactic component from our observations. On the other hand, we do not need the exact values of the canonical parameters. We only have to properly remove the respective effects in order to be able to detect putative planets. The remaining question is whether we can derive the orbital elements from \\(\\widehat{\\mathbf{C}}^{j,k},\\widehat{\\mathbf{S}}^{j,k}\\). This task is relatively simple. Namely, given that we have detected at least two terms (the basic frequency and its first harmonic), all orbital elements can be derived from the equations of section 3.3. For planets with only the basic frequency detectable, we assume a circular orbit and then the other elements can also be found. We demonstrate this procedure in section 5.

### Standard model

The harmonic model allows us to describe the data without a priori knowledge of the target star parameters and its planetary system. At the same time it allows us to derive all the important information, especially the number of planets and estimates of their orbital elements. With such knowledge we are ready to perform the standard least-squares analysis, in which we must specify the model and supply good initial conditions for the fit.
The standard model has the following form \\[{\\bf D}=\\widehat{\\bf D}^{0}+\\widehat{\\bf D}^{\\mu}\\,t+\\widehat{\\mathbb{D}}^{\\pi}\\cdot{\\bf R}_{\\rm O}(t)+\\sum_{j=1}^{N}\\begin{bmatrix}\\widehat{a}_{j}\\,B_{1}\\,\\widehat{R}_{1}(t,T_{{\\rm p},j},e_{j},i_{j},\\omega_{j},\\Omega_{j},P_{j})\\\\ \\widehat{a}_{j}\\,B_{2}\\,\\widehat{R}_{2}(t,T_{{\\rm p},j},e_{j},i_{j},\\omega_{j},\\Omega_{j},P_{j})\\end{bmatrix}, \\tag{69}\\] where \\(\\widehat{R}_{1},\\widehat{R}_{2}\\) are the coordinates of the Keplerian motion vector \\(\\widehat{\\bf R}\\) given by the equation (46). The parameters of such a model are \\[\\widehat{\\bf D}^{0},\\quad\\widehat{\\bf D}^{\\mu},\\quad\\widehat{\\mathbb{D}}^{\\pi},\\quad\\widehat{a}_{j},T_{{\\rm p},j},e_{j},i_{j},\\omega_{j},\\Omega_{j},P_{j},\\quad j=1,\\ldots,N, \\tag{70}\\] where \\(T_{{\\rm p},j},e_{j},i_{j},\\omega_{j},\\Omega_{j}\\) are the Keplerian elements of the \\(j\\)-th planet, \\(P_{j}\\) is its orbital period and the parameter \\(\\widehat{a}_{j}\\) is defined in the following way \\[\\widehat{a}_{j}=\\pi_{1}a_{j}\\frac{m_{j}}{M_{\\star}}. \\tag{71}\\]

## 5 Numerical tests

For the tests we chose \\(\\upsilon\\) And with its two outer planets (Butler et al. 1999). All real and assumed astrometric and orbital parameters are presented in Table 1. We also found a reference star, HD 10032, which is about \\(0\\fdg7\\) away from \\(\\upsilon\\) And. Its astrometric parameters are in Table 2. SIM is assumed to move in an orbit similar to the orbit of the Earth (see Table 3). For these stars we simulated \\(N=200\\) measurements of relative delays \\((D_{1},D_{2})\\) (for the two baseline vector orientations \\({\\bf B}_{1}\\) and \\({\\bf B}_{2}\\)) randomly distributed over a time span of 10 years. In both cases the length of the baseline vector was 10 meters and a measurement error with \\(\\sigma\\approx 50\\) pm was assumed (i.e. \\(1\\mu as\\) in angular displacement). Since by assumption \\({\\bf B}_{1}\\) is parallel to \\({\\bf e}_{\\alpha}\\) and \\({\\bf B}_{2}\\) to \\({\\bf e}_{\\delta}\\), the delay \\(D_{1}\\) corresponds to an angular distance between \\(\\upsilon\\) And and HD 10032 in right ascension and \\(D_{2}\\) to an angular distance in declination.

### Second order effects

Before we proceed with the analysis it is interesting to discuss the second order effects present in the simulated observations. Since \\(\\upsilon\\) And is a nearby star with a large proper motion, we can expect a significant contribution from this star (our reference star HD 10032 is quite distant and thus all second order effects are mainly due to \\(\\upsilon\\) And). One can analyze these effects by means of the formulae from section 2.3, or simply apply the standard model (69) with the parameters precisely computed from the assumed parameters of \\(\\upsilon\\) And, HD 10032 and SIM (Tables 1-3) and examine the resulting residuals. This procedure gives the residuals presented in Fig. 6. As we can see, the second order effects are dominated by a variation quadratic in time (Fig. 6 a,b). This effect is due to the perspective acceleration \\[\\frac{\\pi^{2}}{4}\\,\\|{\\bf V}_{R}\\|\\,{\\bf V}_{T}\\,\\Delta T^{2}; \\tag{72}\\] thus if we assume that the radial velocities of \\(\\upsilon\\) And and HD 10032 are zero, it disappears and reveals another second order effect of smaller magnitude (see Fig. 6 c,d).
This effect is due to the following term \\[\\pi^{2}\\,\\|{\\bf R}_{\\rm O}^{\\|}(t)\\|\\,{\\bf R}_{V}^{\\perp}(t)-\\pi^{2}\\,\\|{\\bf R}_{V}^{\\|}(t)\\|\\,{\\bf R}_{\\rm O}^{\\perp}(t), \\tag{73}\\] i.e. the mixed term of the parallax and the motion of the star. Finally, let us note that if we allow the parameters of the model (69) to vary, as is usual in a least-squares fit, this first order model will try to minimize the residuals, as presented in Fig. 6 e,f.

### Frequency Decomposition and standard model

The first step in our analysis of the simulated relative delay measurements is the Frequency Decomposition (FD). Here we model the data with the less constrained formula (61) to show how the parallactic motion contributes to the data. From the assumed parameters of the stars and SIM we can compute the amplitudes of the basic terms and their harmonics. They are shown in Fig. 7. The main idea of FD is the successive removal of effects of decreasing magnitude (for all details of the method see Konacki, Maciejewski & Wolszczan (1999)). This process is demonstrated in Figs. 8 and 9 for \\(D_{1}\\) (i.e. for the delays measured with the baseline vector orientation \\({\\bf B}_{1}\\)). As one can see, the most significant part of the delay variations comes from the proper motion of both stars (Fig. 8a); then we can detect the basic term of the parallactic motion (Fig. 8b), the basic term of planet II (Fig. 8c), the first harmonic of the parallactic motion (Fig. 8d), the basic term of planet I (Fig. 9e), the first harmonic of planet II (Fig. 9f), the second harmonic of planet II (Fig. 9g) and finally the first harmonic of planet I (Fig. 9h). The values of the respective parameters \\(\\widehat{\\bf S}^{j,k},\\widehat{\\bf C}^{j,k}\\) are presented in Table 4. They are sufficient to derive initial estimates of the orbital elements of planets I and II. Namely, from the approximate equations of section 3.3 we can find that \\[e_{j}=2\\,\\sqrt{\\frac{(\\widehat{C}_{1}^{j,2})^{2}+(\\widehat{S}_{1}^{j,2})^{2}}{(\\widehat{C}_{1}^{j,1})^{2}+(\\widehat{S}_{1}^{j,1})^{2}}},\\] \\[\\frac{\\widehat{S}_{1}^{j,1}}{B_{1}}\\,\\frac{\\widehat{C}_{2}^{j,1}}{B_{2}}-\\frac{\\widehat{S}_{2}^{j,1}}{B_{2}}\\,\\frac{\\widehat{C}_{1}^{j,1}}{B_{1}}=-\\widehat{a}_{j}^{2}\\,\\cos i_{j},\\] \\[\\begin{split}&\\left(\\frac{\\widehat{C}_{1}^{j,1}}{B_{1}}\\right)^{2}+\\left(\\frac{\\widehat{C}_{2}^{j,1}}{B_{2}}\\right)^{2}-\\left(\\frac{\\widehat{S}_{1}^{j,1}}{B_{1}}\\right)^{2}-\\left(\\frac{\\widehat{S}_{2}^{j,1}}{B_{2}}\\right)^{2}=\\widehat{a}_{j}^{2}\\,\\cos 2\\tilde{\\omega}_{1,j}\\,\\sin^{2}i_{j},\\\\ &\\left(\\frac{\\widehat{S}_{1}^{j,1}}{B_{1}}\\right)^{2}+\\left(\\frac{\\widehat{C}_{1}^{j,1}}{B_{1}}\\right)^{2}-\\left(\\frac{\\widehat{S}_{2}^{j,1}}{B_{2}}\\right)^{2}-\\left(\\frac{\\widehat{C}_{2}^{j,1}}{B_{2}}\\right)^{2}=\\widehat{a}_{j}^{2}\\,\\cos 2\\Omega_{j}\\,\\sin^{2}i_{j},\\end{split} \\tag{74}\\] \\[\\frac{\\widehat{C}_{1}^{j,1}}{B_{1}}\\,\\frac{\\widehat{S}_{1}^{j,1}}{B_{1}}+\\frac{\\widehat{C}_{2}^{j,1}}{B_{2}}\\,\\frac{\\widehat{S}_{2}^{j,1}}{B_{2}}=-\\widehat{a}_{j}^{2}\\,\\sin\\tilde{\\omega}_{1,j}\\cos\\tilde{\\omega}_{1,j}\\,\\sin^{2}i_{j},\\] \\[\\frac{\\widehat{C}_{1}^{j,1}}{B_{1}}\\,\\frac{\\widehat{C}_{2}^{j,1}}{B_{2}}+\\frac{\\widehat{S}_{1}^{j,1}}{B_{1}}\\,\\frac{\\widehat{S}_{2}^{j,1}}{B_{2}}=\\widehat{a}_{j}^{2}\\,\\sin\\Omega_{j}\\cos\\Omega_{j}\\,\\sin^{2}i_{j},\\] and, together with the analogous formulae for the first harmonics, easily determine the orbital elements. They are shown in Table 5.
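The first relation of (74) can be applied directly to the fitted amplitudes of Table 4. A short numerical illustration (our own check) recovers eccentricities close to the FD estimates of Table 5:

```python
import numpy as np

def ecc_from_harmonics(C1, S1, C2, S2):
    # eccentricity estimate from the first relation of Eq. (74):
    # twice the amplitude ratio of the second and first harmonic
    return 2.0 * np.sqrt((C2**2 + S2**2) / (C1**2 + S1**2))

# amplitudes of the D1 component from Table 4 (in meters)
print(ecc_from_harmonics(-0.743e-11, 0.586e-8, 0.198e-9, -0.683e-9))  # planet I
print(ecc_from_harmonics(0.282e-7, 0.790e-9, -0.301e-8, 0.466e-8))    # planet II
```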
As one can see, this procedure gives quite accurate values of the orbital elements. However, we use them only as initial values for the least-squares fit with the standard model (69) to obtain the final parameters presented in Table 6.

### Conclusions

The above test demonstrates that our approach allows us to determine the orbital elements with high confidence, at least in this particular case. It is interesting to note that with FD we are able to estimate the orbital elements without using all the information present in the simulated data set (the residuals from Fig. 9h are well above the assumed measurement error). Surprisingly, this estimation is quite accurate and, as demonstrated, is perfectly sufficient as an initial guess of the parameters for the standard least-squares analysis. This is a very promising result, since the difficult problem of good initial conditions is usually solved by means of quasi-global techniques which are very demanding numerically and may still lead to unreliable results. Thus we believe that our approach constitutes a safe and efficient solution to the problem of planet detection with SIM. In our forthcoming paper we will thoroughly analyze the method on more realistic simulations and a variety of different planetary systems.

## References

* (1) Konacki, M., Maciejewski, A. J. & Wolszczan, A. 1999, ApJ, 513, 471
* (2) Konacki, M. & Maciejewski, A. J. 1999, ApJ, 518, 442
* (3) Konacki, M. & Maciejewski, A. J. 1999, Planets Outside the Solar System: Theory and Observations, NATO Science Series C (J.-M. Mariotti and D. Alloin, editors), 532, p. 249
* (4) Shao, M. & Baron, R. 1999, Working on the Fringe: An International Conference on Optical and IR Interferometry from Ground and Space, ASP Conference Series (S. Unwin and R. Stachnik, editors), 194, p. 107
* (5) Butler, R. P., Marcy, G. W., Fischer, D. A., Brown, T. M., Contos, A. R., Korzennik, S. G., Nisenson, P. & Noyes, R. W. 1999, ApJ, 526, 916

Figure 1: Solar System Barycenter (SSB) reference frame and the celestial sphere. \\({\\bf S}_{0}\\) is the unit vector toward the star with spherical coordinates \\((\\alpha,\\delta)\\) and \\(\\|{\\bf R}_{0}\\|\\) is the distance to the star.

Figure 2: Tangent space at \\({\\bf S}_{0}\\) where \\({\\bf e}_{\\alpha}\\), \\({\\bf e}_{\\delta}\\) and \\({\\bf e}_{r}\\) are the unit vectors of the local frame.

Figure 3: \\(\\Delta S_{\\mu}\\) for 150 stars with the largest proper motion from the Hipparcos catalogue. The solid lines represent \\(\\Delta S_{\\mu}\\) for 1, 10 and 100 parsecs as a function of \\(V_{T}\\,V_{R}\\). A time span of the mission \\(\\Delta T=10yr\\) was assumed.

Figure 5: \\(\\Delta S^{(3)}\\) for 150 stars with the largest proper motion from the Hipparcos catalogue. The solid lines represent \\(\\Delta S^{(3)}\\) for 1, 10 and 100 parsecs as a function of \\(V\\). A time span of the mission \\(\\Delta T=10yr\\) was assumed.

Figure 6: Second order effects in the simulated delays for the relative measurements between \\(\\upsilon\\) And and HD 10032 for the baseline vector orientations \\({\\bf B}_{1}\\) (_a_) and \\({\\bf B}_{2}\\) (_b_); (_c,d_) the same effects when the radial velocity of both stars is zero; (_e,f_) the residuals from the least-squares fit of the first order model (69) to the simulated data used in (_a,b_).

Figure 7: The amplitudes of subsequent harmonic terms for the relative delays \\(D_{1},D_{2}\\) (left and right panel respectively) corresponding to planet I (_a_), planet II (_b_) and the parallactic motion (_c_).
Figure 8: Subsequent steps of the Frequency Decomposition for the simulated relative delay measurements between \\(\\upsilon\\) And and HD 10032 corresponding to the baseline vector orientation \\({\\bf B}_{1}\\). Left panel contains the residuals after removal of all components from the steps above. Right panel contains normalized periodograms of these residuals. Figure 9: Continuation of Fig. 8 \\begin{table} \\begin{tabular}{l c} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & \\multicolumn{1}{c}{\\(\\upsilon\\) And} \\\\ \\hline Right ascension, \\(\\alpha\\) (J1991.25) & 01\\({}^{h}\\)36\\({}^{m}\\)47\\(\\fs\\)98 \\\\ Declination, \\(\\delta\\) (J1991.25) & 41\\({}^{\\circ}\\)24′23\\(\\farcs\\)00 \\\\ Proper motion in \\(\\alpha\\), \\(\\mu_{\\alpha}\\cos\\delta\\) (mas/yr) & -172.57 \\\\ Proper motion in \\(\\delta\\), \\(\\mu_{\\delta}\\) (mas/yr) & -381.01 \\\\ Parallax, \\(\\pi\\) (mas) & 74.25 \\\\ Distance, \\(d_{pc}\\) (pc) & 13.47 \\\\ Transverse velocity, \\(V_{T}\\) (km/s) & 26.7 \\\\ Radial velocity, \\(V_{R}\\) (km/s) & -27.7 \\\\ Mass, \\(M_{\\star}\\) (\\(M_{\\odot}\\)) & 1.3 \\\\ \\hline Orbital elements & Planet I & Planet II \\\\ \\hline Semi-major axis, \\(a\\) (AU) & 0.83 & 2.5 \\\\ Semi-major axis, \\(\\widehat{a}=\\pi\\,a\\,m/M_{\\star}\\) (mas) & 0.133 & 0.813 \\\\ Orbital period, \\(P\\) (d) & 241.2 & 1266.6 \\\\ Eccentricity, \\(e\\) & 0.18 & 0.41 \\\\ Epoch of periastron, \\(T_{p}\\) (JD) & 2450154.9 & 2451308.7 \\\\ Longitude of periastron, \\(\\omega\\) & 243\\(\\fdg\\)6 & 247\\(\\fdg\\)7 \\\\ Longitude of ascending node, \\(\\Omega\\) & 30\\(\\fdg\\)0 & 60\\(\\fdg\\)0 \\\\ Inclination, \\(i\\) & 45\\(\\fdg\\)0 & 45\\(\\fdg\\)0 \\\\ Mass, \\(m\\) (\\(M_{JUP}\\)) & 2.95 & 5.98 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Target star — \\(\\upsilon\\) Andromedae (HD 9826, HIP 7513) \\begin{table} \\begin{tabular}{l c} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & HD 10032 \\\\ \\hline Right ascension, \\(\\alpha\\) (J1991.25) & 01\\({}^{h}\\)38\\({}^{m}\\)48\\(\\fs\\)07 \\\\ Declination, \\(\\delta\\) (J1991.25) & 40\\({}^{\\circ}\\)45′38\\(\\farcs\\)80 \\\\ Proper motion in \\(\\alpha\\), \\(\\mu_{\\alpha}\\cos\\delta\\) (mas/yr) & -14.70 \\\\ Proper motion in \\(\\delta\\), \\(\\mu_{\\delta}\\) (mas/yr) & -2.66 \\\\ Parallax, \\(\\pi\\) (mas) & 8.05 \\\\ Distance, \\(d_{pc}\\) (pc) & 124.22 \\\\ Transverse velocity, \\(V_{T}\\) (km/s) & 8.80 \\\\ Radial velocity, \\(V_{R}\\) (km/s) & -34.00 \\\\ \\hline \\end{tabular} \\end{table} Table 2: Reference star — HD 10032 (HIP 7672) \\begin{table} \\begin{tabular}{l c c} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & Planet I & Planet II \\\\ \\hline \\(f\\) (1/d) & \\(\\ldots\\) & \\(1/241.35\\) & \\(1/1265.87\\) \\\\ \\(\\widehat{C}_{1}^{1}\\) (m) & \\(\\ldots\\) & \\(-0.743\\times 10^{-11}\\) & \\(0.282\\times 10^{-7}\\) \\\\ \\(\\widehat{S}_{1}^{1}\\) (m) & \\(\\ldots\\) & \\(0.586\\times 10^{-8}\\) & \\(0.790\\times 10^{-9}\\) \\\\ \\(\\widehat{C}_{2}^{1}\\) (m) & \\(\\ldots\\) & \\(-0.481\\times 10^{-8}\\) & \\(0.756\\times 10^{-8}\\) \\\\ \\(\\widehat{S}_{2}^{1}\\) (m) & \\(\\ldots\\) & \\(0.146\\times 10^{-8}\\) & \\(0.329\\times 10^{-7}\\) \\\\ \\(\\widehat{C}_{1}^{2}\\) (m) & \\(\\ldots\\) & \\(0.198\\times 10^{-9}\\) & \\(-0.301\\times 10^{-8}\\) \\\\ \\(\\widehat{S}_{1}^{2}\\) (m) & \\(\\ldots\\) & \\(-0.683\\times 10^{-9}\\) & \\(0.466\\times 10^{-8}\\) \\\\ \\(\\widehat{C}_{2}^{2}\\) (m) & \\(\\ldots\\) & \\(0.492\\times 10^{-9}\\) & \\(-0.624\\times 10^{-8}\\) \\\\ 
\\(\\widehat{S}_{2}^{2}\\) (m) & \\(\\ldots\\) & \\(-0.148\\times 10^{-9}\\) & \\(-0.202\\times 10^{-8}\\) \\\\ \\hline \\end{tabular} \\end{table} Table 4: Dominant planetary terms from Frequency Decomposition

\\begin{table} \\begin{tabular}{l c} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & SIM \\\\ \\hline Semi-major axis, \\(a\\) (AU) & 0.995 \\\\ Orbital period, \\(P\\) (d) & 362.5 \\\\ Eccentricity, \\(e\\) & 0.015 \\\\ Epoch of periastron, \\(T_{p}\\) (JD) & 2451519.44 \\\\ Longitude of periastron, \\(\\omega\\) & 74\\fdg67 \\\\ Longitude of ascending node, \\(\\Omega\\) & 0\\fdg005 \\\\ Inclination, \\(i\\) & 23\\fdg45 \\\\ \\hline \\end{tabular} \\end{table} Table 3: SIM orbital elements in SSB reference frame

\\begin{table} \\begin{tabular}{l l l} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & \\multicolumn{1}{c}{Planet I} & \\multicolumn{1}{c}{Planet II} \\\\ \\hline Semi-major axis, \\(\\widehat{a}\\) (mas) & 0.132 & 0.813 \\\\ Orbital period, \\(P\\) (d) & 241.21 & 1265.65 \\\\ Eccentricity, \\(e\\) & 0.17 & 0.41 \\\\ Epoch of periastron, \\(T_{p}\\) (JD) & 2450152.63 & 2451306.95 \\\\ Longitude of periastron, \\(\\omega\\) & 240\\fdg34 & 247\\fdg88 \\\\ Longitude of ascending node, \\(\\Omega\\) & 29\\fdg65 & 59\\fdg96 \\\\ Inclination, \\(i\\) & 44\\fdg91 & 45\\fdg04 \\\\ \\hline \\end{tabular} \\end{table} Table 6: Orbital elements from standard model

\\begin{table} \\begin{tabular}{l l l} \\hline \\hline \\multicolumn{1}{c}{ Parameter} & \\multicolumn{1}{c}{Planet I} & \\multicolumn{1}{c}{Planet II} \\\\ \\hline Semi-major axis, \\(\\widehat{a}\\) (mas) & 0.130 & 0.733 \\\\ Orbital period, \\(P\\) (d) & 241.35 & 1265.87 \\\\ Eccentricity, \\(e\\) & 0.22 & 0.39 \\\\ Epoch of periastron, \\(T_{p}\\) (JD) & 2450164.79 & 2451301.97 \\\\ Longitude of periastron, \\(\\omega\\) & 255\\fdg05 & 243\\fdg15 \\\\ Longitude of ascending node, \\(\\Omega\\) & 31\\fdg09 & 62\\fdg95 \\\\ Inclination, \\(i\\) & 44\\fdg50 & 43\\fdg13 \\\\ \\hline \\end{tabular} \\end{table} Table 5: Orbital elements derived from FD parameters
We present a theoretical analysis of astrometric searches for extrasolar planets with the Space Interferometry Mission (SIM). In particular, we derive a model for future measurements with SIM and discuss the problem of reliable estimation of the orbital elements of planets. For this purpose we propose a new method of data analysis and present a numerical test of its application on simulated SIM astrometric measurements of the \\(\\upsilon\\) Andromedae planetary system. We demonstrate that our approach allows a successful determination of its orbital elements.

astrometry -- methods: data analysis -- planetary systems
# A Generalization of the Maximum Noise Fraction Transform

Christopher Gordon

C. Gordon is with the School of Computer Science and Mathematics at the University of Portsmouth in the UK. E-mail: [email protected]

## I Introduction

The maximum noise fraction (MNF) transform was introduced by Green _et al._[1]. It is similar to the principal component transform [2] in that it consists of a linear transform of the original data. However, the MNF transform orders the bands in terms of noise fraction. One application of the MNF transform is noise filtering of multivariate data [1]. The data is MNF transformed, the high noise fraction bands are filtered and then the reverse transform is performed. We show an example where the MNF noise removal adds artificial features due to the nonlinear relationship between the different variables of the data. A polynomial generalization of the MNF is introduced which removes this problem. In Section II we summarize the MNF procedure. The problem data set is introduced in Section III and the MNF is applied to it. In Section IV, the generalized MNF transform is explained and applied. The conclusions are given in Section V.

## II The Maximum Noise Fraction (MNF) Transform

In this section we define the MNF transform and list some of its properties. For further details the reader is referred to Green _et al._[1] and Switzer and Green [3]. A good review is also given by Nielsen [4]. A reformulation of the MNF transform as the noise-adjusted principal component (NAPC) transform was given by Lee _et al._[5]. An efficient method of computing the MNF transform is given by Roger [6]. Let \\[Z_{i}(x),\\quad i=1,\\ldots,p\\] be a multivariate data set with \\(p\\) bands and with \\(x\\) giving the position of the sample. The means of \\(Z_{i}(x)\\) are assumed to be zero. The data can always be made to approximately satisfy this assumption by subtracting the sample means. An additive noise model is assumed: \\[Z(x)=S(x)+N(x),\\] where \\(Z^{T}(x)=\\{Z_{1}(x),\\ldots,Z_{p}(x)\\}\\) is the corrupted signal and \\(S(x)\\) and \\(N(x)\\) are the uncorrelated signal and noise components of \\(Z(x)\\). The covariance matrices are related by: \\[\\mathbf{Cov}\\{Z(x)\\}=\\Sigma=\\Sigma_{S}+\\Sigma_{N},\\] where \\(\\Sigma_{N}\\) and \\(\\Sigma_{S}\\) are the noise and signal covariance matrices.
The noise fraction of the \\(i\\)th band is defined as \\[\\mathbf{Var}\\{N_{i}(x)\\}/\\mathbf{Var}\\{Z_{i}(x)\\}.\\] The maximum noise fraction (MNF) transform results in a new \\(p\\) band uncorrelated data set which is a linear transform of the original data: \\[Y(x)=A^{T}Z(x).\\] The linear transform coefficients, \\(A\\), are found by solving the eigenvalue equation: \\[A\\Sigma_{N}\\Sigma^{-1}=\\Lambda A, \\tag{1}\\] where \\(\\Lambda\\) is a diagonal matrix of the eigenvalues, \\(\\lambda_{i}\\). The noise fraction in \\(Y_{i}(x)\\) is given by \\(\\lambda_{i}\\). By convention the \\(\\lambda_{i}\\) are ordered so that \\(\\lambda_{1}\\geq\\lambda_{2}\\geq\\ldots\\geq\\lambda_{p}\\). Thus the MNF transformed data will be arranged in bands of _decreasing_ noise fraction. The proportion of the noise variance described by the first \\(r\\) MNF bands is given by \\[\\frac{\\sum_{i=1}^{r}\\lambda_{i}}{\\sum_{i=1}^{p}\\lambda_{i}}.\\] The eigenvectors are normed so that \\(A^{T}\\Sigma A\\) is equal to an identity matrix. The advantages of the MNF transform over the PC transform are that it is invariant to linear transforms of the data and that the MNF transformed bands are ordered by noise fraction. The high noise fraction bands can be filtered and then the transform reversed. This can lead to an improvement in the filtering results because the high noise fraction bands should contain less signal that might be distorted by the filtering. Examples of this approach have been given by Green _et al._[1], Nielsen and Larsen [7] and Lee _et al._[5]. An extreme version of MNF filtering is based on excluding the effects of the first \\(r\\) components. That is, \\(r\\) is chosen so that only bands with a high enough noise fraction are removed. This can be achieved by: \\[Z^{*}(x)=(A^{-1})^{T}RA^{T}Z(x), \\tag{2}\\] where \\(Z^{*}(x)\\) is the filtered data and \\(R\\) is an identity matrix with the first \\(r\\) diagonal elements set to zero. Thus eliminating the effect of one or more of the MNF bands produces a filtered data set which is a linear transform of the original data. This MNF based filter uses interband correlation to remove noise. In order to use Equation (1) to compute \\(A\\), \\(\\Sigma_{N}\\) has to be known. Nielsen and Larsen [7] have given four different ways of estimating \\(N(x)\\). They all rely on the data being spatially correlated. A simple method for computing \\(N(x)\\) is \\[N(x)=Z(x)-Z(x+\\delta), \\tag{3}\\] where \\(\\delta\\) is an appropriately determined step length. We are effectively assuming \\[S(x)=S(x+\\delta).\\] To the extent that this is not true, the estimate of \\(N(x)\\) is in error. When this method of noise estimation is used, the MNF transform is equivalent to the min/max autocorrelation factor transform [3].
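In practice, the whole construction above reduces to a few lines of linear algebra. The following Python sketch is a minimal illustration (function names are ours, not from the paper) for data arranged as samples by bands, with the noise estimated by the one-step shift difference of Eq. (3); the factor of one half applied to the difference covariance assumes spatially uncorrelated noise and in any case only rescales all the eigenvalues equally, leaving the band ordering unchanged.

```python
import numpy as np
from scipy.linalg import eig

def mnf(Z, delta=1):
    # MNF transform of data Z (n_samples x p bands): returns the noise
    # fractions (descending) and the matrix A of Eq. (1), with the
    # eigenvectors in the columns, normed so that A^T Sigma A = I
    Zc = Z - Z.mean(axis=0)
    N = Zc[:-delta] - Zc[delta:]             # shift-difference noise, Eq. (3)
    Sigma = np.cov(Zc, rowvar=False)
    Sigma_N = np.cov(N, rowvar=False) / 2.0  # differencing doubles the variance
    lam, V = eig(Sigma_N, Sigma)             # generalized eigenproblem
    lam, V = lam.real, V.real
    order = np.argsort(lam)[::-1]            # decreasing noise fraction
    lam, A = lam[order], V[:, order]
    A /= np.sqrt(np.diag(A.T @ Sigma @ A))   # enforce A^T Sigma A = I
    return lam, A

def mnf_filter(Z, A, r):
    # Eq. (2): zero out the first r (noisiest) MNF bands and invert
    mean = Z.mean(axis=0)
    Y = (Z - mean) @ A
    Y[:, :r] = 0.0
    return Y @ np.linalg.inv(A) + mean

# usage sketch: lam, A = mnf(Z); Z_filtered = mnf_filter(Z, A, r=2)
```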
## III Airborne Electromagnetic Data

We test the MNF filtering methodology on a flight line produced by SPECTREM's time dependent airborne electromagnetic (AEM) system. Background information on this AEM system is given by Leggatt [8]. A multiband image can be formed by consecutive flight lines, but usually each flight line is examined separately. Fig. 1 shows a flight line of data, consisting of the seven windowed AEM X band spectra. All seven bands are displayed stacked above each other. The amplitude of a band at a particular point is proportional to the vertical distance of the spectrum from its corresponding zero amplitude reference (dotted) line. Neighbouring points along a line are responses from neighbouring points on the ground. The higher band numbers are associated with greater underground depths.

## References

* [1] A. A. Green, M. Berman, P. Switzer, and M. D. Craig, \"A transformation for ordering multispectral data in terms of image quality with implications for noise removal,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 26, no. 1, pp. 65-74, 1988.
* [2] Rafael C. Gonzalez and Richard E. Woods, _Digital Image Processing_, Addison-Wesley Publishing Company, 1992.
* [3] P. Switzer and A. Green, \"Min/max autocorrelation factors for multivariate spatial imagery,\" Tech. Rep. 6, Department of Statistics, Stanford University, 1984.
* [4] Alan Aasbjerg Nielsen, _Analysis of Regularly and Irregularly Sampled Spatial, Multivariate, and Multi-temporal Data_, Ph.D. thesis, Institute of Mathematical Modeling, Technical University of Denmark, 1994.
* [5] J. B. Lee, A. S. Woodyatt, and M. Berman, \"Enhancement of high spectral resolution remote-sensing data by a noise-adjusted principal components transform,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 28, no. 3, pp. 295-304, 1990.
* [6] R. E. Roger, \"A faster way to compute the noise-adjusted principal components transform matrix,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 32, no. 6, 1994.
* [7] Alan Aasbjerg Nielsen and R. Larsen, \"Restoration of GERIS data using the maximum noise fractions transform,\" in _Proceedings from the First International Airborne Remote Sensing Conference and Exhibition, Volume II_, Strasbourg, France, 1994, pp. 557-568.
* [8] Peter Bethune Leggatt, _Some Algorithms and Code for the Computation of the Step Response Secondary EMF Signal for the SPECTREM AEM System_, Ph.D. thesis, University of the Witwatersrand, Johannesburg, South Africa, 1996.
* [9] R. Gnanadesikan and M. B. Wilk, \"Data analytic methods in multivariate statistical analysis,\" in _Multivariate Analysis II_, P. Krishnaiah, Ed. 1969, pp. 593-638, Academic Press, New York, U. S. A.

Figure 2: A comparison of the MNF and GMNF filtering methods. Only a portion of the flight line for bands 5, 6 and 7 is shown for each figure. The sample number is displayed on the horizontal axis of each subplot. (a) Unfiltered AEM data. (b) MNF filtered AEM data. The ‘S’ symbols mark parts of the data where spurious features have been introduced by the MNF filtering. (c) GMNF filtered AEM data.
A generalization of the maximum noise fraction (MNF) transform is proposed. Powers of each band are included as new bands before the MNF transform is performed. The generalized MNF (GMNF) is shown to perform better than the MNF on a time dependent airborne electromagnetic (AEM) data filtering problem. 1

Footnote 1: Copyright (c) 2000 Institute of Electrical and Electronics Engineers. Reprinted from [IEEE Transactions on Geoscience and Remote Sensing, Jan 01, 2000, v38, n1 p2, 608]. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by sending a blank email message to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Maximum noise fraction transform, noise filtering, time dependent airborne electromagnetic data.
# Flow equation for Halpern-Huang directions of scalar O(\\(N\\)) models

Holger Gies

_Institut für theoretische Physik, Universität Tübingen, 72076 Tübingen, Germany_

## 1 Introduction

Common belief holds that only polynomial interactions up to a certain degree, depending on the spacetime dimension, are renormalizable, in the sense that interactions of even higher order require an infinite number of subtractions in a perturbative analysis. This can be attributed to the implicit assumption that the higher-order couplings, which in general are dimensionful, set independent scales. Such nonrenormalizable theories can only be defined with a cutoff scale \\(\\Lambda\\), while the unknown physics beyond the cutoff is encoded in the (thereby independent) values of the couplings. Starting from the viewpoint that the cutoff \\(\\Lambda\\) is the only scale in the theory, Halpern and Huang [1, 2] pointed out the existence of theories with higher-order and even nonpolynomial interactions within the conventional setting of quantum field theory. This happens because the higher-order couplings, by assumption, are proportional to a corresponding power of \\(1/\\Lambda\\) and therefore die out sufficiently fast in the limit \\(\\Lambda\\to\\infty\\); the theories remain perturbatively renormalizable in the sense that infinitely many subtractions are not required. Perhaps most importantly, Halpern and Huang thereby discovered nonpolynomial scalar theories which are asymptotically free, offering an escape route to the \"problem of triviality\" of standard scalar theories [3]. To be more precise, Halpern and Huang analyzed the renormalization group (RG) trajectories for the interaction potential in the vicinity of the Gaussian fixed point. The exact form of the potential was left open by using a Taylor series expansion in the field as an ansatz. Employing the Wegner-Houghton [4] (sharp-cutoff) formulation of the Wilsonian RG, the eigenpotentials, i.e., tangential directions to the RG trajectories at the Gaussian fixed point, were identified in the linear approximation. While the standard polynomial interactions turn out to be irrelevant as expected, some nonpolynomial potentials which increase exponentially for strong fields prove to be relevant perturbations at the fixed point. For the irrelevant interactions, the Gaussian fixed point is infrared (IR) stable, whereas the relevant ones approach this fixed point in the ultraviolet (UV). Possible applications of these new relevant directions are discussed in [1] for the Higgs model and in [5] for quintessence. Further nonpolynomial potentials and their applications in Higgs and inflationary models have been investigated in [6]. Considering the complete RG flow of such asymptotically free theories from the UV cutoff \\(\\Lambda\\) down to the infrared, the Halpern-Huang result teaches us something only about the very beginning of the flow close to the cutoff and thereby close to the Gaussian fixed point. Each RG step in a coarse-graining sense \"tends to take us out of the linear region into unknown territory\" [2]. It is the purpose of the present work to perform a first reconnaissance of this territory with the aid of the RG flow equations for the \"effective average action\" [7]. In this framework, the standard effective action \\(\\Gamma\\) is considered as the zero-IR-cutoff limit of the effective average action \\(\\Gamma_{k}[\\phi]\\), which is a type of coarse-grained free energy with a variable infrared cutoff at the mass scale \\(k\\).
\\(\\Gamma_{k}\\) satisfies an exact renormalization group equation and interpolates between the classical action \\(S=\\Gamma_{k\\to\\Lambda}\\) and the standard effective action \\(\\Gamma=\\Gamma_{k\\to 0}\\). In this work, we identify the classical action \\(S\\) given at the cutoff \\(\\Lambda\\) with a scalar \\(\\mathrm{O}(N)\\) symmetric theory defined by a standard kinetic term and a generally nonpolynomial potential of Halpern-Huang type. Therefore, we have the following scenario in mind: at very high energy, the system is at the UV stable Gaussian fixed point. As the energy decreases, the system undergoes an (unspecified) perturbation which carries it away from the fixed point, initially into some direction tangential to one of all possible RG trajectories. We assume that this perturbation occurs at some scale \\(\\Lambda\\) which then sets the only dimensionful scale of the system. Any other (dimensionless) parameter of the system should also be determined at \\(\\Lambda\\); for the Halpern-Huang potentials, there are two additional parameters: one labels the different RG trajectories; the other specifies the \"distance\" scale along the trajectory. Finally, the precise form of the potential at \\(\\Lambda\\) serves as the boundary condition for the RG flow equation which governs the behavior of the theory at all scales \\(k\\leq\\Lambda\\). Since the RG flow equations for \\(\\Gamma_{k}\\) are equivalent to an infinite number of coupled first-order differential equations, a number of approximations (truncations) are necessary to arrive at explicit solutions. In the present work, we shall determine the RG trajectory \\(k\\to\\Gamma_{k}\\) for \\(k\\in[0,\\Lambda]\\) explicitly only in the large-\\(N\\) limit, which simplifies the calculations considerably. The paper is organized as follows: Sec. 2, besides introducing the notation, briefly rederives the Halpern-Huang result in the language of the effective average action, generalizing it to a nonvanishing anomalous dimension. Sec. 3 investigates the RG flow equation for the Halpern-Huang potentials in the large-\\(N\\) limit, concentrating on \\(d=3\\) and \\(d=4\\) space-time dimensions; here, we emphasize the differences from ordinary \\(\\phi^{4}\\) theory, particularly in regard to mass renormalization and symmetry-breaking properties. Sec. 4 summarizes our conclusions and discusses open questions related to finite values of \\(N\\). As an important caveat, it should be mentioned that the results of Halpern and Huang have been questioned (see [8] and also [9]), and the questions raised there also affect the present work. To be honest, we have hidden the problems in the \"scenario\" described above, in which an \"unspecified\" perturbation controls the shift of the system from the Gaussian fixed point (the continuum limit) to the cutoff scale \\(\\Lambda\\) along a _tangential_ direction. But since the cutoff scale \\(\\Lambda\\), though large, is not at all infinitesimally separated from the Gaussian fixed point, this tangential approximation is probably not sufficient to stay on the true renormalized trajectory during the shift. Not only the tangent but also all (infinitely many) curvature moments of the trajectory would have to be known in order to find an initial point right on the renormalized trajectory at \\(\\Lambda\\). This point would correspond to a so-called \"perfect action\" [11]. Of course, this requires an infinite number of conditions to be imposed on the initial action at \\(\\Lambda\\), which we cannot specify.
In conventional field theories, this problem is solved by adjusting (fine-tuning) the initial action close to the unstable Gaussian fixed point, leaving open only one a priori chosen relevant direction to the flow. But in the present case, there is an infinite number of relevant directions corresponding to the continuum of possible Halpern-Huang directions, and thus it seems impossible to single out only one relevant direction while frustrating the others by tuning infinitely many parameters. In other words, upon studying the flow from \\(\\Lambda\\) down to zero within our scenario, the continuum limit of our system remains unspecified, and therefore one important ingredient of a complete field theoretic system is missing. With these reservations in mind, we nevertheless believe that there are some lessons to be learned from the application of the RG flow equations to such potentials.

## 2 Scalar O(\\(N\\)) theories close to the Gaussian fixed point

Concerning the investigation of the RG flow equation for the Euclidean effective average action in \\(d\\) dimensions, we closely follow the original work of Wetterich [7]. Polynomial potentials and the large-\\(N\\) limit to be discussed later have been explored in [10] and [12] in the effective average action approach. A comprehensive review and an extensive list of references on this subject can be found in [13]. The effective average action can be expanded in terms of all possible O(\\(N\\)) invariants, \\[\\Gamma_{k}[\\phi]=\\int d^{d}x\\left\\{U_{k}(\\rho)+\\frac{1}{2}Z_{k}(\\rho)\\,\\partial_{\\mu}\\phi^{b}\\partial_{\\mu}\\phi^{b}+\\frac{1}{4}Y_{k}(\\rho)\\,\\partial_{\\mu}\\rho\\partial_{\\mu}\\rho+\\ldots\\right\\}, \\tag{1}\\] where \\(\\rho:=\\frac{1}{2}\\phi^{b}\\phi^{b}\\), \\(b=1\\ldots N\\) labels the real components of the scalar field, and the dots represent terms involving higher derivatives; for convenience, we shall always assume that \\(d>2\\) during the calculation. Halpern and Huang derived their result in the "local-potential approximation", which amounts to setting the wave function renormalization constant \\(Z_{k}\\equiv 1\\) and neglecting \\(Y_{k}\\) and higher-derivative terms. In the present work, we shall generalize their result to a \\(k\\)-dependent \\(Z_{k}\\) which is parametrized by the anomalous dimension, \\[\\eta:=-\\partial_{t}\\ln Z_{k},\\quad\\mbox{where}\\quad\\partial_{t}\\equiv k\\frac{d}{dk} \\tag{2}\\] denotes the derivative with respect to the RG "time" \\(t=\\ln k/\\Lambda\\in\\,]-\\infty,0]\\). Here we neglect \\(Y_{k}\\) and any \\(\\rho\\) dependence of \\(Z_{k}\\). Following [7], the RG flow equation for the effective average potential \\(U_{k}(\\rho)\\) can be written as \\[\\partial_{t}U_{k}(\\rho)=\\frac{1}{2}\\int\\frac{d^{d}q}{(2\\pi)^{d}}\\,\\partial_{t}R_{k}\\,\\left(\\frac{N-1}{Z_{k}q^{2}+R_{k}+U_{k}^{\\prime}(\\rho)}+\\frac{1}{Z_{k}q^{2}+R_{k}+U_{k}^{\\prime}(\\rho)+2\\rho U_{k}^{\\prime\\prime}(\\rho)}\\right), \\tag{3}\\] where the prime denotes the derivative with respect to the argument \\(\\rho\\). The cutoff function \\(R_{k}=R_{k}(q^{2})\\) is to some extent an arbitrary positive function that interpolates between \\(R_{k}(q^{2})\\to Z_{k}k^{2}\\) for \\(q^{2}\\to 0\\) and \\(R_{k}(q^{2})\\to 0\\) for \\(q^{2}\\to\\infty\\). It suppresses the small-momentum modes by a mass term \\(k^{2}\\) acting as the IR cutoff. In Eq. (3) the distinction between the \\(N-1\\) "Goldstone modes" and the "radial mode" is visible.
Provided that \\(\\eta\\) is given (which we shall always assume in this work), the flow of the effective potential \\(U_{k}\\) (and thus of the effective action \\(\\Gamma_{k}\\) in the present approximation) is determined by Eq. (3). Even if \\(\\eta\\) is neglected, Eq. (3) produces qualitatively good results for polynomial effective potentials in \\(d>2\\)[13]. We expect similar behavior for nonpolynomial potentials. The Halpern-Huang result can be rederived by assuming that the system is close to the Gaussian fixed point so that the effective potential and its derivatives are small. Linearizing the right-hand side of Eq. (3) with respect to the potential and its derivatives gives \\[\\partial_{t}U_{k}(\\rho)=-v_{d}\\left(NU_{k}^{\\prime}(\\rho)+2\\rho U_{k}^{\\prime\\prime}(\\rho)\\right)\\int\\limits_{0}^{\\infty}dw\\,w^{d/2-1}\\,\\frac{\\partial_{t}R_{k}(w)}{(Z_{k}w+R_{k}(w))^{2}}+{\\cal O}(U_{k}^{2}), \\tag{4}\\] where we introduced the abbreviation \\(v_{d}=2^{-(d+1)}\\pi^{-d/2}\\Gamma^{-1}(d/2)\\), which is related to the volume of \\(d\\)-spheres. It is convenient to remove the explicit \\(Z_{k}\\) and \\(k\\) dependence by using dimensionless scaling variables: \\[\\varphi=Z_{k}^{1/2}k^{1-d/2}\\phi,\\quad\\tilde{\\rho}=\\frac{1}{2}\\varphi^{2}=Z_{k}k^{2-d}\\rho,\\quad u_{k}(\\varphi)=k^{-d}\\,U_{k}(\\phi). \\tag{5}\\] In the same spirit, we write for the cutoff function \\[R_{k}(q^{2})=Z_{k}k^{2}\\,C(q^{2}/k^{2}), \\tag{6}\\] where \\(C(w)\\) is a dimensionless function of a dimensionless argument, satisfying \\(C(w\\to 0)\\to 1\\) and \\(C(w\\to\\infty)\\to 0\\). Rewriting Eq. (4) in terms of these variables and taking the RG time derivative \\(\\partial_{t}\\) on the left-hand side at fixed \\(\\tilde{\\rho}\\), we obtain the differential equation \\[\\partial_{t}u_{k}(\\tilde{\\rho})=-du_{k}(\\tilde{\\rho})+(d-2+\\eta)\\tilde{\\rho}\\dot{u}_{k}(\\tilde{\\rho})-\\frac{1}{2}\\kappa\\big{(}2\\tilde{\\rho}\\ddot{u}_{k}(\\tilde{\\rho})+N\\dot{u}_{k}(\\tilde{\\rho})\\big{)}, \\tag{7}\\] where the dot denotes a derivative with respect to the argument \\(\\tilde{\\rho}\\), and the complete cutoff dependence is contained in \\[\\kappa=\\kappa(d,\\eta;C)=2v_{d}\\int\\limits_{0}^{\\infty}dw\\left[(d-2)\\frac{w^{d/2-2}C(w)}{w+C(w)}-\\eta\\frac{w^{d/2-1}C(w)}{(w+C(w))^{2}}\\right]. \\tag{8}\\] We are looking for eigenpotentials, i.e., tangential directions to the RG flow of the scaling form \\(u_{k}\\sim\\mathrm{e}^{-\\lambda t}\\), where \\(\\lambda\\) classifies the possible directions and distinguishes between irrelevant (\\(\\lambda<0\\)), marginal (\\(\\lambda=0\\)) and relevant (\\(\\lambda>0\\)) perturbations away from the Gaussian fixed point. Solutions of this form can be given in terms of the Kummer function \\(M\\)[16] \\[u_{k}(\\tilde{\\rho})=-\\mathrm{e}^{-\\lambda t}\\,\\frac{2\\kappa r}{d-\\lambda}\\left[M\\left(\\frac{\\lambda-d}{d-2+\\eta},\\frac{N}{2};\\frac{d-2+\\eta}{\\kappa}\\,\\tilde{\\rho}\\right)-1\\right]. \\tag{9}\\] For given dimension, \\(N\\), cutoff specification and anomalous dimension, the Halpern-Huang potential (9) depends on two dimensionless parameters: \\(\\lambda\\) and \\(r\\). The latter sets a "distance" scale along the RG trajectories; since it is an overall factor, the positions of possible extrema of \\(u_{k}\\) are independent of \\(r\\).
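As a concrete illustration, the eigenpotential (9) is easy to evaluate numerically. The following minimal Python sketch (our own illustration, not part of the original analysis; the smooth cutoff shape \\(C(w)=e^{-w}\\) and all parameter values are assumptions) computes \\(\\kappa\\) from Eq. (8) for \\(\\eta=0\\), builds the potential (9) via scipy's Kummer function, and confirms the statement just made: the position of the minimum does not depend on \\(r\\).

```python
# Minimal sketch (ours) of the Halpern-Huang eigenpotential (9) for d = 4,
# eta = 0.  The cutoff shape C(w) = exp(-w) and all numbers are assumptions.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar
from scipy.special import gamma, hyp1f1

d, eta, N = 4, 0.0, 8
v_d = 2.0**(-(d + 1)) * np.pi**(-d / 2.0) / gamma(d / 2.0)

# kappa from Eq. (8) with eta = 0 for the assumed cutoff C(w) = exp(-w)
kappa, _ = quad(lambda w: 2.0 * v_d * (d - 2) * w**(d / 2.0 - 2) * np.exp(-w)
                / (w + np.exp(-w)), 0.0, np.inf)

a = -0.5                             # symmetry-breaking direction, cf. Eq. (14)
lam = d + (d - 2 + eta) * (a - 1)    # Eq. (11) solved for lambda

def u(rho, r, t=0.0):
    """Halpern-Huang eigenpotential, Eq. (9), with scipy's Kummer function."""
    M = hyp1f1((lam - d) / (d - 2 + eta), N / 2.0, (d - 2 + eta) * rho / kappa)
    return -np.exp(-lam * t) * 2.0 * kappa * r / (d - lam) * (M - 1.0)

for r in (-0.1, -10.0):              # r < 0, cf. Eq. (14)
    res = minimize_scalar(lambda rho: u(rho, r), bounds=(1e-8, 2.0),
                          method="bounded")
    print(f"r = {r:6.1f}:   rho_tilde_min = {res.x:.6f}")
# Both values of r yield the same minimum position, as stated in the text.
```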
To make contact with the literature, we note that we rediscover the results of Periwal [14] in the limit \\(\\eta=0\\), where the Halpern-Huang result was generalized to arbitrary cutoffs within the Polchinski RG approach [15]. The results of Halpern and Huang are recovered by employing a sharp cutoff, for which \\(\\kappa\\) is related to the volume of the \\(d-1\\) dimensional sphere:2 \\[\\kappa(d,\\eta=0,C_{\\rm sc})=\\frac{\\mathrm{vol.}(S^{d-1})}{(2\\pi)^{d}}. \\tag{10}\\]

Footnote 2: The sharp cutoff limit of Eq. (8) has to be defined carefully; details can be found in [13, 9].

Various representations for the Kummer function \\(M(\\alpha,\\beta;x)\\) exist in the literature [16]; for further discussion, it is useful to replace the parameter \\(\\lambda\\) by the combination \\[a:=1+\\frac{\\lambda-d}{d-2+\\eta}. \\tag{11}\\] Then, Eq. (9) reduces to standard polynomial potentials of degree \\(n\\) in \\(\\tilde{\\rho}\\) (\\(2n\\) in \\(\\varphi\\)) if \\(a=-n+1\\); for all such polynomial potentials, the Gaussian fixed point is IR stable. For \\(a=1\\), the potential vanishes, and for any other value of \\(a\\), the potential is nonpolynomial. For these cases, the asymptotic behavior for large third argument \\(x\\) is given by an exponential increase \\[M(\\alpha,\\beta;x)\\simeq\\frac{\\Gamma(\\beta)}{\\Gamma(\\alpha)}\\,x^{(\\alpha-\\beta)}\\,\\mathrm{e}^{x}\\big{(}1+\\mathcal{O}(x^{-1})\\big{)}. \\tag{12}\\] The Gaussian fixed point is UV stable (\\(\\lambda>0\\)) for \\[a>-\\frac{2-\\eta}{d-2+\\eta}, \\tag{13}\\] (as long as \\(d-2+\\eta>0\\)). A particularly interesting case is given by the parameter set \\[-1<a<0,\\quad r<0, \\tag{14}\\] for which the eigenpotential Eq. (9) is nonpolynomial and develops a minimum, inducing spontaneous symmetry breaking. To conclude our derivation of the Halpern-Huang results, we mention that in the particular case of \\(N=1\\) there exist (physically admissible) solutions to Eq. (7) which are odd under \\(\\varphi\\to-\\varphi\\)[5]. The linearized flow equation has also been studied from a different perspective employing its similarity to a Fokker-Planck form [17]. According to the scenario outlined in the introduction, we shall now consider the potentials found in Eq. (9) taken at \\(t=0\\) (\\(k=\\Lambda\\)) as the boundary condition for the complete flow equation (3). Provided that the anomalous dimension \\(\\eta\\) is only weakly dependent on \\(k\\) and bounded (as is the case, e.g., for polynomial interactions in \\(d>2\\)), some features can immediately be read off from Eq. (3): for nonpolynomial potentials with exponential asymptotics given by Eq. (12), the denominators on the right-hand side of the flow equation (3) grow exponentially for large values of \\(\\rho\\). Therefore, \\(\\partial_{t}U_{k}(\\rho)\\to 0\\) for large \\(\\rho\\), and the flow halts, leaving \\(U_{k}\\) essentially unchanged. In particular, for symmetry-preserving potentials with a minimum at \\(\\rho=0\\) and \\(a>1\\), we may expect a rather unspectacular flow: for large \\(\\rho\\), the above argument holds, whereas for small \\(\\rho\\), we may always find a small region where the linearization of the flow equation is a good approximation; there, the Halpern-Huang potential will still be an appropriate approximation. Therefore, these potentials are expected to behave stiffly under the flow.
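Since the flow-stopping argument above leans on the asymptotic formula (12), a quick numerical sanity check may be useful. The sketch below (our own; the parameter choices are arbitrary) compares scipy's `hyp1f1` with the right-hand side of Eq. (12); for \\(\\alpha=1\\), \\(\\beta=2\\) the closed form \\(M(1,2;x)=(e^{x}-1)/x\\) makes the comparison transparent.

```python
# Sanity check (ours) of the Kummer asymptotics, Eq. (12).
import numpy as np
from scipy.special import gamma, hyp1f1

def kummer_asymptote(alpha, beta, x):
    # leading term of Eq. (12)
    return gamma(beta) / gamma(alpha) * x**(alpha - beta) * np.exp(x)

for alpha, beta in ((1.0, 2.0), (-1.5, 4.0)):
    for x in (50.0, 100.0, 200.0):
        ratio = hyp1f1(alpha, beta, x) / kummer_asymptote(alpha, beta, x)
        print(f"alpha = {alpha:4.1f}, beta = {beta:3.1f}, x = {x:5.0f}:  "
              f"M / asymptote = {ratio:.4f}")
# The ratio approaches 1 like 1 + O(1/x), as stated in Eq. (12); the
# approach is slow when (1 - alpha)(beta - alpha) is large.
```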
For potentials with a minimum at nonvanishing \\(\\rho\\) (spontaneous symmetry breaking) with \\(-1<a<0\\), the asymptotics for large \\(\\rho\\) will also stop the flow. However, the flow of \\(U_{k}\\) near the nontrivial minimum can be more complicated, since \\(U_{k}^{\\prime}\\) and \\(U_{k}^{\\prime\\prime}\\) are no longer monotonic functions in this region. To the right of the minimum, these potentials may also be stiff under the flow, but the region around the origin and the minimum appears as a loose end. These heuristic arguments will be worked out and confirmed in the following section in the large-\\(N\\) limit.

## 3 RG flow of Halpern-Huang theories in the large-\\(N\\) limit

For solving flow equations for the effective average potential of the type of Eq. (3), several techniques have been developed. Of course, it is always possible to search numerically for solutions by putting the differential equation on a computer; in fact, if one is looking for accurate results, this is the most appropriate option. However, since the potentials under consideration exhibit an exponential increase, straightforward numerics may reach its limits, and a clever variable substitution has to be guessed. Another possibility is to expand the potential in terms of a complete set of functions and decompose the flow equation into differential equations for the \\(k\\)-dependent coefficients (generalized couplings). Here, a choice of a useful set of functions again has to be guessed; obviously, the polynomials as the standard choice are of no use, because the important information is contained in the nonpolynomial nature of the potential. Therefore, we decide to work in the large-\\(N\\) limit, which puts no a priori restrictions on the form of the potential and allows for a complete integration of the flow equation. Of course, the validity of the results for finite values of \\(N\\) can hardly be controlled at this early stage.

### RG flow equation in the large-\\(N\\) limit

In the large-\\(N\\) limit, the RG flow equation (3) for the potential simplifies considerably; here we shall follow the presentation given in [12] and [13]. Not only does the anomalous dimension \\(\\eta\\) vanish [18], but so does the influence of higher derivative terms (\\(Y_{k},\\dots\\)). Moreover, the Goldstone modes dominate the right-hand side of Eq. (3) and any contribution from the radial mode can be neglected (this essentially changes the order of the differential equation). For technical reasons, one finally chooses a sharp cutoff function \\(R_{k}\\) and decides to consider the flow equation for the _derivative_ of the potential. In dimensionless variables, the large-\\(N\\) limit of the flow equation reads \\[2\\dot{u}_{k}=-\\frac{\\partial\\dot{u}_{k}}{\\partial t}+\\left((d-2)\\tilde{\\rho}-\\frac{2v_{d}N}{1+\\dot{u}_{k}}\\right)\\frac{\\partial\\dot{u}_{k}}{\\partial\\tilde{\\rho}}. \\tag{15}\\] Of course, this equation can be obtained directly from the sharp-cutoff formulation of the RG and has already been studied by Wegner and Houghton [4]; further investigations of the Wegner-Houghton approach have been made in [19].
Following [12], this partial differential equation of first order can be solved using the standard method of characteristics, and we find that the solution \\(\\dot{u}_{k}(\\tilde{\\rho})\\) has to satisfy the equation \\[\\tilde{\\rho}=s(\\dot{u}_{k})\\,{\\rm e}^{-(d-2)t}-v_{d}N\\,I(d,t;\\dot{u}_{k}), \\tag{16}\\] where \\(I(d,t;\\dot{u}_{k})\\) is defined by the integral \\[I(d,t;\\dot{u}_{k}):={\\rm e}^{-(d-2)t}\\int\\limits_{1}^{\\exp(-2t)}dw\\,\\frac{w^{-d/2}}{1+{\\rm e}^{2t}\\,\\dot{u}_{k}\\,w}. \\tag{17}\\] This function is studied in App. A, where explicit representations for \\(d=3\\) and \\(d=4\\) are given. The function \\(s(\\dot{u}_{k})\\) is implicitly defined by the equation \\[\\dot{u}_{\\Lambda}(s)=\\dot{u}_{k}{\\rm e}^{2t}\\quad\\Longrightarrow\\quad s\\equiv s(\\dot{u}_{k}), \\tag{18}\\] where \\(\\dot{u}_{\\Lambda}(s)\\) represents the boundary condition for the flow equation at \\(k=\\Lambda\\) (\\(t=0\\)); here, \\(s\\) as a variable parametrizes the boundary condition and corresponds to the \\(\\tilde{\\rho}\\) axis at \\(t=0\\) in the \\(\\tilde{\\rho},t\\) plane. It is exactly via Eq. (18), which is to be inserted into Eq. (16), that the nonpolynomial potentials enter the investigation. Now the route to an explicit solution is clear: (i) we specify the boundary condition via Eq. (18), (ii) insert this and an explicit representation for \\(I(d,t;\\dot{u}_{k})\\) into Eq. (16), and (iii) "solve" (or invert) the resulting equation for \\(\\dot{u}_{k}(\\tilde{\\rho})\\). However, in practice, some complications are encountered: e.g., inverse functions of such complicated objects as the Kummer function \\(M\\) are not easily obtainable. But the large-\\(N\\) limit comes to the rescue once more, as demonstrated in the next subsection. Let us finally extract the flow of a possible minimum of the potential, which is defined by \\(\\dot{u}_{k}(\\tilde{\\rho}_{\\rm min})=0\\); from Eq. (16), we can easily read off that \\[\\tilde{\\rho}_{\\rm min}(k)=\\tilde{\\rho}_{\\rm min}(\\Lambda)\\,{\\rm e}^{-(d-2)t}-v_{d}N\\,I(d,t;0), \\tag{19}\\] where \\(\\tilde{\\rho}_{\\rm min}(\\Lambda)\\) denotes the minimum of the potential at \\(k=\\Lambda\\) (\\(t=0\\)), i.e., the minimum of the Halpern-Huang potential (in the large-\\(N\\) limit); by construction, it is identical to \\(\\tilde{\\rho}_{\\rm min}(\\Lambda)=s(\\dot{u}_{k}(\\tilde{\\rho}_{\\rm min})=0)\\). The function \\(I(d,t;0)\\) can be read off from Eqs. (A.3) and (A.4) of the appendix (\\(I(d,t;0)\\equiv i_{0}(d,t)\\)). Reinstating dimensionful quantities (cf. Eq. (5)), we find for the flow of a minimum of the potential \\[\\rho_{\\rm min}(k)=\\rho_{\\rm min}(\\Lambda)-\\tilde{\\rho}_{\\rm cr}\\big{(}\\Lambda^{d-2}-k^{d-2}\\big{)},\\quad\\tilde{\\rho}_{\\rm cr}:=\\frac{2v_{d}N}{d-2}. \\tag{20}\\] Here, we introduced a "critical" (dimensionless) field strength \\(\\tilde{\\rho}_{\\rm cr}\\). Of course, Eq. (20) is well known in the literature [13] and makes no particular reference to the type of potential under consideration. The only place where the potential type enters is the position of the initial minimum \\(\\rho_{\\rm min}(\\Lambda)\\). If \\(\\rho_{\\rm min}(\\Lambda)>\\tilde{\\rho}_{\\rm cr}\\Lambda^{d-2}\\), then the classical as well as the quantum theory exhibit spontaneous symmetry breaking, since \\(\\rho_{\\rm min}(k\\to 0)>0\\); if \\(\\rho_{\\rm min}(\\Lambda)<\\tilde{\\rho}_{\\rm cr}\\Lambda^{d-2}\\), the quantum theory will preserve \\({\\rm O}(N)\\) symmetry.
Finally, if \\(\\rho_{\\rm min}(\\Lambda)=\\tilde{\\rho}_{\\rm cr}\\Lambda^{d-2}\\), the classical potential \\(U_{\\Lambda}\\) is "fine-tuned" in such a way that the theory shows symmetry breaking for finite values of \\(k\\), but restores \\({\\rm O}(N)\\) symmetry in the limit \\(k\\to 0\\); additionally, the potential has a vanishing mass term: \\(M^{2}:=U^{\\prime}_{k\\to 0}(0)=0\\) (by construction).3

Footnote 3: In the language of statistical mechanics, the theory is exactly at the critical temperature.

In standard \\(\\phi^{4}\\) theory, the position of the minimum \\(\\rho_{\\rm min}(\\Lambda)\\) of \\(U_{\\Lambda}\\) can be chosen at will by an appropriate tuning of the negative mass term and the coupling. By contrast, for Halpern-Huang potentials, once the precise type of the potential is chosen by fixing \\(\\lambda\\) (or \\(a\\)), there is no parameter left for any fine-tuning, since a possible minimum (for theories with \\(-1<a<0\\)) is independent of the last free parameter \\(r\\) in Eq. (9). The question as to whether a symmetry-breaking quantum theory of the Halpern-Huang potentials exists has to be answered by determining the position of the initial minimum \\(\\rho_{\\rm min}(\\Lambda)\\). This will also be investigated in the next subsection in the large-\\(N\\) limit.

### Large-\\(N\\) limit of the Halpern-Huang potentials

In our scenario, the Halpern-Huang potential enters the flow equation as its boundary condition at the cutoff. Since the flow equation is considered in the large-\\(N\\) limit, it is not only useful to insert the large-\\(N\\) limit of the Halpern-Huang potential into Eq. (16), but it is mandatory for reasons of consistency. Otherwise, nonleading large-\\(N\\) information would be mixed with the leading large-\\(N\\) behavior, introducing some arbitrariness into this approximation. From Eq. (9), we read off that the parameter \\(N\\) occurs only in the second argument of the Kummer function. Unfortunately, we could not find any asymptotic expression for the Kummer function with large second argument in the literature. Instead of investigating this limit in terms of some appropriate series or integral representation, which might involve awkward interchanges of limiting processes, we shall use a more physically motivated approach: since the Kummer function was identified with the tangential direction to the RG flow in the vicinity of the Gaussian fixed point, its large-\\(N\\) approximation should naturally be deducible from a large-\\(N\\) study of the same subject. In other words, the desired function has to be a solution to the linearized flow equation in the large-\\(N\\) limit. For technical reasons, we again turn to the derivative of the potential with respect to \\(\\tilde{\\rho}\\). Then, the desired differential equation is obtained by linearizing Eq. (15). Looking for potentials which satisfy the eigenpotential scaling condition \\(-\\partial_{t}\\dot{u}_{k}=\\lambda\\,\\dot{u}_{k}\\), the large-\\(N\\) Halpern-Huang equation reads \\[\\frac{d\\dot{u}_{\\Lambda}(\\tilde{\\rho})}{d\\tilde{\\rho}}=-\\frac{a}{\\tilde{\\rho}-\\tilde{\\rho}_{\\rm cr}}\\,\\dot{u}_{\\Lambda}(\\tilde{\\rho}), \\tag{21}\\] where we again traded \\(\\lambda\\) for the parameter \\(a\\) as defined in Eq. (11). Additionally, we made use of the "critical" field strength \\(\\tilde{\\rho}_{\\rm cr}\\) defined in Eq. (20).
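The fine-tuning statement attached to Eq. (20) above can be made concrete with a few lines of code. The following sketch (our own illustration; \\(\\Lambda=1\\), the value of \\(N\\), and the initial minima are assumptions) evaluates Eq. (20) in \\(d=4\\) for the three classes of initial conditions just discussed.

```python
# Illustration (ours, Lambda = 1 assumed) of the flow of the minimum, Eq. (20).
import numpy as np
from scipy.special import gamma

d, N, Lam = 4, 8, 1.0
v_d = 2.0**(-(d + 1)) * np.pi**(-d / 2.0) / gamma(d / 2.0)
rho_cr = 2.0 * v_d * N / (d - 2)        # critical field strength of Eq. (20)

def rho_min(k, rho_min_Lambda):
    return rho_min_Lambda - rho_cr * (Lam**(d - 2) - k**(d - 2))   # Eq. (20)

for frac, label in ((1.5, "SSB survives for k -> 0"),
                    (0.5, "symmetry restored at finite k"),
                    (1.0, "fine-tuned: minimum -> 0 at k = 0")):
    rho0 = frac * rho_cr * Lam**(d - 2)
    ks = np.array([1.0, 0.5, 0.2, 0.0])
    print(f"{label:33s} rho_min(k) = {np.round(rho_min(ks, rho0), 5)}")
# For frac < 1 the formula turns negative at finite k, signalling that the
# minimum has already reached the origin there and O(N) symmetry is restored.
```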
Equation (21) can easily be solved for the various boundary conditions; let us begin with a symmetry-preserving potential satisfying \\(\\dot{u}_{\\Lambda}(\\tilde{\\rho})>0\\) and \\(a>0\\): \\[\\dot{u}_{\\Lambda}(\\tilde{\\rho})=\\upsilon_{+}\\,\\left(\\frac{\\tilde{\\rho}_{\\rm cr}}{\\tilde{\\rho}_{\\rm cr}-\\tilde{\\rho}}\\right)^{a},\\quad\\upsilon_{+}:=\\dot{u}_{\\Lambda}(0), \\tag{22}\\] where the initial value \\(\\upsilon_{+}\\) also satisfies \\(\\upsilon_{+}>0\\). The parameter \\(\\upsilon_{+}\\) is, of course, the large-\\(N\\) analogue of the distance parameter \\(r\\) in Eq. (9), \\(r\\to(N/4)\\upsilon_{+}\\). At first sight, one may doubt the validity of Eq. (22) as the large-\\(N\\) limit of Eq. (9), because it diverges at the critical field strength \\(\\tilde{\\rho}\\to\\tilde{\\rho}_{\\rm cr}\\). Nevertheless, this indeed reflects the behavior of the Kummer function for large second argument, as can be checked numerically (see Fig. 1(a)) [20].

Figure 1: Large-\\(N\\) behavior of (a) a symmetry-preserving (\\(a=2\\)) and (b) a symmetry-breaking Halpern-Huang potential (\\(a=-1/2\\)) in \\(d=4\\) dimensions versus \\(\\tilde{\\rho}/\\tilde{\\rho}_{\\rm cr}\\). For \\(N\\to\\infty\\), the potentials develop a “wall” at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}\\). In (a), the normalization of the potentials is appropriately adapted for illustrative purposes.

The critical field strength \\(\\tilde{\\rho}_{\\rm cr}\\) marks the point where the asymptotic exponential increase (cf. Eq. (12)) sets in, and for larger values of \\(N\\), the slope increases without bound.4 Of course, a potential that diverges for finite values of its argument is usually considered as inadmissible in field theory (see, e.g., [4] and [21]); however, in the present case, we take the viewpoint that this potential wall at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}\\) only symbolizes the exponential increase of the potential for finite values of \\(N\\).

Footnote 4: This might be the reason why there is no large-\\(N\\) limit of the Kummer function in the literature: it cannot be defined for arbitrary values of \\(\\tilde{\\rho}\\).

Let us now turn to the solution of Eq. (21) for the symmetry-breaking potentials with \\(-1<a<0\\). In the inner region of the potential where \\(\\dot{u}_{\\Lambda}<0\\), we obtain the solution \\[\\dot{u}_{\\Lambda}(\\tilde{\\rho})=\\upsilon_{-}\\left(\\frac{\\tilde{\\rho}_{\\rm cr}-\\tilde{\\rho}}{\\tilde{\\rho}_{\\rm cr}}\\right)^{-a},\\quad\\upsilon_{-}:=\\dot{u}_{\\Lambda}(0),\\quad|a|=-a, \\tag{23}\\] where the initial value this time satisfies \\(\\upsilon_{-}<0\\). The derivative of the large-\\(N\\) Halpern-Huang potential has a zero at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}\\), so that the potential itself exhibits a minimum at this position. Now let us turn to the large-\\(N\\) limit of the symmetry-breaking potential to the right of the minimum, \\(\\tilde{\\rho}>\\tilde{\\rho}_{\\rm cr}\\), where \\(\\dot{u}_{\\Lambda}>0\\). As a matter of fact, the unique solution of Eq. (21) increases only very slowly, \\(\\dot{u}_{\\Lambda}\\sim\\tilde{\\rho}^{|a|}\\) for \\(\\tilde{\\rho}\\to\\infty\\) and \\(-1<a<0\\). This certainly does not reflect the expected exponential increase; hence this solution has to be discarded.
Even if a more appropriate solution existed for \\(\\tilde{\\rho}>\\tilde{\\rho}_{\\rm cr}\\), we would not be able to match the two branches properly at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}\\), because the second derivative of the potential diverges at this point. In view of the results for the symmetry-preserving potential, and owing to the fact that the asymptotic exponential increase for both types of potentials is the same (see Eq. (12)), the only possibility for the large-\\(N\\) limit is to continue the potential at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}\\) by a potential wall at \\(\\tilde{\\rho}=\\tilde{\\rho}_{\\rm cr}+0^{+}\\). Again, a numerical analysis of the full Kummer function for large \\(N\\) confirms this conjecture, as is depicted in Fig. 1(b). This concludes our large-\\(N\\) analysis of the Halpern-Huang potential; note that both types of potential are formally equivalent, so that we combine them in the notation \\[\\dot{u}_{\\Lambda}(\\tilde{\\rho})=\\upsilon\\left(\\frac{\\tilde{\\rho}_{\\rm cr}}{\\tilde{\\rho}_{\\rm cr}-\\tilde{\\rho}}\\right)^{a},\\quad\\upsilon:=\\dot{u}_{\\Lambda}(0), \\tag{24}\\] where \\(\\upsilon\\) stands for \\(\\upsilon_{+}\\) or \\(\\upsilon_{-}\\) and \\(a\\in\\mathbb{R}\\). Actually, this also covers a \\(\\phi^{4}\\) potential for \\(a=-1\\).

### Nonperturbative evolution of the Halpern-Huang potentials

Now we are in a position to study the flow of the Halpern-Huang potentials from \\(k=\\Lambda\\) into the infrared regime \\(k\\to 0\\) in the large-\\(N\\) limit. The missing piece of information to be inserted into Eq. (16) is given by the inverse of Eq. (24): \\[\\dot{u}_{\\Lambda}(s)=\\upsilon\\left(\\frac{\\tilde{\\rho}_{\\rm cr}}{\\tilde{\\rho}_{\\rm cr}-s}\\right)^{a}\\quad\\stackrel{(18)}{\\Longrightarrow}\\quad s(\\dot{u}_{k})=\\tilde{\\rho}_{\\rm cr}\\left[1-\\left(\\frac{\\upsilon}{\\dot{u}_{k}{\\rm e}^{2t}}\\right)^{1/a}\\right]. \\tag{25}\\] Employing the representation (A.3) for the function \\(I(d,t;\\dot{u}_{k})\\), we find that the derivative of the potential has to satisfy the equation \\[0=(\\tilde{\\rho}_{\\rm cr}-\\tilde{\\rho})-\\tilde{\\rho}_{\\rm cr}\\left(\\frac{\\upsilon}{\\dot{u}_{k}(\\tilde{\\rho})}\\right)^{1/a}{\\rm e}^{-(\\lambda/a)t}+\\frac{d-2}{2}\\tilde{\\rho}_{\\rm cr}\\,\\dot{u}_{k}(\\tilde{\\rho})\\,J(d,t;\\dot{u}_{k}), \\tag{26}\\] where the function \\(J(d,t;\\dot{u}_{k})\\) is defined in Eq. (A.5). Finally, Eq. (26) has to be solved for \\(\\dot{u}_{k}(\\tilde{\\rho})\\), which we shall do in the limit \\(k\\to 0\\) (\\(t\\to-\\infty\\)) for various cases in order to obtain the complete quantum effective potential. The following consideration will serve as a guide to the necessary approximations: at the cutoff \\(k=\\Lambda\\), the dimensionful potential is of the order of the cutoff, \\(U^{\\prime}_{\\Lambda}\\sim\\Lambda^{2}\\). For small deviations from the cutoff, \\(k/\\Lambda\\lesssim 1\\), the potential scales according to the linearized flow equation (Halpern-Huang equation): \\(\\dot{u}_{k}\\sim{\\rm e}^{-\\lambda t}\\). Then, the dimensionful potential scales as \\(U^{\\prime}_{k}\\sim{\\rm e}^{-(\\lambda-2)t}\\sim{\\rm e}^{-(d-2)at}\\). Therefore, if \\(a>0\\) (symmetry-preserving potentials), \\(U^{\\prime}_{k}\\) increases as we approach the infrared, whereas if \\(a<0\\) (symmetry-breaking potentials), \\(U^{\\prime}_{k}\\) decreases towards the infrared.
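Before turning to the analytical limits, we note that Eq. (26) is also well suited for a direct numerical treatment. The sketch below (our own illustration; the parameter values are assumptions, and \\(\\tilde{\\rho}\\) is measured in units of \\(\\tilde{\\rho}_{\\rm cr}\\)) solves Eq. (26) at the origin for a symmetry-preserving direction in \\(d=4\\), using \\(J(4,t;\\dot{u})\\) from the appendix; the combination \\(e^{2t}\\dot{u}_{k}(0)/\\upsilon\\) anticipates the mass renormalization made explicit in the next subsection.

```python
# Sketch (ours) solving the implicit large-N relation (26) in d = 4 at the
# origin, with J(4,t;u') = ln((e^{-2t}+u')/(1+u')).  rho_tilde is measured in
# units of rho_tilde_cr, so Eq. (26) divided by rho_tilde_cr becomes
#   0 = (1 - rho) - (ups/u')^(1/a) e^{-(lam/a)t} + (d-2)/2 * u' * J.
import numpy as np
from scipy.optimize import brentq

d, a, ups = 4, 2.0, 0.01          # symmetry-preserving direction, a > 0
lam = 2 + a * (d - 2)             # equivalent to Eq. (11) with eta = 0

def F(udot, rho, t):
    J = np.log((np.exp(-2.0 * t) + udot) / (1.0 + udot))
    return ((1.0 - rho) - (ups / udot)**(1.0 / a) * np.exp(-(lam / a) * t)
            + 0.5 * (d - 2) * udot * J)

for t in (-1.0, -2.0, -4.0, -8.0):
    udot0 = brentq(lambda u: F(u, 0.0, t), 1e-12, 1e12)
    # e^{2t} u'(0)/ups is the dimensionful ratio U'_k(0)/U'_Lambda(0)
    print(f"t = {t:5.1f}:   U'_k(0)/U'_Lambda(0) = "
          f"{np.exp(2.0 * t) * udot0 / ups:8.4f}")
# The ratio converges for t -> -infinity (here to about 13 for ups = 0.01,
# a = 2): this is the mass renormalization M^2/M_Lambda^2 derived below.
```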
Of course, this argument holds strictly only close to \\(k\\simeq\\Lambda\\), but it turns out to reproduce the unique consistent approximation schemes for extracting analytical results.

#### 3.3.1 Symmetry-preserving potentials in \\(d=4\\)

Let us first consider the \\(d=4\\) potentials with \\(a>0\\) and \\(U^{\\prime}_{\\Lambda}>0\\) that exhibit no symmetry breaking at the cutoff. Employing Eq. (A.6) and reinstating dimensionful quantities via Eq. (5), Eq. (26) reads, after neglecting terms of order \\(k^{2}/U^{\\prime}_{k}\\) in the limit \\(k\\to 0\\) (\\(U^{\\prime}_{k\\to 0}\\equiv U^{\\prime}\\)): \\[0=\\tilde{\\rho}_{\\rm cr}\\,U^{\\prime}(\\rho)\\ln\\left(1+\\frac{\\Lambda^{2}}{U^{\\prime}(\\rho)}\\right)-\\rho-\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}\\left(\\frac{U^{\\prime}_{\\Lambda}(0)}{U^{\\prime}(\\rho)}\\right)^{1/a}, \\tag{27}\\] where \\(U^{\\prime}_{\\Lambda}(0)=\\upsilon_{+}\\,\\Lambda^{2}\\equiv M_{\\Lambda}^{2}\\) denotes the mass of the theory at the cutoff. Let us study Eq. (27) in two limits: first, at \\(\\rho\\) close to \\(\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}\\), and second at the origin \\(\\rho\\to 0\\). At \\(\\rho\\) close to the potential wall at \\(\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}\\), the potential diverges, and we can approximate \\(\\Lambda^{2}\\ll U^{\\prime}(\\rho\\to\\tilde{\\rho}_{\\rm cr}\\Lambda^{2})\\), leading us to \\[U^{\\prime}(\\rho\\to\\tilde{\\rho}_{\\rm cr}\\Lambda^{2})=U^{\\prime}_{\\Lambda}(0)\\left(\\frac{\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}}{\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}-\\rho}\\right)^{a}. \\tag{28}\\] In this limit, the effective potential \\(U\\equiv U_{k\\to 0}\\) remains formally identical to the large-\\(N\\) Halpern-Huang potential (cf. Eq. (22))! This confirms our heuristic argument that the potential behaves stiffly under the flow in the region where it increases exponentially. Concerning the opposite limit \\(\\rho\\to 0\\), there would be no mass renormalization at all if Eq. (28) were also correct in this limit, \\(M^{2}:=U^{\\prime}(0)\\stackrel{?}{=}U^{\\prime}_{\\Lambda}(0)=M_{\\Lambda}^{2}\\). However, in this limit, the approximation \\(\\Lambda^{2}\\ll U^{\\prime}\\) no longer holds, and instead we deduce from Eq. (27) the transcendental equation \\[1=\\upsilon_{+}\\left(\\frac{M^{2}}{M_{\\Lambda}^{2}}\\right)^{(a+1)/a}\\ln\\left(1+\\frac{1}{\\upsilon_{+}}\\frac{M_{\\Lambda}^{2}}{M^{2}}\\right). \\tag{29}\\] Therefore, the mass renormalization is governed by the only free parameter of the theory, \\(\\upsilon_{+}\\): for large \\(\\upsilon_{+}\\), there is effectively no renormalization, whereas the renormalized mass \\(M^{2}\\) exceeds the "classical" mass \\(M_{\\Lambda}^{2}\\) for \\(\\upsilon_{+}\\lesssim 1\\). Typical values are \\(M^{2}\\simeq 10M_{\\Lambda}^{2}\\) for \\(\\upsilon_{+}=0.01\\) and \\(a=2\\); for larger values of the RG trajectory parameter \\(a\\), the mass shift even increases: \\(M^{2}\\simeq 100M_{\\Lambda}^{2}\\) for \\(\\upsilon_{+}=0.01\\) and \\(a=20\\). The \\(M^{2}/M_{\\Lambda}^{2}\\) relation is plotted against \\(\\upsilon_{+}\\) for various \\(a\\) in Fig. 2(a). By reintroducing the cutoff via \\(M_{\\Lambda}^{2}=\\upsilon_{+}\\Lambda^{2}\\), Eq. (29) can be interpreted differently by writing \\[\\upsilon_{+}=\\left(\\frac{M^{2}}{\\Lambda^{2}}\\right)^{a+1}\\left[\\ln\\left(1+\\frac{\\Lambda^{2}}{M^{2}}\\right)\\right]^{a}.
\\tag{30}\\] This equation tells us that the physical mass of the theory in the infrared can easily be smaller than the cutoff by many orders of magnitude, provided that \\(\\upsilon_{+}\\) is correspondingly small. Since \\(\\upsilon_{+}\\) sets the distance scale on the RG trajectory, the demand for a small value of \\(\\upsilon_{+}\\) is consistent with our scenario: if we leave the Gaussian fixed point with a very tiny perturbation \\(\\sim\\upsilon_{+}\\) at the high-energy scale \\(\\Lambda\\), it is only _natural_ to arrive at a low-energy theory with a similarly tiny mass compared to the cutoff. Moreover, consistency of our scenario requires \\(\\upsilon_{+}\\) to be small in order to justify the linearization of the flow equation in deriving the Halpern-Huang result. To summarize, the symmetry-preserving Halpern-Huang potential qualitatively does not change its form during the flow into the infrared; in particular, no symmetry breaking occurs. Only the slope of the potential at the origin increases for \\(k\\to 0\\), which corresponds to a mass renormalization.

#### 3.3.2 Symmetry-breaking potentials in \\(d=4\\)

Let us begin with a dimension-independent statement referring to the position of the minimum of symmetry-breaking Halpern-Huang potentials with \\(-1<a<0\\): in Subsec. 3.2 we learned that the position of the minimum of the Halpern-Huang potentials in the large-\\(N\\) limit is independent of the parameters \\(a\\) and \\(\\upsilon_{-}\\): \\(\\tilde{\\rho}_{\\rm min}(\\Lambda)=\\tilde{\\rho}_{\\rm cr}\\), or in dimensionful quantities: \\(\\rho_{\\rm min}(\\Lambda)=\\tilde{\\rho}_{\\rm cr}\\Lambda^{d-2}\\). According to the discussion following Eq. (20), the Halpern-Huang potentials are "fine-tuned" in the sense that the minimum vanishes exactly in the infrared limit \\(k\\to 0\\): \\[\\rho_{\\rm min}(k)=\\tilde{\\rho}_{\\rm cr}k^{d-2}. \\tag{31}\\] Therefore, there is no symmetry breaking in the full quantum theory of Halpern-Huang potentials in the large-\\(N\\) limit. Moreover, since \\(M^{2}=U^{\\prime}(\\rho=0)\\equiv U^{\\prime}_{k=0}(\\rho=0)=0\\), the potential is flat at the origin and the renormalized quantum theory is massless. Following the line of argument given below Eq. (26), in the inner region of the potential where \\(U^{\\prime}_{k}<0\\), the magnitude of \\(U^{\\prime}_{k}\\) decreases towards the infrared; hence we approximate \\(|U^{\\prime}_{k}|/\\Lambda^{2}\\ll 1\\) in Eq. (26) and obtain the transcendental equation in \\(d=4\\): \\[-U^{\\prime}_{k}=k^{2}-\\Lambda^{2}\\exp\\left(-\\frac{\\tilde{\\rho}_{\\rm cr}k^{2}-\\rho}{\\tilde{\\rho}_{\\rm cr}(-U^{\\prime}_{k})}\\right). \\tag{32}\\] Here we can read off that \\(|U^{\\prime}_{k}|\\) is always smaller than \\(k^{2}\\). This reflects the approach to convexity of the inner part of the effective potential. To summarize, we have found, on the one hand, that the originally nontrivial minimum of the potential moves to the origin during the flow; the inner region of the potential shrinks to a point. On the other hand, we know from the preceding subsection that the potential wall at \\(\\rho=\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}+0^{+}\\) does not change its position under the flow. It remains to be investigated what happens in between the minimum and the potential wall. Unfortunately, we cannot answer this question with the large-\\(N\\) version of the flow equation, because we do not have a boundary condition for this region.
At the cutoff \\(k=\\Lambda\\), the inner region borders directly on the potential wall; hence, there is no "in-between" that could serve as a boundary condition. Of course, it is plausible to assume that the potential may interpolate smoothly between the origin with zero slope and the potential wall at \\(\\rho=\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}+0^{+}\\) with infinite slope. But alternatively, the potential can also remain flat for \\(\\rho\\in[0,\\tilde{\\rho}_{\\rm cr}\\Lambda^{2}]\\), resembling a particle-in-a-box potential. Our ignorance about that part of the potential is unfortunately accompanied by our inability to predict the mass of the radial mode; but this should not come as a surprise, since the large-\\(N\\) limit neglects the radial mode anyway.

#### 3.3.3 Effective potentials in \\(d=3\\)

The investigation of the various types of potentials in \\(d=3\\) proceeds analogously to the \\(d=4\\) case, with almost identical results. In particular, the symmetry-breaking potentials offer no new information: the inner region shrinks to a point, while the potential minimum moves to zero for \\(k\\to 0\\), and the potential wall remains at \\(\\rho=\\tilde{\\rho}_{\\rm cr}\\Lambda\\). In between, no confirmed statement can be made within the large-\\(N\\) limit, since no boundary condition governs this part of the potential. For symmetry-preserving potentials with \\(a>0\\), the potential again remains in the same form as at the cutoff for values of \\(\\rho\\) close to the potential wall at \\(\\tilde{\\rho}_{\\rm cr}\\Lambda\\) (cf. Eq. (28) with \\(\\Lambda^{2}\\) replaced by \\(\\Lambda\\)). Close to the origin \\(\\rho\\to 0\\), the shape of the potential is modified; this is reflected by a mass renormalization. Employing the same line of argument as given above in \\(d=4\\), and using Eq. (A.7), we find the \\(d=3\\) analogue of Eq. (29): \\[1=\\sqrt{\\upsilon_{+}}\\left(\\frac{M}{M_{\\Lambda}}\\right)^{(a+2)/a}\\arctan\\frac{1}{\\sqrt{\\upsilon_{+}}}\\frac{M_{\\Lambda}}{M}. \\tag{33}\\] Again, we find that there is no mass renormalization for large values of \\(\\upsilon_{+}\\); corrections for small values of \\(\\upsilon_{+}\\) are plotted in Fig. 2(b).

Figure 2: Double-logarithmic plot of the renormalized-to-cutoff mass ratio depending on the RG distance parameter \\(\\upsilon_{+}\\) for (a) \\(d=4\\) (cf. Eq. (29)) and (b) \\(d=3\\) (cf. Eq. (33)) for various values of \\(a\\).

## 4 Conclusions

In the present paper, we have investigated the RG flow of particular nonpolynomial potentials for O(\\(N\\)) symmetric scalar theories using the effective-average-action method. These Halpern-Huang potentials arise from small relevant perturbations at the Gaussian fixed point as tangential directions to the RG flow. Apart from serious, unresolved problems with the continuum limit of these potentials, we were able to follow the flow from a given ultraviolet scale \\(\\Lambda\\) down to the nonperturbative infrared; for this, a number of approximations have been made which are only under limited control. In a first step, we have neglected the influence of possible derivative couplings on the flow of the potential. Secondly, assuming that the anomalous dimension is only weakly dependent on \\(k\\) and bounded, the qualitative features of the flow could already be guessed from the form of the flow equation: this is because the exponential increase of the potentials essentially causes the flow to stop for large enough field values.
Therefore, the form of the potentials was recognized as stiff under the flow; only the loose ends of the potential near the origin or possible extrema make room for more diversified behavior. These considerations have been verified explicitly in the large-\\(N\\) limit of the system. In this limit, the exponential increase of the potentials is represented by a potential wall. The potential close to the wall and the wall itself remain unchanged even in the far infrared. Those potentials with an O(\\(N\\)) symmetric ground state (\\(a>0\\)) at the cutoff preserve this symmetry down to \\(k\\to 0\\). Our main result for such potentials is summarized in Eqs. (29), (30) and (33), where the particular form of the mass renormalization is stated. Contrary to polynomial scalar interactions, where the mass varies \\(\\sim k^{2}\\) during the flow, the Halpern-Huang potentials exhibit corrections which are governed by the RG distance parameter \\(\\upsilon_{+}\\). In particular, if one demands that a renormalized (infrared) mass differ by several orders of magnitude from the cutoff scale \\(\\Lambda\\), the bare parameters of a _polynomial_ theory at the cutoff scale have to be fine-tuned accurately to several decimal places. By contrast, to achieve such a separation of mass scales with a _nonpolynomial_ Halpern-Huang potential, an adjustment of the RG distance parameter at the cutoff to some small value is required with much less precision. Additionally, the smallness of this value arises naturally if the (unknown) perturbation at the Gaussian fixed point is tiny. The (symmetry-preserving) Halpern-Huang potentials thus have no problem of _naturalness_. Owing to the general properties of the complete flow equation mentioned above, we believe that these properties of the symmetry-preserving potentials in the large-\\(N\\) limit also hold for finite values of \\(N\\). The status of the large-\\(N\\) limit is certainly different for Halpern-Huang potentials which offer spontaneous symmetry breaking (\\(-1<a<0\\)). These potentials exhibit the remarkable property that the nontrivial minimum persists for any finite value of \\(k\\) but vanishes in the complete quantum theory for \\(k\\to 0\\) in the large-\\(N\\) limit; the O(\\(N\\)) symmetry is restored and the potential becomes flat near the origin. The coincidence between the position of the minimum and the critical value of the field strength may finally be ascribed to the formal resemblance between the large-\\(N\\) flow equation and its linearized version determining the Halpern-Huang potentials. Since the complete flow equation is much more complex, it appears rather improbable that this property continues to hold for finite \\(N\\). Therefore, whether or not spontaneous symmetry breaking occurs in the quantum version of the Halpern-Huang potential at finite \\(N\\) remains an open question. The present investigation at least observes a tendency of the system to restore \\({\\rm O}(N)\\) symmetry. This is in concordance with [22], where a one-loop calculation for the effective potential reveals a restoration of \\({\\rm O}(N)\\) symmetry for potentials with \\(-1<a\\lesssim-0.585\\). In this context, a possible application of the Halpern-Huang potentials to the Higgs sector of the standard model is still questionable. Even if a quantum version of the potential with spontaneous symmetry breaking exists, the naturalness of the scalar sector alone is not sufficient to solve the hierarchy problem.
This is because the (standard) Yukawa coupling to the fermions leads to large scalar mass renormalizations by fermion loops. Therefore, some appropriate nonpolynomial interaction has to be chosen in this sector as well. Nevertheless, the price to be paid would not be too high, because not only could the hierarchy problem be circumvented without additional degrees of freedom, but the problem of "triviality" would also be evaded. From an intuitive point of view, the fact that the form of the potential is stable under the RG flow appears to be disappointing: since the potential remains inherently nonpolynomial, it is impossible to make contact with a would-be classical behavior that is determined by only a few (polynomial) terms. The latter is usually expected at large distances. For example, only for very weak fields do the first terms in a Taylor expansion of the Kummer function represent a good approximation. For stronger fields, the application of the Halpern-Huang potentials might therefore be limited in this sense. From a technical viewpoint, our calculations hold for \\(d>2\\). We have given explicit results for \\(d=3\\) and \\(d=4\\), and generalizations to higher dimensions are straightforward. The limiting case \\(d=2\\) has to be treated with great care for several reasons. First of all, finite-\\(N\\) results may only be trusted if the flow of the anomalous dimension \\(\\eta\\) is taken into account; at least in the case of polynomial potentials, this turned out to be obligatory [13] in order to obtain a good picture of the Kosterlitz-Thouless transition. Furthermore, the limit \\(d\\to 2\\) of the Halpern-Huang potentials offers several possibilities. It has already been observed variously in the literature (see, e.g., [17]) that the Sine-Gordon as well as the Liouville potentials solve the linearized flow equation in \\(d=2\\). In fact, as can easily be shown with the aid of some identities of [16], both types of potentials arise as limiting cases of the Halpern-Huang potentials for \\(N=1\\) in combination with the \\(\\phi\\to-\\phi\\) odd solution of the linearized flow equation: to be precise, the Sine-Gordon potential is recovered in the limit \\(d\\to 2^{+}\\) for \\(\\lambda>2\\), whereas the Liouville potential is obtained by taking the limit \\(d\\to 2^{-}\\) for \\(0<\\lambda<2\\). As far as Liouville theory is concerned, further similarities to the present results for the symmetry-preserving potentials are visible. In [23], the Liouville potential has also been found to behave stiffly under the RG flow, for similar reasons as in the present case. In particular, quantum Liouville theory appears to equal classical Liouville theory, except for a flow of the central charge by one unit and a modified mass parameter. These similarities confirm the viewpoint that the Halpern-Huang potentials can be regarded as higher-dimensional analogues of Liouville theory.

## Acknowledgement

The author wishes to thank W. Dittrich for helpful conversations and for carefully reading the manuscript. Useful discussions with R. Shaisultanov are also gratefully acknowledged.
## Appendix A Integrals for the large-\\(N\\) flow equation

In this appendix, we present some details about the function \\(I(d,t;\\dot{u}_{k})\\) appearing in the solution (16) to the flow equation (15); this function is defined as \\[I(d,t;\\dot{u}_{k}):=\\mathrm{e}^{-(d-2)t}\\int\\limits_{1}^{\\exp(-2t)}dw\\,\\frac{w^{-d/2}}{1+\\mathrm{e}^{2t}\\,\\dot{u}_{k}\\,w}.\\] (A.1) Substituting \\(w=\\exp[-2(T+t)]\\), we arrive at the form \\[I(d,t;\\dot{u}_{k})=2\\int\\limits_{0}^{-t}dT\\,\\frac{\\mathrm{e}^{(d-2)T}}{1+\\dot{u}_{k}\\,\\mathrm{e}^{-2T}},\\] (A.2) where \\(t=\\ln k/\\Lambda\\) is always nonpositive: \\(t\\in]-\\infty,0]\\). Separating the zeroth-order term of a Taylor expansion of the integrand, we find the convenient representation \\[I(d,t;\\dot{u}_{k})=i_{0}(d,t)-\\dot{u}_{k}\\,J(d,t;\\dot{u}_{k}),\\] (A.3) with the auxiliary functions \\(i_{0}(d,t)\\) and \\(J(d,t;\\dot{u}_{k})\\) defined by \\[i_{0}(d,t) := \\frac{2}{d-2}\\big{(}\\mathrm{e}^{-(d-2)t}-1\\big{)},\\] (A.4) \\[J(d,t;\\dot{u}_{k}) := 2\\int\\limits_{0}^{-t}dT\\,\\frac{\\mathrm{e}^{(d-4)T}}{1+\\dot{u}_{k}\\,\\mathrm{e}^{-2T}}.\\] (A.5) Note that \\(i_{0},J\\geq 0\\) for \\(t\\leq 0\\) and \\(\\dot{u}_{k}>-1\\). The explicit form of \\(J\\) depends on the spacetime dimension. For \\(d=4\\), the integral can easily be evaluated by standard means, yielding \\[J(4,t;\\dot{u}_{k})=\\ln\\frac{\\mathrm{e}^{-2t}+\\dot{u}_{k}}{1+\\dot{u}_{k}}.\\] (A.6) In \\(d=3\\), we take care of the possibility of a nontrivial minimum (spontaneous symmetry breaking) and find, to the right of a possible minimum, \\[J(3,t;\\dot{u}_{k}>0)=-\\frac{2}{\\sqrt{\\dot{u}_{k}}}\\left(\\arctan\\frac{1}{\\sqrt{\\dot{u}_{k}}}-\\arctan\\frac{\\mathrm{e}^{-t}}{\\sqrt{\\dot{u}_{k}}}\\right).\\] (A.7) In the "inner" region to the left of a possible minimum, we obtain \\[J(3,t;\\dot{u}_{k}<0)=\\frac{2}{\\sqrt{-\\dot{u}_{k}}}\\left(\\mbox{Artanh}\\frac{1}{\\sqrt{-\\dot{u}_{k}}}-\\mbox{Artanh}\\frac{\\mbox{e}^{-t}}{\\sqrt{-\\dot{u}_{k}}}\\right),\\] (A.8) where \\(\\dot{u}_{k}>-1\\) for reasons of consistency.

## References

* [1] K. Halpern and K. Huang, Phys. Rev. Lett. **74**, 3526 (1995).
* [2] K. Halpern and K. Huang, Phys. Rev. D **53**, 3252 (1996).
* [3] A.I. Larkin and D.E. Khmel'nitskii, Sov. J. Nucl. Phys. **29**, 1123 (1969); K.G. Wilson, Phys. Rev. Lett. **28**, 248 (1972).
* [4] F.J. Wegner and A. Houghton, Phys. Rev. A **8**, 401 (1973).
* [5] V. Branchina, hep-ph/0002013 (2000).
* [6] K. Langfeld and H. Reinhardt, Mod. Phys. Lett. A **13**, 2495 (1998); R.F. Langbein, K. Langfeld, H. Reinhardt and L. v. Smekal, Mod. Phys. Lett. A **11**, 631 (1996).
* [7] C. Wetterich, Phys. Lett. B **301**, 90 (1993).
* [8] T.R. Morris, Phys. Rev. Lett. **77**, 1658 (1996); K. Halpern and K. Huang, Phys. Rev. Lett. **77**, 1659 (1996).
* [9] C. Bagnuls and C. Bervillier, hep-th/0002034 (2000).
* [10] N. Tetradis and C. Wetterich, Nucl. Phys. B **422**, 541 (1994).
* [11] P. Hasenfratz, in Proc. _Advanced School of Non-Perturbative Quantum Field Physics_, edited by M. Asorey and A. Dobado, Singapore, World Scientific, 1998, hep-lat/9803027.
* [12] N. Tetradis and D.F. Litim, Nucl. Phys. B **464**, 492 (1996) [hep-th/9512073].
* [13] J. Berges, N. Tetradis and C. Wetterich, HD-THEP-00-26, hep-ph/0005122 (2000).
* [14] V. Periwal, Mod. Phys. Lett. A **11**, 2915 (1996).
* [15] J. Polchinski, Nucl. Phys. B **231**, 269 (1984).
* [16] M. Abramowitz and I.A. Stegun, _Handbook of Mathematical Functions_, National Bureau of Standards, Washington (1964).
* [17] A. Bonanno, Phys. Rev. D **62**, 027701 (2000).
* [18] J. Zinn-Justin, _Quantum Field Theory and Critical Phenomena_, Oxford University Press (1989).
* [19] J. Comellas and A. Travesset, Nucl. Phys. B **498**, 539 (1997).
* [20] The Kummer functions (Hypergeometric1F1[a,b,z]) can be treated numerically as well as partly algebraically with Mathematica, Version 4.0.1.0, Wolfram Research, Champaign (1999).
* [21] A. Hasenfratz and P. Hasenfratz, Nucl. Phys. B **270**, 687 (1986).
* [22] K. Halpern, Phys. Rev. D **57**, 6337 (1998).
* [23] M. Reuter and C. Wetterich, Nucl. Phys. B **506**, 483 (1997).
# The no-sticking effect in ultra-cold collisions

Areez Mody, Michael Haggerty and Eric J. Heller

_Department of Physics, Harvard University, Cambridge, MA 02138_

August 2000

## I Introduction

The problem of low energy sticking to surfaces has attracted much attention over the years [1, 2, 3, 4, 5]. The controversial question has been the ultralow energy limit of the incoming species, for either warm or cold surfaces. A battle has ensued between two countervailing effects, which we will call classical sticking and quantum reflection. The concept of quantum reflection is intimately tied into threshold laws, and was recognized in the 1930's by Lennard-Jones [1]. Essentially, flux is reflected from a purely attractive potential with a probability which goes as \\(1-\\alpha\\sqrt{\\epsilon}\\), as \\(\\epsilon\\to 0\\), where \\(\\alpha\\) is a constant and \\(\\epsilon\\) is the translational energy of the particle incident on the surface. Classically the transmission probability is unity. Reflection at long range prevents inelastic processes from occurring, but if the incoming particle should penetrate into the strongly attractive region, the ensuing acceleration and hard collision with the repulsive short range part of the potential leads to a high probability of inelastic processes and sticking. The blame for the quantum reflection can be laid at the feet of the WKB approximation, which breaks down in the long range attractive part of the potential at low energy. Very far out, the WKB is good even for low energy, because the potential is so nearly flat. Close in, the kinetic energy is high, because of the attractive potential, even if the asymptotic energy is very low, and again WKB is accurate. But in between there is a breakdown, which has been recognized and exploited by several groups [6, 7, 8, 9, 10, 11]. We show in the paper following this one that the breakdown occurs in a region around \\(|V|\\approx\\epsilon\\), i.e. approximately where the kinetic and potential energies are equal. It would seem that quantum reflection would settle the issue of sticking, since if the particle doesn't make it in close to the surface there is no sticking (Fig. 1). There is one caveat, however, which must be considered: quantum reflection can be defeated by the existence of a resonance in the internal region, i.e. a threshold resonance (Fig. 2). The situation is very analogous to a high-Q Fabry-Perot cavity, where using nearly 100% reflective, parallel mirrors gives near 100% reflection except at very specific wavelengths. At these specific energies a resonance buildup occurs in the interior of the cavity, permitting near 100% transmission. Such resonances are rare in a one-dimensional world, but the huge number of degrees of freedom in a macroscopic solid particle makes resonances ubiquitous. Indeed, the act of colliding with the surface, creating a phonon and dropping into a local bound state of the attractive potential describes a Feshbach resonance. Thus, the resonances are just the sticking we are investigating, and we must not treat them lightly! Perhaps it is not obvious after all whether sticking occurs. After the considerable burst of activity surrounding the sticking issue on the surface of liquid Helium [12, 13], and after a very well executed theoretical study by Clougherty and Kohn [4], the controversy has settled down, and the common wisdom has grown that sticking does not occur at sufficiently low energy.
While we agree with this conclusion, we believe the theoretical foundation for it is not complete, nor stated in a wide enough domain of physical situations. For example, Ref. [4] treats only a harmonic slab with one- or two-phonon excitations. It is not clear whether the results apply to a warm surface. On the experimental side, even though quantum reflection was observed from a liquid Helium surface, that surface has a very low density of available states (essentially only the ripplons), which could be a special case with respect to sticking. Thus, the need for a more rigorous and clear proof of non-sticking in general circumstances is evident. This paper gives such an analysis. In a following paper, application is made to specific atom-surface and slab combinations, and the rollover to the sticking regime as energy is increased (which can be treated essentially analytically) is given. The strategy we use puts a very general and exact scattering formalism to work, providing a template into which to insert the properties of our target and scatterer. Then very general results emerge, such as the non-sticking theorem at zero energy. The usual procedure of defining model potentials and considering one-phonon processes etc. is not necessary. All such model potentials and Hamiltonians wind up as parameters in the R-matrix formalism. The details of a particular potential are of course important for quantitative results, but the range of possible results can be much more easily examined by inserting various parameters into the R-matrix formalism. All the possible choices of R-matrix parameters give the correct threshold laws. Certain trends are built into the R-matrix formalism which are essentially independent of the details of the potentials. Before commencing with the R-matrix treatment, we briefly consider the problem perturbatively in order to better elucidate the role played by quantum reflection. We emphasize that none of the perturbation section is actually necessary for our final conclusions. In a perturbative treatment for our slab geometry, quantum reflection simply results in the entrance channel's wavefunction (at threshold) having its amplitude in the interaction region go to zero as \\(k_{e}\\sim\\sqrt{\\epsilon}\\) when normalized to have a fixed incoming flux (\\(k_{e}\\) is the magnitude \\(|\\vec{k}_{e}|\\) of the incident wavevector of the incoming atom). The inelastic transition probabilities are proportional to the potential-weighted overlap of the channel wavefunctions, and this immediately leads to the conclusion that the inelastic probability itself vanishes as \\(k_{e}\\sim\\sqrt{\\epsilon}\\). As mentioned, this conclusion is shown to remain rigorously true using the R-matrix. We show in this paper that in spite of the inherently many-body nature of the problem, in the ultra-cold limit we can correctly obtain the long-range form of the entrance channel's wavefunction by solving for the one-dimensional motion in the long-range surface-atom attraction (i.e. the diagonal element of the many-channel potential matrix). This allows quantitative predictions of the sticking probability, which we do in the following paper. There, we further exploit the perturbative point of view together with an analysis of WKB to predict a 'post-threshold' behavior as quantum reflection abates when the incoming energy is increased.

## II Geometry and notation

The incident atom is treated as a point particle at position \\((x,y)\\).
To keep the notation simple we leave out the \\(z\\)-coordinate and confine our discussion to two spatial dimensions. Thus a cross-section will have dimensions of length, etc. It will be quite obvious how and where \\(z\\) may be inserted in all that follows. Let \\(u\\) represent all the bound degrees of freedom of the scattering target, which we take to be a slab of crystalline or amorphous material. Let \\(\\Omega_{c}(u)\\), \\(c=1,2,\\cdots\\), be the many-body target wavefunctions in the absence of interactions with the incident particle, having energies \\(E_{c}^{\\rm target}\\). These are normalized as \\(\\int_{\\rm all\\ u}du\\;|\\Omega_{c}(u)|^{2}=1\\). \\(x\\) is the distance of the scatterer (atom) from the face of the slab, which is approximately (because the wall is rough) along the line \\(x=0\\).

Figure 1: The stationary-state one-body wavefunction of the incident atom moving in the \\(y\\)-independent mean potential felt by it. The amplitude inside the interaction region is suppressed by \\(k_{e}\\sim\\sqrt{\\epsilon}\\). This is tantamount to the reflection of the atom.

Figure 2: A schematic view of a Feshbach resonance wherein the incident atom forms a long-lived quasi-bound state with the target. The many-body wavefunction in this situation (not shown) has a large amplitude in the ‘interior’ region near the slab.

The internal constituents of the slab lie to the left of \\(x=0\\) and the scatterer is incident from the right with kinetic energy \\(\\epsilon=\\hbar^{2}k_{e}^{2}/2m\\). The total energy \\(E\\) of the system is \\[E=\\epsilon+E_{e}^{\\rm target} \\tag{1}\\] where \\(c=e\\) is the index of the 'entrance channel', i.e. the initial internal state of the slab before the collision is \\(\\Omega_{e}(u)\\). Notice that we say nothing about the value of \\(E_{e}^{\\rm target}\\) itself. In particular the slab need not be cold. \\(k_{c}\\) is the magnitude of the wave vector \\(\\vec{k}_{c}\\) of the particle when it leaves the target in the state \\(\\Omega_{c}(u)\\) after the collision. Our interest focuses on \\(k_{e}\\to 0\\), where \\(k_{e}\\) is the magnitude of the wavevector of the incoming particle. For the open channels \\(c=1,\\cdots n\\) (this defines \\(n\\)), for which \\(E>E_{c}^{\\rm target}\\), \\[k_{c}\\equiv\\sqrt{\\frac{2m(E-E_{c}^{\\rm target})}{\\hbar^{2}}}\\qquad(c\\leq n)\\;; \\tag{2}\\] whereas for the closed channels (\\(c>n\\)), \\(E<E_{c}^{\\rm target}\\) and \\[k_{c}\\equiv i\\sqrt{\\frac{2m(E_{c}^{\\rm target}-E)}{\\hbar^{2}}}\\equiv i\\kappa_{c}\\qquad(c>n)\\;, \\tag{3}\\] with \\(\\kappa_{c}>0\\). We will use \\((k_{cx},k_{cy})\\) as the \\(x,y\\) components of \\(\\vec{k}_{c}\\). Let \\(U_{\\rm int}(x,y,u)=(2m/\\hbar^{2})V_{\\rm int}(x,y,u)\\), where \\(V_{\\rm int}(x,y,u)\\) describes quite generally the interaction potential between the incident atom and all the internal degrees of freedom of the slab. For simplicity we assume for the moment that there is no interaction between slab and atom for \\(x>a\\).

## III Preliminaries: Perturbation

As stated above, we exercise the perturbative treatment for insight only; our final conclusions are based on nonperturbative arguments. We treat the interaction \\(U_{\\rm int}(x,y,u)\\) between slab and atom by separating out a 'mean' potential felt by the atom that is independent of \\(y\\) and \\(u\\); call it \\(U^{(0)}(x)\\). The remainder \\(U^{(1)}(x,y,u)\\equiv U_{\\rm int}(x,y,u)-U^{(0)}(x)\\) is treated as a perturbation. Now the incident beam is scattered by the entire length (say from \\(y=-L\\) to \\(y=L\\), a length \\(2L\\)) of wall which it illuminates. If all measurements are made close to the wall so that its length \\(2L\\) is the largest scale in the problem, then it is appropriate to speak of a cross-section per unit length of wall, a dimensionless probability. More specifically, we will assume that the matrix elements \\(U^{(1)}_{cc^{\\prime}}(x,y)\\equiv\\int\\limits_{\\rm all\\ u}du\\,\\Omega_{c}^{*}(u)U^{(1)}(x,y,u)\\Omega_{c^{\\prime}}(u)\\) of the perturbation \\(U^{(1)}(x,y,u)\\) in the \\(\\Omega_{c}(u)\\) basis are given by the simple form \\(U^{(1)}_{cc^{\\prime}}(x,y)=U^{(1)}_{cc^{\\prime}}(x)f(y)\\) for \\(y\\in[-L,L]\\) and \\(0\\) elsewhere. \\(f(y)\\) is a random persistent (does not die to \\(0\\) as \\(L\\to\\infty\\)) function that models the random roughness of the slab and is characterized by its so-called spectral density function \\(S\\), a smooth positive-valued non-random function, such that \\[\\left|\\int\\limits_{-L}^{L}dy\\,e^{iky}f(y)\\right|^{2}\\equiv 2LS(k)\\quad\\forall k \\tag{4}\\]
Now the incident beam is scattered by the entire length of wall which it illuminates (say from \\(y=-L\\) to \\(y=L\\), a length \\(2L\\)). If all measurements are made close to the wall, so that its length \\(2L\\) is the largest scale in the problem, then it is appropriate to speak of a cross-section per unit length of wall, a dimensionless probability. More specifically, we will assume that the matrix elements \\(U^{(1)}_{cc^{\\prime}}(x,y)\\equiv\\int\\limits_{{\\rm all}\\ u}du\\,\\Omega_{c}^{*}(u)U^{(1)}(x,y,u)\\Omega_{c^{\\prime}}(u)\\) of the perturbation \\(U^{(1)}(x,y,u)\\) in the \\(\\Omega_{c}(u)\\) basis are given by the simple form \\(U^{(1)}_{cc^{\\prime}}(x,y)=U^{(1)}_{cc^{\\prime}}(x)f(y)\\) for \\(y\\in[-L,L]\\) and \\(0\\) elsewhere. \\(f(y)\\) is a random, persistent function (it does not die away as \\(|y|\\to\\infty\\)) that models the random roughness of the slab and is characterized by its so-called spectral density function \\(S\\), a smooth, positive-valued, non-random function, such that \\[\\left|\\int\\limits_{-L}^{L}dy\\,e^{iky}f(y)\\right|^{2}\\equiv 2LS(k)\\quad\\forall k \\tag{4}\\] as \\(L\\to\\infty\\). Now, applying either time-independent perturbation theory (equivalently, the Born approximation for this geometry) or time-dependent perturbation theory via the Golden Rule gives the cross-section per unit length of wall for inelastic scattering to a final channel \\(c\\) as \\[P_{c\\gets e}^{\\rm in}(\\theta)=\\frac{2\\pi}{k_{e}}\\left(\\int\\limits_{-\\infty}^{a}dx^{\\prime}\\,\\phi(x^{\\prime};k_{cx})U_{ce}^{(1)}(x^{\\prime})\\,\\phi(x^{\\prime};k_{ex})\\right)^{2}S(k_{cy}-k_{ey}) \\tag{5}\\] where \\(\\phi(x;k_{x})\\) is the solution of the o.d.e. \\[\\left(\\frac{d^{2}}{dx^{2}}-U^{(0)}(x)+k_{x}^{2}\\right)\\phi(x;k_{x})=0 \\tag{6}\\] which is regular, i.e. goes to zero, as \\(x\\to-\\infty\\) inside the slab, and is normalized as \\[\\phi(x;k_{x})\\sim\\sin(k_{x}x+\\delta)\\quad{\\rm as}\\ x\\to\\infty \\tag{7}\\] Accepting for the moment that as \\(k_{e}\\to 0\\) the amplitude of \\(\\phi(x;k_{ex})\\) in the internal region \\(x<a\\) goes to zero as \\(k_{e}\\sim\\sqrt{\\epsilon}\\), the square of the overlap integral in Eq. (5) behaves as \\(k_{e}^{2}\\), because by our proposition the amplitude of \\(\\phi(x^{\\prime};k_{ex})\\sim k_{ex}\\sim k_{e}\\). Together with the \\(1/k_{e}\\) prefactor, we get an overall behavior of \\(k_{e}\\) for the inelastic probability, as claimed. To show that indeed as \\(k_{e}\\to 0\\) the amplitude of \\(\\phi(x;k_{ex})\\) in the internal region \\(x<a\\) goes to zero as \\(k_{e}\\sim\\sqrt{\\epsilon}\\), we temporarily disregard the required normalization of Eq. (7) and fix the initial conditions (value and slope) of \\(\\phi(x;k_{x})\\) at some point inside the interaction region \\(x<a\\) such that the regularity condition is ensured. We then integrate out to \\(x=a\\), and denote this unnormalized solution with a prime, as \\(\\phi^{\\prime}(x;k_{x})\\). The point is that for \\(k_{x}\\) varying near \\(0\\), both \\(v\\) (the value) and \\(s\\) (the slope) with which the solution emerges at \\(x=a\\) are independent of \\(k_{x}\\); in fact the interior solution thus obtained is itself independent of \\(k_{x}\\). This is because the local wave vector \\(k(x)=\\sqrt{k_{x}^{2}-U^{(0)}(x)}\\) stays essentially the same function of \\(x\\) for all \\(\\epsilon\\) near \\(0\\).
Therefore, for \\(x>a\\), \\(\\phi^{\\prime}(x;k_{x})\\) continues as \\[v\\cos[k_{x}(x-a)]+\\frac{s}{k_{x}}\\sin[k_{x}(x-a)]\\qquad x>a \\tag{8}\\] This is a phase-shifted sine wave of amplitude \\(\\sim 1/k_{x}\\). Enforcing the normalization of Eq. (7) gives \\(\\phi(x;k_{x})\\sim k_{x}\\phi^{\\prime}(x;k_{x})\\). As a result, the interior solution gets multiplied by \\(k_{x}\\), and we thereby have our result. \\(\\phi(x;k_{x})\\) is the solution of a one-dimensional Schrödinger equation for the incoming particle in the one-dimensional long-range potential created by the slab. The suppression of its amplitude by \\(\\sqrt{\\epsilon}\\) near the slab is due to the reflection it suffers where the interaction turns on. Within the perturbative set-up, the non-sticking conclusion is then already foregone [1]. The problem is whether we can really accept this verdict of the one-dimensional unperturbed solution when, in fact, we know that turning on the perturbation (the many-body interactions) creates a multitude of resonances - and internal resonances are exactly the situation in which the proposition above is known to fail badly. The perturbation is in no sense a small physical effect; a nonperturbative approach is therefore needed. Here we use R-matrix theory in its general form to accomplish the task.

## IV S-matrix and R-matrix

One point that the preceding section has made clear is that it is the energies (both initial and final) in the \\(x\\)-direction, perpendicular to the slab, that are most relevant. In fact, as regards the final form of our answers, the motion in the \\(y\\) degree of freedom may as well have been the motion of another internal degree of freedom of the slab. In other words, mathematically speaking, the \\(y\\) degree of freedom may be subsumed by incorporating it as just another \\(u\\). For example, we may imagine the incident atom being confined in the \\(y\\)-direction by the walls of a wave-guide at \\(y=-L\\) and \\(y=L\\) that is large enough that it could not possibly change the physics of sticking. Then we quite rigorously have bound internal states of the form \\[\\Omega_{c,n}(y,u)=\\Omega_{c}(u)\\sin\\frac{n\\pi y}{L} \\tag{9}\\] and \\(x\\) is now the only scattering degree of freedom. There will be no necessity to carry along the extra index \\(n\\) and variable \\(y\\) as in Eq. (9), and we will simply continue to write \\(\\Omega_{c}(u)\\). Thus, with this understanding, the problem is essentially one-dimensional in the scattering degree of freedom. We proceed to derive the expression for the \\({\\bf S}\\) matrix in terms of the so-called \\({\\bf R}\\) matrix, and to derive the structure of the \\({\\bf R}\\) matrix. For simplicity we continue to assume for the moment that there is no interaction for \\(x>a\\). Then, for \\(x>a\\), the scattering wavefunction of the interacting system corresponding to the particle coming in on one entrance channel, say \\(c=e\\), with energy \\(\\epsilon=\\hbar^{2}k_{e}^{2}/(2m)\\), is \\[\\psi(x,u)=\\sum_{c=1}^{\\infty}\\left(\\frac{e^{-ik_{e}x}}{\\sqrt{k_{e}}}\\delta_{ce}-\\frac{e^{ik_{c}x}}{\\sqrt{k_{c}}}S_{ce}\\right)\\Omega_{c}(u)\\qquad x>a \\tag{10}\\] where the sum must include all channels, even though the open channels are finite in number. The factors of \\(k_{c}^{-1/2}\\) in Eq. (10) mean that the flux in each channel is proportional only to the square of the coefficient, and hence ensure the unitarity of \\({\\bf S}\\).
With this convention, the open-open part of the \\({\\bf S}\\)-matrix--the \\(n\\times n\\) submatrix \\(S_{cc^{\\prime}}\\) with \\(c,c^{\\prime}=1,2,\\ldots,n\\)--is unitary. For the closed channels, the branch \\(\\sqrt{k_{c}}\\equiv e^{i\\pi/4}\\sqrt{\\kappa_{c}}\\) may be chosen arbitrarily, since it cannot affect the open-open part of \\({\\bf S}\\). \\({\\bf S}\\) is found in analogy to the one-dimensional case by introducing the matrix version of the inverse logarithmic derivative at \\(x=a\\): the Wigner \\({\\bf R}\\)-matrix \\({\\bf R}(E)\\), defined by \\[\\vec{v}={\\bf R}(E)\\ \\vec{s} \\tag{11}\\] where the components of \\(\\vec{v}\\) and \\(\\vec{s}\\) are the expansion coefficients of \\(\\psi(x=a,u)\\) and \\(\\frac{\\partial\\psi(x=a,u)}{\\partial x}\\), respectively, in the \\(\\Omega_{c}(u)\\) basis. Supposing \\(\\frac{\\partial\\psi(x=a,u)}{\\partial x}\\) to be known, we will (as in electrostatics) use the Neumann Green's function \\(G_{N}(x,u;x^{\\prime},u^{\\prime})\\) to construct \\(\\psi(x,u)\\) everywhere in the interior \\(x<a\\). \\(\\psi(x,u)\\) satisfies the full Schrödinger equation with energy \\(E\\). We need \\(\\chi_{\\lambda}(x,u)\\), \\(\\lambda=1,2,\\cdots\\), the normalized eigenfunctions of the full Schrödinger equation in the interior \\(x<a\\) with energies \\(E_{\\lambda}\\), satisfying the Neumann boundary condition \\(\\frac{\\partial\\chi_{\\lambda}(x=a,u)}{\\partial x}=0\\). So \\[\\left(\\frac{-\\hbar^{2}}{2m}\\nabla^{2}+V_{\\rm int}(x,u)-E\\right)\\psi(x,u)=0 \\tag{12}\\] \\[\\left(\\frac{-\\hbar^{2}}{2m}\\nabla^{2}+V_{\\rm int}(x,u)-E_{\\lambda}\\right)\\chi_{\\lambda}(x,u)=0 \\tag{13}\\] \\[\\left(\\frac{-\\hbar^{2}}{2m}\\nabla^{2}+V_{\\rm int}(x,u)-E\\right)G_{N}(x,u;x^{\\prime},u^{\\prime})=\\delta(x-x^{\\prime})\\delta(u-u^{\\prime}) \\tag{14}\\] where \\(\\nabla^{2}\\equiv\\frac{\\partial^{2}}{\\partial x^{2}}+\\frac{\\partial^{2}}{\\partial u^{2}}\\), and \\[\\frac{\\partial G_{N}(x=a,u;x^{\\prime},u^{\\prime})}{\\partial x}=0\\qquad{\\rm and}\\qquad\\frac{\\partial\\chi_{\\lambda}(x=a,u)}{\\partial x}=0 \\tag{15}\\] \\[\\Rightarrow G_{N}(x,u;x^{\\prime},u^{\\prime})=\\sum_{\\lambda=1}^{\\infty}\\frac{\\chi_{\\lambda}(x,u)\\chi_{\\lambda}(x^{\\prime},u^{\\prime})}{E_{\\lambda}-E} \\tag{16}\\] \\(G_{N}\\) is symmetric in the primed and unprimed variables. By Green's theorem, \\[(-\\hbar^{2}/2m)\\int\\limits_{x^{\\prime}<a}dx^{\\prime}\\int\\limits_{{\\rm all}\\ u^{\\prime}}du^{\\prime}\\left(\\phi_{1}\\nabla^{\\prime 2}\\phi_{2}-\\phi_{2}\\nabla^{\\prime 2}\\phi_{1}\\right)=(-\\hbar^{2}/2m)\\int\\limits_{x^{\\prime}=a,\\ {\\rm all}\\ u^{\\prime}}du^{\\prime}\\left(\\phi_{1}\\nabla^{\\prime}_{\\hat{n}}\\phi_{2}-\\phi_{2}\\nabla^{\\prime}_{\\hat{n}}\\phi_{1}\\right) \\tag{17}\\] where \\(\\nabla^{\\prime}_{\\hat{n}}(\\cdot)\\equiv\\hat{x}^{\\prime}\\cdot\\nabla^{\\prime}(\\cdot)\\), which with \\(\\phi_{1}=\\psi(x^{\\prime},u^{\\prime})\\) and \\(\\phi_{2}=G_{N}(x,u;x^{\\prime},u^{\\prime})\\) gives \\[\\psi(x,u)=\\frac{\\hbar^{2}}{2m}\\int\\limits_{{\\rm all}\\ u^{\\prime}}du^{\\prime}\\ G_{N}(x,u;x^{\\prime},u^{\\prime})\\frac{\\partial\\psi(x^{\\prime}=a,u^{\\prime})}{\\partial x^{\\prime}}\\qquad x<a \\tag{18}\\] Putting \\(x=a\\), it is deduced using Eqs. (11) and (18) together that \\[R_{cc^{\\prime}}(E)=\\sum_{\\lambda=1}^{\\infty}\\frac{\\gamma_{\\lambda c}\\gamma_{\\lambda c^{\\prime}}}{E_{\\lambda}-E} \\tag{19}\\] where \\(\\gamma_{\\lambda c}=\\sqrt{\\frac{\\hbar^{2}}{2m}}\\int\\limits_{{\\rm all}\\ u}du\\ \\chi_{\\lambda}(a,u)\\Omega_{c}(u)\\).
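Though nothing in the argument requires numerics, the pole structure of Eq. (19) is easy to play with concretely. The sketch below (all parameters invented; no real slab is implied) samples random reduced-width amplitudes \\(\\gamma_{\\lambda c}\\) and poles \\(E_{\\lambda}\\), builds \\({\\bf R}(E)\\), and - anticipating the result of the next subsection, Eq. (22) - verifies that the resulting open-open block of \\({\\bf S}\\) is unitary with the closed-channel branch \\(\\sqrt{k_{c}}=e^{i\\pi/4}\\sqrt{\\kappa_{c}}\\) chosen as above.

```python
import numpy as np

# Toy R-matrix of the pole form of Eq. (19); every parameter here is invented.
rng = np.random.default_rng(0)
n_open, n_closed, n_poles = 4, 6, 40
n_ch = n_open + n_closed
E_lam = np.sort(rng.uniform(0.0, 10.0, n_poles))        # R-matrix poles
gamma = 0.05 * rng.standard_normal((n_poles, n_ch))     # reduced-width amplitudes

def R_matrix(E):
    # R_cc'(E) = sum_lam gamma_{lam c} gamma_{lam c'} / (E_lam - E)
    return np.einsum('l,lc,ld->cd', 1.0/(E_lam - E), gamma, gamma)

def S_matrix(E, k_open, kappa_closed, a=1.0):
    # Channel momenta: k_c real for open channels, i*kappa_c for closed ones.
    # numpy's principal sqrt gives sqrt(i kappa) = e^{i pi/4} sqrt(kappa),
    # exactly the branch choice quoted in the text.
    k = np.concatenate([k_open, 1j*kappa_closed])
    sqrtk = np.sqrt(k.astype(complex))
    X = sqrtk[:, None] * R_matrix(E) * sqrtk[None, :]   # sqrt(k) R sqrt(k)
    eye = np.eye(n_ch)
    S = np.linalg.solve(eye - 1j*X, eye + 1j*X)         # inner part of Eq. (22)
    phase = np.diag(np.exp(-1j*k*a))
    return phase @ S @ phase

S = S_matrix(5.3, k_open=np.array([1.0, 0.8, 0.5, 0.2]),
             kappa_closed=np.full(6, 1.5))
S_oo = S[:n_open, :n_open]
print(np.allclose(S_oo @ S_oo.conj().T, np.eye(n_open)))   # True: unitary block
```

The check passes to machine precision for any real symmetric \\({\\bf R}\\), which is the matrix-algebra content of the flux-conservation remark above.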
### The S matrix

Now shifting attention to the outside (\\(x>a\\)), we see that we can compute both \\(\\nabla_{\\hat{n}}\\psi(a,u)\\) and \\(\\psi(a,u)\\) on the surface \\(x=a\\) using the asymptotic form of Eq. (10), which automatically gives these expanded in the \\(\\Omega_{c}(u)\\) basis. Writing the matrix Eq. (11) is now simple. It is best to do it all in matrix notation, and thus be able to treat all possible independent asymptotic boundary conditions simultaneously. Let \\(e^{ikx}\\), \\(\\sqrt{k}\\) and \\(1/\\sqrt{k}\\) be diagonal matrices with diagonal elements \\(e^{ik_{c}x}\\), \\(\\sqrt{k_{c}}\\) and \\(1/\\sqrt{k_{c}}\\). Then Eq. (11) reads \\[\\frac{e^{-ika}}{\\sqrt{k}}-\\frac{e^{ika}}{\\sqrt{k}}{\\bf S}=i{\\bf R}k\\left(\\frac{-e^{-ika}}{\\sqrt{k}}-\\frac{e^{ika}}{\\sqrt{k}}{\\bf S}\\right)\\;. \\tag{20}\\] Each column \\(c=1,\\ldots,n\\) of the matrix equation above is just Eq. (11) for the solution corresponding to an incoming wave only in channel \\(c\\). (For \\(c>n\\) the wavefunctions blow up as \\(x\\rightarrow\\infty\\).) Remembering that non-diagonal matrices do not commute, we solve for \\({\\bf S}\\) to get \\[{\\bf S}=e^{-ika}\\sqrt{k}\\frac{1}{1-i{\\bf R}k}(1+i{\\bf R}k)\\frac{1}{\\sqrt{k}}e^{-ika} \\tag{21}\\] or, with some simple matrix manipulation, \\[{\\bf S}=e^{-ika}\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}(1+i\\sqrt{k}{\\bf R}\\sqrt{k})e^{-ika}\\;. \\tag{22}\\]

## V S Matrix Near a Resonance

As discussed in the introduction, the resonances are key to the sticking issue. Sticking is essentially a long-lived Feshbach resonance in which energy has been supplied to surface and bulk degrees of freedom, temporarily dropping the scattering particle into a bound state of the attractive potential. Thus we must study resonances in various circumstances in the low incident translational energy regime. We derive the approximation for \\({\\bf S}(E)\\) near \\(E=E_{0}\\), a resonant energy of the compound system; \\(E_{0}\\) is the total energy of the joined (resonant) system. Within the R-matrix approach, the \\(\\chi_{\\lambda}(x,u)\\) of Section IV are bound, compound states with Neumann boundary conditions at \\(x=a\\). R-matrix theory properly couples these bound states to the continuum, but some of the eigenstates are nonetheless weakly coupled to it, as evidenced by small values of the \\(\\gamma_{\\lambda c}\\)'s of Section IV; these are the measure of the strength of the continuum couplings. While every one of the R-matrix bound states will result in a pole \\(E_{\\lambda}\\) in the R-matrix expansion, only the weakly coupled ones are the true long-lived Feshbach resonances of physical interest. It is also helpful to know that the values of these 'truly' resonant poles at \\(E_{\\lambda}\\) are the most stable to changes in the position \\(x=a\\) of the box; this in fact provides one unambiguous way to identify them. Our purpose here is to derive the resonant approximation to the \\({\\bf S}\\) matrix in the vicinity of one of these Feshbach resonances. We do so using the form of the \\({\\bf R}\\)-matrix in Eq. (19). Note that the energy density \\(\\rho(E)=1/D(E)\\) of these Feshbach resonances will be large because of the large number of degrees of freedom of the target. \\(D(E)\\) is the level spacing of the quasibound, resonant states.

### Isolated Resonance

As mentioned, the point of view we take is to identify a resonant energy with a particular pole \\(E_{\\lambda}\\) in the \\({\\bf R}\\)-matrix expansion of Eq. (19).
Those \\(E_{\\lambda}\\) corresponding to resonances are a subsequence of the \\(E_{\\lambda}\\) appearing in the expansion in Eq. (19). For \\(E\\) near a well-isolated resonance at \\(E_{\\lambda}\\), we separate the sum-over-poles expansion of the R-matrix into a single matrix term having elements \\(\\frac{\\gamma_{\\lambda c}\\gamma_{\\lambda c^{\\prime}}}{E_{\\lambda}-E}\\), plus a sum over all the remaining terms, call it \\(N\\). If the energy interval between \\(E_{\\lambda}\\) and all the other poles is large compared to the open-open residue at \\(E_{\\lambda}\\), then we may expect all the elements of the \\(n\\times n\\) open-open block of \\(N\\) to be small. Then, rewriting the inverse in Eq. (22), \\[\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}\\equiv\\frac{1}{1-i\\left(M+\\frac{V}{E_{\\lambda}-E}\\right)} \\tag{23}\\] where \\(M\\equiv\\sqrt{k}N\\sqrt{k}\\) and \\(V_{cc^{\\prime}}\\equiv(\\sqrt{k_{c}}\\gamma_{\\lambda c})(\\sqrt{k_{c^{\\prime}}}\\gamma_{\\lambda c^{\\prime}})\\), and setting \\(M=0\\) allows us to simplify the central term in Eq. (22) exactly. (We will return to the case \\(M\\neq 0\\).) \\[\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}(1+i\\sqrt{k}{\\bf R}\\sqrt{k}) \\tag{24}\\] \\[=1+\\frac{1}{1-i\\sqrt{k}{\\bf R}\\sqrt{k}}2i\\sqrt{k}{\\bf R}\\sqrt{k} \\tag{25}\\] \\[=1+\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}2i\\frac{V}{E_{\\lambda}-E}\\qquad({\\rm with}\\ M=0) \\tag{26}\\] \\[=1+\\frac{1}{E_{\\lambda}-E-iV}2iV \\tag{27}\\] \\[=1+\\frac{1}{E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})}2iV \\tag{28}\\] where we used \\[V^{2}=\\left((\\gamma_{\\lambda 1}^{2}k_{1}+\\cdots+\\gamma_{\\lambda n}^{2}k_{n})+(\\gamma_{\\lambda(n+1)}^{2}k_{n+1}+\\cdots)\\right)V \\tag{29}\\] \\[\\equiv\\left(\\left(\\frac{\\Gamma_{\\lambda 1}}{2}+\\cdots+\\frac{\\Gamma_{\\lambda n}}{2}\\right)+i(\\gamma_{\\lambda(n+1)}^{2}\\kappa_{n+1}+\\cdots)\\right)V \\tag{30}\\] \\[\\equiv\\left(\\frac{\\Gamma_{\\lambda}}{2}+i\\Delta E_{\\lambda}\\right)V \\tag{31}\\] to get the identities \\[[E_{\\lambda}-E-iV]V=[E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})]V \\tag{32}\\] \\[\\Rightarrow\\frac{1}{E_{\\lambda}-E-i(\\Gamma_{\\lambda}/2+i\\Delta E_{\\lambda})}V=\\frac{1}{E_{\\lambda}-E-iV}V \\tag{33}\\] Also define \\((\\Gamma_{\\lambda c}/2)^{1/2}\\equiv\\gamma_{\\lambda c}\\sqrt{k_{c}}\\), \\(c=1,2,\\cdots,n\\). This defines the sign of the square root on the lhs to be the sign of \\(\\gamma_{\\lambda c}\\), and allows the convenience of expressing things in terms of the \\(\\Gamma_{\\lambda c}\\)'s and their square roots, rather than the \\(\\gamma_{\\lambda c}\\)'s themselves. Thus we arrive at \\[S_{cc^{\\prime}}=e^{-ik_{c}a}\\left(\\delta_{cc^{\\prime}}+\\frac{i\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\right)e^{-ik_{c^{\\prime}}a} \\tag{34}\\] where \\(E_{\\lambda}^{(r)}\\equiv E_{\\lambda}+\\Delta E_{\\lambda}\\), for the \\(n\\times n\\) open-open unitary block of \\({\\bf S}\\) in the neighbourhood of a single isolated resonance, after neglecting the contribution of the background matrix \\(M\\). For us the essential point is that \\[\\Gamma_{\\lambda c}=2\\,k_{c}(E)\\,\\gamma_{\\lambda c}^{2}, \\tag{35}\\] i.e. that the partial widths \\(\\Gamma_{\\lambda c}\\) depend on the energy \\(E\\) through the kinematic factor \\(k_{c}(E)\\). Mostly this energy dependence is small and irrelevant, except where the \\(k_{c}\\)'s, and hence the \\(\\Gamma_{\\lambda c}\\)'s, vary near zero.
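The kinematic statement of Eq. (35) is worth seeing in numbers. The following sketch (with invented resonance parameters, in units where \\(\\hbar^{2}/2m=1\\)) evaluates the Breit-Wigner inelastic probability implied by Eq. (34) as the entrance channel approaches threshold; the ratio \\(|S_{ce}|^{2}/\\sqrt{\\epsilon}\\) settles to a constant, which is the threshold law derived below.

```python
import numpy as np

# Threshold behavior of Eq. (34), using Gamma_e = 2 k_e gamma_e^2 (Eq. (35)).
# All width/energy parameters are illustrative only.
gamma_e2, Gamma_rest, E_res = 0.3, 0.05, 1.0

def S_ce_sq(eps):
    k_e = np.sqrt(eps)                  # entrance wavenumber
    Gamma_e = 2.0 * k_e * gamma_e2      # entrance partial width, -> 0 at threshold
    Gamma = Gamma_e + Gamma_rest        # total width
    # |S_ce|^2 with the exit channels lumped into Gamma_rest:
    return Gamma_e * Gamma_rest / ((E_res - eps)**2 + (Gamma/2)**2)

for eps in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(eps, S_ce_sq(eps) / np.sqrt(eps))   # ratio tends to a constant
```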
The \\(\\Gamma_{\\lambda c}\\) are the partial widths of the open channels near threshold. Hence the inelastic probability \\(|S_{ce}|^{2}\\) (\\(c\\neq e\\)) behaves like \\(k_{e}\\sim\\sqrt{\\epsilon}\\) when the entrance channel is at threshold. Including the background term (\\(M\\neq 0\\)) does not change this. To see this, we may expand the inverse in Eq. (22) to first order in \\(M\\), which gives an additional contribution of \\[e^{-ika}\\left(\\frac{2i}{1-\\frac{iV}{E_{\\lambda}-E}}M+\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}\\,iM\\,\\frac{1}{1-\\frac{iV}{E_{\\lambda}-E}}\\,\\frac{2iV}{E_{\\lambda}-E}\\right)e^{-ika} \\tag{36}\\] to the \\({\\bf S}\\)-matrix. Now, both \\(M\\) and \\(V\\) have a factor of \\(\\sqrt{k_{c}}\\) multiplying their \\(c\\)th columns (and rows) from their definitions, and so a matrix element \\(b_{cc^{\\prime}}\\) of the matrix in parentheses in Eq. (36) will have a \\(\\sqrt{k_{c}}\\) and \\(\\sqrt{k_{c^{\\prime}}}\\) dependence. An inelastic element of \\({\\bf S}\\) (\\(c\\neq c^{\\prime}\\)) now takes the form \\[S_{cc^{\\prime}}=e^{-ik_{c}a}\\left(b_{cc^{\\prime}}+\\frac{i\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\right)e^{-ik_{c^{\\prime}}a}. \\tag{37}\\] As mentioned, our interest is in the case when the entrance channel is at threshold, so that this dependence is \\(\\sqrt{k_{e}}\\), making the inelastic probability \\(|S_{ce}|^{2}\\) still behave as \\(k_{e}\\sim\\sqrt{\\epsilon}\\).

### Overlapping Resonances

Here we require the form of the \\({\\bf S}\\) matrix near an energy \\(E\\) where many of the quasi-bound states may be simultaneously excited, i.e. the resonances overlap. Again neglecting background for the moment, the \\({\\bf S}\\) matrix is simply taken to be a sum over the various resonances, \\[{\\bf S}=1-\\sum_{\\lambda}\\frac{iA_{\\lambda}}{E-E_{\\lambda}^{(r)}+i\\Gamma_{\\lambda}/2} \\tag{38}\\] where \\(A_{\\lambda}\\) is an \\(n\\times n\\) rank-one matrix with \\(cc^{\\prime}\\)th component \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\). There is no entirely direct justification of this form, but one can see that there is much that it gets correct. The \\(A_{\\lambda}\\) are symmetric, hence \\({\\bf S}\\) is symmetric. Obviously it has the poles in the right places, allowing the existence of decaying states with a purely outgoing wave at the resonant energies. A crucial additional assumption, which also makes \\({\\bf S}\\) approximately unitary, is that the signs of the \\(\\Gamma_{\\lambda c}^{1/2}\\) are random and uncorrelated both in the index \\(\\lambda\\) and in \\(c\\), regardless of how close the energy intervals involved may be. One simple consequence is that we approximately have \\[A_{\\lambda}A_{\\lambda^{\\prime}}=\\delta_{\\lambda\\lambda^{\\prime}}\\Gamma_{\\lambda}A_{\\lambda} \\tag{39}\\] in the sense that the lhs is negligible for \\(\\lambda\\neq\\lambda^{\\prime}\\) in comparison to its value for \\(\\lambda=\\lambda^{\\prime}\\). With Eq. (39) it is easy to verify the approximate unitarity of \\({\\bf S}\\). We investigate now the onset of the overlapping regime as \\(E\\) increases. \\(D(E)\\), the level spacing of the resonant \\(E_{\\lambda}^{(r)}\\), is a rapidly decreasing function of its argument.
On the other hand, \\(\\Gamma_{\\lambda}=\\Gamma_{\\lambda 1}+\\Gamma_{\\lambda 2}+\\cdots+\\Gamma_{\\lambda n}\\), and since more channels are open at higher energy, \\(\\Gamma_{\\lambda}\\) increases with the energy of the resonance. The widths must therefore eventually overlap, and \\(\\Gamma_{\\lambda}\\gg D\\left(E_{\\lambda}^{(r)}\\right)\\) for the larger members of the sequence of \\(E_{\\lambda}^{(r)}\\)'s. In this regard there is a useful estimate due to Bohr and Wheeler [15]: for \\(n\\) large, \\[\\frac{\\Gamma_{\\lambda}}{D(E_{\\lambda}^{(r)})}\\simeq n. \\tag{40}\\] Appendix A derives this using a phase-space argument. Here we point out that it is entirely consistent with the assumption of random signs, indeed requiring it to be true. Take for example a typical inelastic amplitude \\[{\\bf S}_{cc^{\\prime}}=-i\\sum_{\\lambda}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\ \\ \\ \\ \\ (c\\neq c^{\\prime}) \\tag{41}\\] First let us note that the \\(\\Gamma_{\\lambda}\\), being sums of many random variables (the partial widths \\(\\Gamma_{\\lambda c}\\)), do not fluctuate much. Let \\(\\Gamma\\) denote their typical value over the \\(n\\) overlapping resonances. Also, since \\(\\Gamma=nD\\), it follows that the typical size of a partial width \\(\\Gamma_{\\lambda c}\\) is \\(D\\). Therefore the typical size of the product \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\) is \\(D\\), but these random variables fluctuate over the index \\(\\lambda\\), and moreover the sign is random. Thus for energies in the overlapping domain, \\(S_{cc^{\\prime}}\\) is a sum of \\(n\\) complex numbers, each of typical size \\(D/\\Gamma=1/n\\) but random in sign. This makes for a sum of order \\(1/\\sqrt{n}\\). Clearly this is as required to make the \\(n\\times n\\) matrix \\({\\bf S}\\) unitary. Note that the above argument fails (as it should) for \\(c=c^{\\prime}\\), because then the signs of \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c}^{1/2}=\\Gamma_{\\lambda c}>0\\) are of course not random. Unlike the case of the isolated resonance, the S-matrix elements here are smoothly varying in \\(E\\). Addition of a background term \\(B_{cc^{\\prime}}\\), \\[{\\bf S}_{cc^{\\prime}}=B_{cc^{\\prime}}-i\\sum_{\\lambda}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}, \\tag{42}\\] just shifts this smooth variation by a constant. If \\(B_{cc^{\\prime}}\\) is also thought of as arising from a sum over the individual backgrounds, then for the same reasons as discussed at the end of the preceding section, \\(|B_{ce}|^{2}\\sim k_{e}\\sim\\sqrt{\\epsilon}\\) for an entrance channel near threshold. For simplicity we will continue to take \\(B_{cc^{\\prime}}\\) to be \\(0\\), and look at the case with background in the appendix.

## VI Q-Matrix and sticking

From the viewpoint of scattering theory, the sticking of the incident particle to the target is just a long-lived resonance. It is natural then to investigate the time delay for the collision. Smith [14] introduced the collision lifetime or \\({\\bf Q}\\)-matrix \\[{\\bf Q}\\equiv i\\hbar{\\bf S}\\frac{\\partial{\\bf S}^{\\dagger}}{\\partial E} \\tag{43}\\] which encapsulates such information. We review some of the relevant properties of \\({\\bf Q}\\).
The rhs of Eq. (43) involves the 'open-open' upper-left block of \\({\\bf S}\\), so that \\({\\bf Q}\\) is also an \\(n\\times n\\) energy-dependent matrix, having dimensions of time. For one-dimensional elastic potential scattering, \\({\\bf S}=e^{i\\phi(E)}\\) and \\({\\bf Q}\\) reduces to the familiar time delay \\(\\hbar\\frac{\\partial\\phi(E)}{\\partial E}\\). If \\(\\vec{v}\\) is a vector whose entries are the coefficients of the incoming wave in each channel, then \\(\\vec{v}^{\\rm tr}{\\bf Q}(E)\\vec{v}\\) is the average delay time experienced by such an incoming wave. Because physically the particle is incident on only one channel, \\(\\vec{v}\\) consists of all \\(0\\)'s except for a \\(1\\) in the \\(e\\)th slot, so that the relevant quantity is just the matrix element \\({\\bf Q}_{ee}(E)\\). Smith shows that this delay time is the surplus probability of being in a neighborhood of the target (measured relative to the probability if no target were present) divided by the flux arriving in channel \\(e\\). This matches our intuition that when the delay time is long, there is a higher probability that the particle will be found near the target. Furthermore, as a Hermitian matrix, \\({\\bf Q}(E)\\) can be resolved into its eigenstates \\(\\vec{v}^{(1)}\\cdots\\vec{v}^{(n)}\\) with eigenvalues \\(q_{1}\\cdots q_{n}\\). The components of \\(\\vec{v}^{(1)}\\) are the incoming coefficients of a quasi-bound state with lifetime \\(q_{1}\\), and so on. Then \\[\\vec{v}^{\\rm tr}{\\bf Q}(E)\\vec{v}=\\sum_{j=1}^{n}q_{j}|\\vec{v}^{(j)}\\cdot\\vec{v}|^{2}. \\tag{44}\\] As can be seen from this expression, the average time delay results, in general, from the excitation of multiple quasi-stuck states, each with its lifetime \\(q_{j}\\) and probability of formation \\(|\\vec{v}^{(j)}\\cdot\\vec{v}|^{2}\\). However, we will find that using our resonant approximation to the \\({\\bf S}\\) matrix near a resonant energy \\(E_{\\lambda}^{(r)}\\), the time delay consists of only one term from the sum on the rhs of Eq. (44), all the other eigenvalues being identically \\(0\\). Using Eq. (43), \\[{\\bf Q}(E)=i\\hbar\\left(\\sum_{\\lambda^{\\prime}}\\frac{-iA_{\\lambda^{\\prime}}}{\\left[E-E_{\\lambda^{\\prime}}^{(r)}-i\\Gamma_{\\lambda^{\\prime}}/2\\right]^{2}}-\\sum_{\\lambda\\lambda^{\\prime}}\\frac{A_{\\lambda}A_{\\lambda^{\\prime}}}{\\left[E-E_{\\lambda}^{(r)}+i\\Gamma_{\\lambda}/2\\right]\\left[E-E_{\\lambda^{\\prime}}^{(r)}-i\\Gamma_{\\lambda^{\\prime}}/2\\right]^{2}}\\right) \\tag{45}\\] which using Eq. (39) simplifies to \\[=\\sum_{\\lambda}\\frac{\\hbar}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}}A_{\\lambda}, \\tag{46}\\] a remarkably simple answer. We need \\(Q_{ee}(E)\\), where \\(e\\) is the entrance channel: \\[Q_{ee}(E)=\\sum_{\\lambda}\\frac{\\hbar\\Gamma_{\\lambda e}}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}} \\tag{47}\\] \\[=\\sum_{\\lambda}\\left(\\frac{\\hbar\\Gamma_{\\lambda}}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}}\\times\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}}\\right) \\tag{48}\\] where the second line has the interpretation (for each term) of the lifetime of the mode multiplied by the probability of its formation. Note how for each resonance \\(E_{\\lambda}^{(r)}\\) there is only one term, corresponding to the decomposition of Eq. (44). The actual measured lifetime is \\(Q_{ee}(E)\\) averaged over the energy spectrum \\(|g(E)|^{2}\\) of the collision process.
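As a consistency check on these formulas (again with invented parameters, and \\(\\hbar=1\\)), one can build \\({\\bf S}(E)\\) from the resonant sum of Eq. (38), differentiate numerically as in Eq. (43), and compare the resulting \\(Q_{ee}\\) with the closed form of Eq. (47):

```python
import numpy as np

# Q_ee from Eq. (43) (finite differences) vs. the closed form Eq. (47); hbar = 1.
rng = np.random.default_rng(2)
n, n_res = 5, 30
E_res = np.sort(rng.uniform(0.0, 3.0, n_res))
amp = 0.02 * rng.standard_normal((n_res, n))    # Gamma_{lam c}^{1/2}, random signs
Gam = (amp**2).sum(axis=1)                      # total widths Gamma_lam

def S(E):
    den = E - E_res + 1j*Gam/2                  # Eq. (38) denominators
    return np.eye(n) - 1j*np.einsum('l,lc,ld->cd', 1.0/den, amp, amp)

E, dE, e = 1.5, 1e-7, 0
dSdag = (S(E + dE).conj().T - S(E - dE).conj().T) / (2*dE)
Q_fd = (1j * S(E) @ dSdag)[e, e].real           # Eq. (43)
Q_cf = np.sum(amp[:, e]**2 / ((E - E_res)**2 + (Gam/2)**2))   # Eq. (47)
print(Q_fd, Q_cf)   # close; they differ only by the cross terms dropped via Eq. (39)
```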
### Energy averaging over spectrum

With the target in state \\(\\Omega_{e}(u)\\), where \\(c=e\\) is the entrance channel, the energy of the target is fixed, and the time-dependent solution will look like \\[\\psi(x,u,t)=\\int dE\\,g(E)\\,e^{-iEt/\\hbar}\\sum_{c=1}^{\\infty}\\left(\\frac{e^{-ik_{e}(E)x}}{\\sqrt{k_{e}(E)}}\\delta_{ce}-\\frac{e^{ik_{c}(E)x}}{\\sqrt{k_{c}(E)}}S(E)_{ce}\\right)\\Omega_{c}(u). \\tag{49}\\] Recall that \\(E\\) is the total energy of the system. We are interested in the threshold situation where the incident kinetic energy of the incoming particle \\(\\epsilon\\to 0\\). This can be arranged if \\(g(E)\\) is peaked at \\(E_{0}\\) with a spread \\(\\Delta E\\) such that (i) \\(E_{0}\\) is barely above \\(E_{e}^{\\rm target}\\), and (ii) \\(\\Delta E=\\Delta\\epsilon\\) is some small fraction of \\(\\epsilon\\), the mean energy of the incoming particle. The second condition ensures that we may speak unambiguously of the incoming particle's mean energy. So \\[\\langle Q_{ee}(E)\\rangle\\equiv\\int dE\\,|g(E)|^{2}\\,Q_{ee}(E) \\tag{50}\\] \\[\\simeq\\frac{1}{\\Delta E}\\int\\limits_{\\Delta E}dE\\,Q_{ee}(E) \\tag{51}\\] where \\(\\langle\\ \\rangle\\) denotes the average over the \\(\\Delta E\\) interval. Now \\(Q_{ee}(E)\\) is just a sum of Lorentzians centred at the \\(E_{\\lambda}^{(r)}\\)'s with widths \\(\\Gamma_{\\lambda}\\), and Eq. (51) is just a measure of their mean value over the \\(\\Delta E\\) interval. So long as the \\(\\Delta E\\) interval over which we are averaging is broad enough to straddle many of these Lorentzians, the mean height is just \\[\\frac{1}{\\Delta E}\\times\\rho(E)\\Delta E\\times\\frac{\\hbar\\pi\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}} \\tag{52}\\] where the second factor is the number of Lorentzians in the \\(\\Delta E\\) interval and the third factor is the area under the '\\(\\lambda\\)th' Lorentzian. This is true regardless of whether or not they overlap. It will be convenient to write \\(\\Gamma_{\\lambda}\\) as \\[\\Gamma_{\\lambda}=n\\times 2\\bar{k}_{\\lambda}\\,{\\rm var}(\\gamma_{\\lambda}) \\tag{53}\\] where \\({\\rm var}(\\gamma_{\\lambda})\\) is the variance of the set of \\(\\gamma_{\\lambda c}\\)'s over the \\(n\\) open channels, and \\(\\bar{k}_{\\lambda}\\) is a mean or effective wavenumber over the open channels, which for a particular resonance \\(\\lambda\\) we take to be defined by Eq. (53) itself. With \\(\\Gamma\\equiv\\langle\\Gamma_{\\lambda}\\rangle\\) and \\(\\bar{k}\\equiv\\langle\\bar{k}_{\\lambda}\\rangle\\), Eq. (52) simplifies to \\[\\langle Q_{ee}(E)\\rangle\\simeq\\hbar\\frac{1}{D}\\frac{k_{e}\\langle\\gamma_{\\lambda e}^{2}\\rangle}{n\\bar{k}\\langle{\\rm var}(\\gamma_{\\lambda})\\rangle} \\tag{54}\\] \\[\\simeq\\frac{\\hbar}{\\Gamma}\\frac{k_{e}}{\\bar{k}} \\tag{55}\\] which tends to \\(0\\) as \\(k_{e}\\sim\\sqrt{\\epsilon}\\). The form of Eq. (55), and all the steps leading up to it, remain valid whether the Lorentzians overlap or not, as long as the \\(\\Delta E=\\Delta\\epsilon\\) interval over which we are averaging includes many of them.

### On an isolated resonance

If the target is cold enough that the resonances are isolated, then as the incident particle's energy \\(\\epsilon\\to 0\\), adhering to the condition \\(\\Delta\\epsilon<\\epsilon\\) will eventually result in \\(\\Delta\\epsilon\\) becoming narrower than the resonance widths.
It then becomes possible for \\(\\Delta\\epsilon\\) to be centered right around a single isolated resonance at \\(E_{\\lambda}^{(r)}\\). In this case \\(\\langle Q_{ee}(E)\\rangle\\) is found simply by putting \\(E=E_{\\lambda}^{(r)}\\), because the spectrum \\(|g(E)|^{2}\\) is well approximated by \\(\\delta(E-E_{\\lambda}^{(r)})\\). So \\[\\langle Q_{ee}(E)\\rangle=\\frac{\\hbar\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}^{2}}=\\frac{\\hbar}{\\Gamma_{\\lambda}}\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}}=\\frac{\\hbar}{\\Gamma_{\\lambda}}\\frac{k_{e}}{n\\bar{k}}. \\tag{56}\\] Even in this case there is the \\(\\sqrt{\\epsilon}\\) behavior as \\(\\epsilon\\to 0\\), and there is no sticking. In the extreme case that there are no other open channels at all (\\(n=1\\), so \\(e=1\\) and \\(\\Gamma_{\\lambda}=\\Gamma_{\\lambda e}\\)), we get \\(\\langle Q_{ee}(E)\\rangle\\simeq\\frac{\\hbar\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}^{2}}=\\frac{\\hbar}{\\Gamma_{\\lambda e}}\\), which diverges as \\(k_{e}\\to 0\\) since \\(\\Gamma_{\\lambda e}\\propto k_{e}\\); in this case it is possible for the particle to stick. This is an exception to all the cases above, but it is experimentally not so relevant, because we may always expect to find some exothermic channels open for a target with many degrees of freedom.

## VII Inelastic cross sections and sticking

Another physically motivated measure of the sticking probability may be obtained by studying the total inelastic cross-section of the collision. The idea is that any long-lived "sticking" is overwhelmingly likely to result in an inelastic collision process, i.e. the scattering particle will leave in a different channel than the one it entered with. Using the original Wigner approach it is possible to show that for our case, where we have only one scattering degree of freedom, the inelastic probability for both exothermic and endothermic collisions vanishes like \\(k_{e}\\). The only possible exception to this is a measure-zero chance of a resonance exactly at the threshold energy \\(E_{e}^{\\rm target}\\). In the event that there is a resonance \\(E_{\\lambda}^{(r)}\\) close to but above this threshold energy, it is only necessary that \\(E\\) be below \\(E_{\\lambda}^{(r)}\\) (by an energy of at least \\(\\Delta E\\), the spread in energy) in order to observe the usual Wigner threshold behavior \\[P_{\\rm inelastic}\\to 0\\ \\ {\\rm like}\\ \\ k_{e}\\propto\\sqrt{\\epsilon} \\tag{57}\\] for the inelastic probability. However, our problem is unusual in the sense that, because of the large number of degrees of freedom of the target, we will always find resonances between \\(E_{e}^{\\rm target}\\) and \\(E\\), no matter how small \\(E-E_{e}^{\\rm target}=\\epsilon\\) is. Thus the Wigner regime is not accessible.
Still, the surprise is that a simple computation reveals that the same behavior holds for large \\(n\\): \\[P_{\\rm inelastic}(E)=\\sum_{c\\neq e}P_{c\\gets e}(E) \\tag{58}\\] \\[=\\sum_{c\\neq e}|S_{ce}(E)|^{2} \\tag{59}\\] \\[=\\sum_{c\\neq e}\\sum_{\\lambda}\\sum_{\\lambda^{\\prime}}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda e}^{1/2}}{E-E_{\\lambda}^{(r)}-i\\Gamma_{\\lambda}/2}\\,\\frac{\\Gamma_{\\lambda^{\\prime}c}^{1/2}\\Gamma_{\\lambda^{\\prime}e}^{1/2}}{E-E_{\\lambda^{\\prime}}^{(r)}+i\\Gamma_{\\lambda^{\\prime}}/2} \\tag{60}\\] \\[\\Rightarrow P_{\\rm inelastic}(E)=\\sum_{\\lambda}\\frac{\\Gamma_{\\lambda}}{(E-E_{\\lambda}^{(r)})^{2}+(\\Gamma_{\\lambda}/2)^{2}}\\ \\Gamma_{\\lambda e} \\tag{61}\\] where we used the random-sign property of the \\(\\Gamma_{\\lambda c}^{1/2}\\)'s and the understanding that \\(\\sum\\limits_{c\\neq e}\\Gamma_{\\lambda c}\\simeq\\sum\\limits_{{\\rm all}\\ c}\\Gamma_{\\lambda c}=\\Gamma_{\\lambda}\\); since the sum \\(\\sum\\limits_{c\\neq e}\\) is over the \\(n\\gg 1\\) open channels, omission of a single term can hardly matter. Apart from the factor \\(\\hbar/\\Gamma_{\\lambda}\\), the rhs of the above equation is identical to the expression for \\(Q_{ee}(E)\\) in Eq. (48). Averaging \\(P_{\\rm inelastic}(E)\\) over many resonances \\(E_{\\lambda}^{(r)}\\) (overlapping or not), we may use the same algebraic simplifications as before to show \\[\\langle P_{\\rm inelastic}\\rangle=\\frac{k_{e}}{\\bar{k}} \\tag{62}\\] As \\(k_{e}\\) tends to \\(0\\), this gives the \\(\\sqrt{\\epsilon}\\) Wigner behavior, showing that there is no sticking. The above argument fails when there is only one open channel: there are no inelastic channels to speak of. In this case, if the energy \\(E\\) coincides with a resonant energy \\(E_{\\lambda}^{(r)}\\), we will have the exceptional case of sticking, as discussed at the end of the previous section. But as pointed out there, this is primarily of theoretical interest only.

## VIII Channel Decoherence

The only case in which we stick is seen to be when we are sitting right on top of a resonance, with the incoming energy so well resolved that we are completely within the resonance width, and there are no exothermic channels open. Having no such channels open amounts to an infinitesimally low energy for a large target. Otherwise, the sticking probability tends to \\(0\\) as \\(\\sqrt{\\epsilon}\\) in every case.

### Time dependent picture

From the time-independent point of view, the physical reason for the absence of low-energy sticking is contained in the factor \\(\\frac{\\Gamma_{\\lambda e}}{\\Gamma_{\\lambda}}\\) of Eq. (48). This is the formation probability for the compound state. We will explain physically why it is small for \\(n\\gg 1\\). The resonance state is a many-body entangled state. If we imagine the decay of this compound state (already prepared by some other means, say), each open channel carries away some fraction of the outgoing flux, with no preference for any one particular channel. Running this whole process in reverse, it becomes evident that the optimum way to _form_ the compound state is to have each channel carry an incoming flux with exactly the right amplitude and phase. This, however, corresponds to an entangled initial state. With all the incoming flux instead constrained to be in only one channel, it becomes clear that we are not exciting the resonance in the optimal way and the buildup of amplitude inside is not so large; i.e., the compound state has a small probability of forming.
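The threshold law of Eq. (62), and the smallness of the formation probability \\(\\Gamma_{\\lambda e}/\\Gamma_{\\lambda}\\) behind it, can also be checked by brute force. The Monte Carlo sketch below (all scales invented) populates a dense spectrum of resonances with Gaussian reduced-width amplitudes, evaluates Eq. (61), and averages over an energy window straddling many levels; the average inelastic probability indeed scales as \\(k_{e}\\).

```python
import numpy as np

# Monte Carlo check of Eq. (62): <P_inelastic> ~ k_e/k_bar. Invented parameters.
rng = np.random.default_rng(1)
n, n_res, D = 200, 4000, 1e-3              # open channels, levels, mean spacing
E_res = np.cumsum(rng.exponential(D, n_res))
k_open = rng.uniform(0.5, 1.5, n)          # open-channel wavenumbers, k_bar ~ 1
g2 = 1e-4                                  # var(gamma) of reduced-width amplitudes
Gc = 2.0 * k_open * g2 * rng.standard_normal((n_res, n))**2   # Eq. (35)
ze = g2 * rng.standard_normal(n_res)**2    # entrance-channel gamma^2 per level
Gtot = Gc.sum(axis=1)                      # total widths (entrance part negligible)

def avg_P(k_e, n_samples=400):
    Ge = 2.0 * k_e * ze                    # entrance partial widths, Eq. (35)
    Es = E_res[n_res//2] + np.linspace(0.0, 100*D, n_samples)
    # Eq. (61), averaged over an energy window straddling many resonances:
    P = [np.sum(Gtot*Ge/((E - E_res)**2 + (Gtot/2)**2)) for E in Es]
    return np.mean(P)

for k_e in [1e-2, 1e-3, 1e-4]:
    print(k_e, avg_P(k_e)/k_e)             # roughly constant, of order 1/k_bar
```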
The time-dependent view is even more revealing. Imagine a wave packet incident on the system. For a single-open-channel Feshbach resonance, the build-up of amplitude in the interior region can be decomposed as follows. As the leading edge of the wavepacket approaches the region of attraction, most of it is turned away by the quantum reflection phenomenon. (It is a useful model to think of the quantum reflection as due to a barrier located some distance away from the interaction region.) The wavefunction in the interaction region constructively interferes with new amplitude entering the region. At the same time, the amplitude leaving the region is out of phase with the reflected wave, cancelling it and assisting more amplitude to enter. Now suppose many channels are open. All the flux entering the interior must of course return, but it does so fragmented into all the other open channels. Only the fraction that makes it back into the entrance channel has the opportunity to interfere (constructively) with the rest of the entering wavepacket. The constructive interference is no longer efficient, and is in fact almost negligible for \\(n\\gg 1\\), thereby ruining the delicate process that was responsible for the buildup of the wave function inside. The orthogonality of the other channels prevents interference in the scattering dimension. If we trace over the target coordinates, leaving only the scattering coordinate, most of the coherence and the constructive interference is lost, and no resonant buildup occurs. Therefore, one way to understand the non-sticking is to say that decoherence is to blame.

### Fabry-Perot and Measurement Analogy

Suppose we have a resonant quantum mechanical Fabry-Perot cavity, in which the particle has a high probability of being found between the two reflecting barriers. Now, during the time it takes for the probability to build up in the interior, suppose we continually measure the position of the particle inside. In doing so we decohere the wave function, and in fact never find the particle there at all. Alternatively, imagine simply tilting one barrier (mirror) to make it non-parallel to the first, redirecting the flux into an orthogonal direction and again spoiling the resonance. Measurement entangles other (orthogonal) degrees of freedom with the one of interest, resulting in flux being effectively redirected into orthogonal states. Thus the states of the target (if potentially excitable) are in effect continually monitoring (measuring) to see if the incoming particle has made it inside, ironically then preventing it from ever doing so. The buildup process of constructive interference in the interaction region, described in the preceding paragraph, is slower than linear in \\(t\\). Therefore, the constant measurement of the particle's presence (and the resultant prevention of sticking) is an example of the Zeno "paradox" in measurement theory.

## IX Conclusion

We have presented a general approach to the low-energy sticking problem, in the form of R-matrix theory. This theory is well suited to the task, since it highlights the essential features of multichannel scattering at low incident translational energy. We did not need to make harmonic or other approximating assumptions about the solid target, which is characterized by its long-range interaction with the incoming particle and its density of states. "Warm" surfaces are included in the formalism, and do not change the non-sticking conclusion.
Several supporting arguments for the non-sticking conclusion were given. Perhaps most valuable is the physical decoherence picture associated with the conclusion that there is no sticking in the zero translational energy limit. Reviewing the observations leading up to the non-sticking conclusion: we start with the near-100% sticking in the zero translational energy limit classically (sticking probability 1). We then invoke the phenomenon of quantum reflection (Fig. 1), which keeps the incident particle far from the surface (sticking probability 0). Third, we note that quantum reflection can be overcome by resonances (Fig. 2), and since resonances are ubiquitous in a many-body target, being the Feshbach states by which a particle could stick to the surface, perhaps sticking approaches 1 after all. Fourth, we suggest that decoherence (from the perspective of the incoming channel, with elastic scattering defined as coherent) ruins the resonance effect, reinstating quantum reflection as the determining effect. Finally, then, there is no sticking, and the short answer as to why is: quantum reflection and many-channel decoherence. The ultrashort explanation is simply quantum reflection, but this is dangerous and non-rigorous, as we have tried to show. All this does not tell us much about how sticking turns on as the incident translational energy is raised. This is the subject of the following paper, where a WKB analysis proves very useful; quantum reflection is a physical phenomenon linked directly to the failure of the WKB approximation.

###### Acknowledgements.

This work was supported by the National Science Foundation through a grant for the Institute for Theoretical Atomic and Molecular Physics at Harvard University and Smithsonian Astrophysical Observatory: National Science Foundation Award Number CHE-0073544.

## Appendix A \\(\\Gamma\\simeq nD\\)

With the large number of degrees of freedom involved, and assuming thorough phase-space mixing associated with the resonance, we may reasonably describe the compound-state wavefunction by a classical ensemble of points \\((x,p_{x},u,p_{u})\\) in the combined phase space of the joint system, given by the normalized distribution \\[\\frac{1}{\\rho_{C}(E)}\\delta(E-H(x,p_{x},u,p_{u})). \\tag{A1}\\] It is understood in the above that the system is restricted to the region \\(x<a\\). This makes all accessible states of energy \\(E\\) with \\(x<a\\) equally likely. Then the rate of escape \\(\\Gamma/\\hbar\\) through the hypersurface \\(x=a\\) of the members of this ensemble is \\[\\frac{\\Gamma}{\\hbar}=\\frac{1}{\\rho_{C}(E)}\\int\\limits_{x=a}du\\,dp_{u}\\int\\limits_{p_{x}\\in[0,\\infty)}dp_{x}\\,\\frac{p_{x}}{m}\\,\\delta(E-H(x,p_{x},u,p_{u})). \\tag{A2}\\] \\(p_{x}/m\\) is just the velocity in the \\(\\hat{x}\\) direction of a phase-space point at \\(x=a\\). At \\(x=a\\) we have supposed there is no interaction, so the Hamiltonian in Eq. (A2) separates. Therefore \\[\\frac{\\Gamma}{\\hbar}=\\frac{1}{\\rho_{C}(E)}\\int du\\,dp_{u}\\int\\limits_{0}^{\\infty}d\\left(\\frac{p_{x}^{2}}{2m}\\right)\\delta\\left(E-\\left(\\frac{p_{x}^{2}}{2m}+H^{\\rm target}(u,p_{u})\\right)\\right) \\tag{A3}\\] \\[=\\frac{1}{\\rho_{C}}\\int\\limits_{H^{\\rm target}(u,p_{u})<E}du\\,dp_{u} \\tag{A4}\\] \\[=\\frac{1}{\\rho_{C}}\\Omega_{C}\\simeq\\frac{1}{2\\pi\\hbar\\rho_{Q}}\\Omega_{Q}=\\frac{1}{2\\pi\\hbar}nD. \\tag{A5}\\] Therefore \\(\\Gamma/D\\simeq n\\) (the factor of \\(2\\pi\\) is immaterial at this level of estimate).
\\(\\rho_{Q}\\) (\\(\\rho_{C}\\)) is the quantum (classical) density of states (phase-space volume) of the joint system at energy \\(E\\). \\(\\Omega_{Q}\\) (\\(\\Omega_{C}\\)) is the quantum (classical) total number of states (total phase-space volume) of the target alone below energy \\(E\\). We have used the correspondence between the classical and quantum densities of states: \\(1/\\rho_{Q}\\) is identified with \\(D\\), and the number of states of the target having energy less than \\(E\\) is just \\(n\\), the number of open channels.

## Appendix B Inelastic probability with background

We show here that the inelastic probabilities remain essentially unaffected in magnitude by the presence of a background term in the S-matrix. In the isolated case, the addition of \\(b_{cc^{\\prime}}\\) to an inelastic element \\(S_{cc^{\\prime}}\\) simply changes the Lorentzian profile of \\(|S_{cc^{\\prime}}|^{2}\\). In the more important overlapping case, the energy variation of \\(S_{cc^{\\prime}}\\) is smooth in any case without background, and \\[|{\\bf S}_{cc^{\\prime}}|^{2}=\\left|B_{cc^{\\prime}}-i\\sum\\limits_{\\lambda}\\frac{\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}}{E_{\\lambda}^{(r)}-E-i\\Gamma_{\\lambda}/2}\\right|^{2} \\tag{B1}\\] \\[=|B_{cc^{\\prime}}|^{2}+\\sum\\limits_{\\lambda}\\frac{\\Gamma_{\\lambda c}\\Gamma_{\\lambda c^{\\prime}}}{\\left(E_{\\lambda}^{(r)}-E\\right)^{2}+\\Gamma_{\\lambda}^{2}/4} \\tag{B2}\\] where we have used the random-sign property of the products \\(\\Gamma_{\\lambda c}^{1/2}\\Gamma_{\\lambda c^{\\prime}}^{1/2}\\), first to neglect the cross term in comparison with the last term, and again to simplify the double sum to a single one. Summing over all the inelastic channels then leads to the same result as Eq. (61), with an added term \\(\\sum\\limits_{c\\neq e}|B_{ce}|^{2}\\), which is itself proportional to \\(k_{e}\\) as discussed at the end of Section V.2.

## References

* [1] J. E. Lennard-Jones _et al._, _Proc. R. Soc. London, Ser. A_ **156**, 6 (1936); _ibid._ **156**, 36 (1936).
* [2] T. W. Hijmans, J. T. M. Walraven, and G. V. Shlyapnikov, _Phys. Rev. B_ **45**, 2561 (1992).
* [3] W. Brenig, _Z. Phys. B_ **36**, 227 (1980).
* [4] D. P. Clougherty and W. Kohn, _Phys. Rev. B_ **46**, 4921 (1992).
* [5] E. R. Bittner, _J. Chem. Phys._ **100**, 5314 (1993).
* [6] P. S. Julienne and F. H. Mies, _J. Opt. Soc. Am. B_ **6**, 2257 (1989).
* [7] P. S. Julienne, A. M. Smith, and K. Burnett, _Adv. At. Mol. Opt. Phys._ **30**, 141 (1992).
* [8] L. D. Landau and E. M. Lifshitz, _Quantum Mechanics (Non-relativistic Theory)_ (Pergamon Press, Oxford, 1981).
* [9] C. J. Joachain, _Quantum Collision Theory_ (North-Holland, Amsterdam, 1975).
* [10] G. F. Gribakin and V. V. Flambaum, _Phys. Rev. A_ **48**, 546 (1993).
* [11] R. Côté, E. J. Heller, and A. Dalgarno, "Quantum suppression of cold atom collisions," _Phys. Rev. A_ **53**, 234 (1996).
* [12] I. A. Yu, J. Doyle, J. C. Sandberg, C. L. Cesar, D. Kleppner, and T. J. Greytak, _Phys. Rev. Lett._ **71**, 1589 (1993).
* [13] J. Doyle, J. C. Sandberg, I. A. Yu, C. L. Cesar, D. Kleppner, and T. J. Greytak, _Phys. Rev. Lett._ **67**, 603 (1991); C. Carraro and M. W. Cole, _Phys. Rev. B_ **45**, 12931 (1992); T. W. Hijmans, J. T. M. Walraven, and G. V. Shlyapnikov, _Phys. Rev. B_ **45**, 2561 (1992).
* [14] F. T. Smith, _Phys. Rev._ **118**, 349 (1960).
* [15] N. Bohr and J. Wheeler, _Phys. Rev._ **56**, 416 (1939); see p. 426, Sec. III.
We provide the theoretical basis for understanding the phenomenon in which an ultracold atom incident on a possibly warm target will not stick, even in the large-\\(n\\) limit, where \\(n\\) is the number of internal degrees of freedom of the target. Our treatment is non-perturbative: the full many-body problem is viewed as a scattering event purely within the context of scattering theory. The question of sticking is then simply and naturally identified with the formation of a long-lived resonance. One crucial physical insight that emerges is that the many internal degrees of freedom serve to decohere the incident one-body wavefunction, thus upsetting the delicate interference process necessary to form a resonance in the first place. This is the physical reason for not sticking.
# Transition from reflection to sticking in ultracold atom-surface scattering

Areez Mody, John Doyle and Eric J. Heller

Department of Physics, Harvard University, Cambridge, MA 02138

August 2000

## I Introduction

The problem of sticking of atoms to surfaces at very low collision velocities has a long history and has met with some controversy. The issue goes back to the early distorted-wave Born approximation results of Lennard-Jones [3], who obtained the threshold-law sticking probability going as \\(k\\) in the limit of low velocities. This paper is a companion to paper I [1], wherein we put the problem on a firmer theoretical foundation. We showed (non-perturbatively) that in an ultracold collision a simplistic one-body view of things is essentially correct even if the number of internal degrees of freedom is very large. We concluded that approaching atoms will not stick to surfaces if the approach velocity is low enough, even if the surface is warm. From the methods used, it is clear that the non-sticking rule would apply to clusters as well as semi-infinite surfaces, and would also apply to projectiles more complex than atoms. From an experimental perspective, atom-surface sticking could impact the area of guiding and trapping atoms in material wires and containers. In those applications it is necessary to predict the velocities needed for quantum reflection, for sticking, and for the transition regime between them. We do so in this paper. Above a certain temperature or kinetic energy, but still well below the attractive well depth of the atom-surface potential, atoms will stick to surfaces with near 100 percent certainty. The reason for this is simple: classical trajectory simulations of atom-surface collisions at low collision velocities indicate sticking with near certainty, because the acceleration in the attractive regime is followed by a hard collision with the wall. This almost always leads to sufficient energy loss from the particle to the surface that immediate escape is not possible. This is true so long as the approach energy is significantly less than the well depth, which is itself greater than the temperature of the surface. Therefore, the onset of quantum reflection is heralded by a breakdown of the WKB approximation - an approximation based purely on the (sticking) classical trajectories. Thus, there must exist a transition region between the non-sticking regime at very low collision velocities and the sticking regime at higher velocities. The key to understanding the transition region is to understand the validity of classical mechanics (WKB) as applied to the sticking problem. The correctness of the simplistic one-body physics of quantum reflection from the surface focuses our study on the WKB approximation in the coordinate normal to the surface. The entrance-channel wavefunction thus obtained may also be used as input into the Golden Rule to study the threshold behaviour of the inelastic cross-sections; we do this in Section VI. The non-sticking threshold behavior we established in paper I is interpreted as an extension of the validity of the so-called Wigner threshold behavior. We are also able to make definite predictions about the nature of the post-threshold behavior of sticking in terms of inelastic cross-sections (Section VI).

## II Quantum reflection and WKB

We consider the typical case of an attractive potential arising out of the cumulative effect of Van der Waals attractions between target and incident atoms.
A classical atom would proceed straight into the interaction region, showing no sign of reflection, but the quantum mechanical probability of being found inside is suppressed by a factor of \\(k\\) (as \\(k\\to 0\\)) compared to the classical probability (Section VI.1), where \\(k\\) is the wave vector of the incoming atom. This is tantamount to saying that quantum mechanically the amplitude is reflected back without penetrating the interaction region, analogous to the elementary case of reflection from the edge of a step-down potential in one dimension while attempting to go over the edge. A useful way to view this is to attribute the reflection to the failure of the WKB approximation. To be specific, we keep the geometry of paper I in mind: an atom is incident from the right (\\(x>0\\)) upon the face of a slab (\\(x=0\\)) that lies to the left of \\(x=0\\). For a low incoming energy \\(\\epsilon\\equiv\\hbar^{2}k^{2}/2m\\), a left-moving WKB solution begun well inside the interaction region will fail to match onto a purely left-going WKB solution as we integrate out to large distances, because the WKB criterion \\[|\\lambda^{\\prime}(x)|=\\left|\\frac{\\hbar p^{\\prime}}{p^{2}}\\right|\\ll 1 \\tag{1}\\] for the local accuracy of the wavefunction will in general fail to be valid in some intermediate region. For bounded potentials that turn on abruptly, for example at \\(x=a\\), it is obvious that WKB will fail near \\(x\\sim a\\). For long-range potentials such as a power law \\(V(x)=-c_{n}/x^{n}\\), it is not immediately obvious where this region of WKB failure lies, if it exists at all. It turns out that even in this case it is possible to identify (for small enough \\(\\epsilon\\)) a distance (dependent on \\(\\epsilon\\)) at which the potential 'turns on' and where WKB will fail. We will show below that WKB is at its worst (\\(|\\lambda^{\\prime}(x)|\\) is maximized) at a distance \\(x\\) where the kinetic and potential energies are approximately equal, i.e. where \\(|V(x)|\\sim\\epsilon\\). The distance from the slab at which the particle is turned around - or quantum reflected - is precisely this distance. Furthermore, one may heuristically expect that the greater the failure of WKB, the greater the reflection. Fig. 1 shows a plot of the error term in Eq. (1) for three different values of the incoming energy of neon on a semi-infinite slab of SiN. The essential points to notice are: 1) There is a greater error incurred in attempting to apply the WKB (classical mechanics) approximation to colder atoms than to warmer ones. Consequently, we expect that the slower the atom, the more non-classical its behavior. In particular, slow enough atoms will be 'quantum reflected' and will not stick. 2) As the incoming velocity is decreased, the atom is reflected at distances progressively further and further away from the slab. This is because the interval in \\(x\\) around which the WKB error is large may be identified as the region from which the atom is reflected. A useful qualitative rule of thumb, obtained in Section III below, is that the region of WKB error reaches all the way out to those regions where the potential energy is still roughly the same order of magnitude as the incoming energy (Eq. (5)). This means that as \\(\\epsilon\\to 0\\) the error is still large where the potential energy graph looks essentially flat.
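These statements are easy to reproduce numerically. The sketch below assumes only the long-range form \\(V(x)=-c_{3}/x^{3}\\), with the \\(c_{3}\\) for Ne on SiN quoted in Section V, and evaluates the error term of Eq. (1); the maximum error grows as the energy is lowered, and it sits where \\(|V(x)|\\) is a few times \\(\\epsilon\\), anticipating Eq. (5) of the next section.

```python
import numpy as np

# WKB error |hbar p'/p^2| of Eq. (1) for V(x) = -c3/x^3 (Ne on SiN).
hbar = 1.054571e-34                   # J s
kB   = 1.380649e-23                   # J/K
m    = 20.18e-3 / 6.022e23            # Ne atomic mass, kg
c3   = 220 * 1.602177e-22 * 1e-30     # 220 meV A^3 -> J m^3

def wkb_error(x, eps):
    V  = -c3 / x**3
    p  = np.sqrt(2*m*(eps - V))       # local momentum
    dp = m * (3*c3/x**4) / p          # |p'| = m|V'|/p, cf. Eq. (3) below
    return hbar * dp / p**2

x = np.logspace(-9, -5, 4000)         # 1 nm .. 10 um from the slab
for T in [200e-9, 2e-9, 0.02e-9]:     # the three energies of Fig. 1
    eps = 1.5 * kB * T                # <eps> = (3/2) kB T convention
    err = wkb_error(x, eps)
    xm = x[np.argmax(err)]
    print(T, err.max(), xm, (c3/xm**3)/eps)   # last ratio ~ 8 = 2(n+1)/(n-2)
```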
In fact, as \\(\\epsilon\\to 0\\), it is easily shown that a plot of the WKB error will show a non-uniform convergence to a polynomial proportional to \\[x^{\\frac{n}{2}-1}\\qquad\\mbox{for all $n>0$} \\tag{2}\\] Fig. 1 shows the case for \\(n=3\\).

Figure 1: The WKB error of Eq. (1) for three different values of the incoming energy (200, 2 and 0.02 nK) vs. the distance \\(x\\) (nm) from the slab (SiN). The long-range form of the potential, \\(-c_{3}/x^{3}\\) (\\(c_{3}\\)=220 meV Å\\({}^{3}\\)), is also shown, for which the negative 'y-axis' is calibrated in the different units of energy. The sticking probabilities for the three cases are approximately 1, 0.6 and 0.1.

## III WKB failure

Differentiating \\(p^{2}/2m+V(x)=\\epsilon\\) w.r.t. \\(x\\), we have \\[p^{\\prime}=\\frac{-mV^{\\prime}}{p} \\tag{3}\\] which, when used repeatedly to eliminate \\(p^{\\prime}\\), shows that \\(|p^{\\prime}/p^{2}|\\) in Eq. (1) is maximized when \\[\\frac{p^{2}}{3m}=-\\frac{V^{\\prime 2}}{V^{\\prime\\prime}}. \\tag{4}\\] For \\(V(x)=-c_{n}/x^{n}\\), this is exactly when \\[|V(x)|=\\epsilon\\left(\\frac{2(n+1)}{n-2}\\right). \\tag{5}\\] We discover that for \\(n>2\\) only, we have a point where WKB is at its worst at a distance \\(x\\) where \\(|V(x)|\\sim\\epsilon\\), and moreover, that this maximum behaves like \\[\\max\\left|\\frac{p^{\\prime}}{p^{2}}\\right|\\sim\\frac{1}{c_{n}^{1/n}\\,\\epsilon^{\\frac{1}{2}-\\frac{1}{n}}}\\sim\\frac{1}{c_{n}^{1/n}\\,k^{1-\\frac{2}{n}}} \\tag{6}\\] which for \\(n>2\\) diverges as \\(k\\to 0\\). Note how a _weaker_ potential (smaller \\(c_{n}\\)) is _better_ at reflecting a particle at the same energy, but allows the atom to approach closer. Heuristically, a sketch of \\(V(x)=-c_{n}/x^{n}\\) reveals why: the weaker potential is seen to turn on more abruptly at a point closer to \\(x=0\\), promoting a greater breakdown of WKB there. Alternatively, a simple scaling argument with the Schrödinger equation reveals the same trend. The above conclusions are valid only for \\(n>2\\). For \\(n\\leq 2\\) the error term of Eq. (1) looks qualitatively different from that in Fig. 1: it is small at all distances except near \\(x=0\\), where it diverges to infinity, as is evident from Eq. (2). If the physical parameters are such that this region of WKB failure very close to the slab is never actually manifest in the long-range part of the potential, then the 'no-reflection' classical behaviour will be valid all the way up to distances near the slab where the atom begins to feel the short-range forces and lose energy to the internal degrees of freedom. For such a case, with \\(n<2\\), we believe one will _not_ observe quantum reflection.

## IV Sticking probability

Having established that the reflection is caused by a well-defined localized region, we solve the one-dimensional Schrödinger equation around this region to accurately compute the reflection probability. For an attractive power-law potential \\(V(x)=-c_{n}/x^{n}\\), the relevant one-dimensional equation is \\[\\left(\\frac{d^{2}}{dx^{2}}+\\frac{a_{n}^{n-2}}{x^{n}}+k^{2}\\right)\\phi_{e}(x)=0. \\tag{7}\\] \\(\\phi_{e}(x)\\) is the entrance-channel wavefunction. The length scale \\[a_{n}\\equiv(2mc_{n}/\\hbar^{2})^{\\frac{1}{n-2}}, \\tag{8}\\] contains all the qualitative information about the reflection. Its relevance is twofold.
Firstly, the sticking probability for small \\(k\\) behaves as \\[P_{\\rm sticking}\\sim N_{n}\\,k\\,a_{n} \\tag{9}\\] where \\(N_{n}\\) is a pure numeric constant (roughly of order 10 for \\(n=3\\), and of order 1 for \\(n=4,5\\)), see Ref. [6]. \\(P_{\\rm sticking}\\) may be computed numerically for any \\(k\\), and Fig. 2 shows \\(P_{\\rm sticking}\\) vs. \\(ka_{n}\\) for \\(n=3,4\\), and 5. Secondly, the distance at which the particle is turned around is estimated by solving \\[\\left(\\frac{a_{n}}{x}\\right)^{n}=(ka_{n})^{2} \\tag{10}\\] for \\(x\\), which is just the requirement that \\(|V(x)|=\\epsilon\\). Equation (9) together with Eq. (8) makes it plain that a smaller \\(c_{n}\\) is more conducive to making quantum reflection happen, while Eq. (10) indicates that the turnaround point is then necessarily closer to the surface. With these effects in mind, we look at some specific cases.

## V Examples

We examine the case of incidence on a slab which may be treated as semi-infinite, and also the case when it is a thin film. It is useful to first look at these cases pretending there is no Casimir interaction, and assuming that the short range form of the potential is everywhere valid. Afterwards we put in the Casimir interaction. For clarity we will pick a specific example of target and incident atoms for most of our discussions, by specifying the numeric values for the short range potential between the atom and a semi-infinite slab, since these are most comprehensively tabulated in reference [4]. Fig. 3 shows the sticking probability vs. the temperature of an incoming Ne atom in units of \\(10^{-9}\\) Kelvin. The slab is silicon nitride (SiN). The various curves are for the different cases, depending on whether we are considering a thick or thin slab, and whether the Casimir effect is included or not. We will discuss these cases below, pointing out the relevant length and energy scales involved in deciding to label the slab as semi-infinite or thin. The mapping from the mathematically natural \\(ka_{n}\\) (with \\(n=3\\) and \\(c_{3}=220\\) meV \\(\\mbox{\\AA}^{3}\\)) scale of Fig. 2 to the more physical temperature scale of Fig. 3 is made using \\[T\\simeq\\ [69.08\\ \\ {\\rm Kelvin}]\\ \\left(\\frac{m_{H}}{m_{\\rm atom}}\\right)^{3}\\left(\\frac{{\\rm meV\\,\\mbox{\\AA}}^{3}}{c_{3}}\\right)^{2}\\,(ka_{3})^{2} \\tag{11}\\] where we used \\(\\langle\\epsilon\\rangle=(3/2)k_{B}T\\) to compute the temperature by setting \\(\\langle\\epsilon\\rangle\\) equal to the incoming energy. \\(m_{H}=\\) mass of the hydrogen atom, and for our example \\(m_{\\rm Ne}=20.03\\,m_{H}\\). All the graphs in Fig. 3 have an initial slope of 0.5, indicating the \\(\\sqrt{\\epsilon}\\) behaviour of the sticking probabilities once the energies are low enough to be in the quantum reflection regime. We arbitrarily (but intuitively) define the temperature at which there is a transition to the post-threshold sticking regime as the temperature where the slope becomes 0.4. For the thin film case of 10 nm in our example this temperature is 10 nK. While the parameters in our example are fairly typical, it is clear that the cubic dependence on mass and the quadratic dependence on the \\(c_{3}\\) coefficient in Eq. (11) will make this temperature range over quite a few orders of magnitude. The \\(c_{3}\\)'s in Ref. [4], listed in units of meV \\(\\mbox{\\AA}^{3}\\) for a variety of surface-atom pairs, range in values from 100 to 3000.
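Before turning to the examples, here is a minimal sketch (our own bookkeeping, not the paper's code) of the two unit conversions used above: the length scale of Eq. (8) for \\(n=3\\), and the \\(ka_{3}\\to\\) temperature mapping of Eq. (11), for Ne on SiN.

```python
import numpy as np

# a3 = 2 m c3 / hbar^2 (Eq. (8) with n = 3) and the Eq. (11) mapping.
hbar = 1.0546e-34               # J s
mH = 1.6726e-27                 # kg
m = 20.03 * mH                  # Ne
c3 = 220 * 1.602e-22 * 1e-30    # 220 meV Angstrom^3 -> J m^3

a3 = 2 * m * c3 / hbar**2
print(f"a3 = {a3 * 1e9:.0f} nm")        # ~212 nm, cf. Eq. (14) below

def temperature_K(ka3, mass=m, c3_meVA3=220.0):
    # Eq. (11), which assumes <eps> = (3/2) k_B T
    return 69.08 * (mH / mass)**3 * (1.0 / c3_meVA3)**2 * ka3**2

for ka3 in (0.01, 0.1, 1.0):
    print(f"k a3 = {ka3:5.2f} -> T = {temperature_K(ka3) * 1e9:.3g} nK")
```

With Eq. (9) and \\(N_{3}\\sim 10\\), \\(ka_{3}=0.01\\) (about 0.02 nK here) corresponds to a sticking probability of roughly 0.1, consistent with the numbers quoted in the caption of Fig. 1.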
Figure 2: Sticking probabilities for an atom incident on a surface providing a long range interaction of the form \\(V(x)=-c_{n}/x^{n}\\) for the cases \\(n=3,4,5\\). Note that the length scale \\(a_{n}\\) used to compute the dimensionless \\(ka_{n}\\) coordinate on the 'x-axis', vs. which we plot the sticking probabilities, is different for each \\(n\\).

### Semi-Infinite Slab (without Casimir)

Even though \\(c_{3}\\) coefficients are known both theoretically and experimentally for many surface-atom pairs, for completeness we take a moment to look at a quick way of estimating them. This is provided by the London formula \\[V_{\\rm atom-atom}(r)=\\frac{-3}{2}\\frac{I_{A}I_{B}}{I_{A}+I_{B}}\\frac{\\alpha_{A}\\alpha_{B}}{r^{6}}\\equiv\\frac{-c_{6}}{r^{6}}, \\tag{12}\\] which estimates the Van der Waals interaction between any two atoms. \\(I\\) is the ionization potential and \\(\\alpha\\) the polarizability of each atom. Then, summing over all the atoms in the semi-infinite (thick) slab, we get \\[V_{\\rm slab-atom}(x)=\\frac{-\\pi c_{6}\\rho_{\\rm atoms}}{6}\\times\\frac{1}{x^{3}}\\equiv\\frac{-c_{3}}{x^{3}} \\tag{13}\\] where \\(\\rho_{\\rm atoms}=\\) the density of slab atoms. These estimates are not very accurate, but correctly indicate the physical quantities on which the answer depends. Reference [4] provides a useful compendium of these coefficients. We have used \\(c_{3}=220\\pm 4\\) meV \\(\\mbox{\\AA}^{3}\\) for neon atoms incident on silicon nitride from the work of [5]. This choice of \\(c_{3}\\) makes \\[a_{3}\\simeq 212\\;{\\rm nm} \\tag{14}\\] Thus the 'semi-infinite slab' curve of Fig. 3 is the \\(n=3\\) curve of Fig. 2 scaled to temperature units using Eq. (11).

### Thin Slab (without Casimir)

From far enough away any slab will appear thin. The surface-atom interaction will behave like \\[\\frac{-c_{3}}{x^{3}}-\\frac{-c_{3}}{(x+d)^{3}}\\simeq\\frac{-3dc_{3}}{x^{4}} \\tag{15}\\] for \\(x\\gg d\\), where \\(d\\) is the thickness of the slab. The resulting \\(c_{4}\\) coefficient, equal to \\(3dc_{3}\\), gives a length scale \\(a_{4}\\) that can be written as \\[a_{4}=\\sqrt{\\frac{2m}{\\hbar^{2}}3dc_{3}}=a_{3}\\left[\\frac{3d}{a_{3}}\\right]^{1/2}. \\tag{16}\\] For macroscopic values of \\(d\\) (\\(\\gg a_{3}\\)), then, it is only for vanishingly small incident energies that the finiteness of the slab becomes apparent. For any macroscopic \\(d\\) this will be physically irrelevant. For microscopic \\(d\\) (\\(\\ll a_{3}\\)), however, this window in energy over which the thinness of the slab makes an appreciable difference can be larger and even prevail for all energies. To continue our illustrative example we pick the microscopic value of \\(d=10\\) nm. This makes \\(a_{4}\\simeq 800\\) nm. The 'thin slab' curve of Fig. 3 shows that the sticking probabilities are substantially reduced and the onset of quantum reflection occurs at a much higher energy. As a benchmark case, we also include what will likely be the physically limiting case for a continuous film of \\(d=1\\) nm. This further reduces the sticking probabilities for a fixed temperature by a factor of \\(\\sqrt{10}\\), because the important quantity \\(a_{4}\\) is reduced by this much (Eq. (16)). The transition temperature appears to have increased by 3 orders of magnitude versus the semi-infinite case.

### Semi-infinite slab (Casimir Regime)

As the incoming energy \\(\\epsilon\\) tends to 0, we have seen that the turn-around region from which the atom 'quantum reflects' moves progressively further away from the slab.
At large distances, however, it is well known that the interaction potential itself takes on a different form due to Casimir effects. In particular, a semi-infinite dielectric slab (dielectric constant \\(\\epsilon_{s}\\)) has an interaction potential with an atom of polarizability \\(\\alpha\\) given by \\[V_{\\rm slab-atom}(x) = \\frac{-3}{8\\pi}\\frac{\\hbar c\\,\\alpha}{x^{4}}\\frac{\\epsilon_{s}-1}{\\epsilon_{s}+37/23}\\qquad x\\rightarrow\\infty \\tag{17}\\] \\[= \\frac{-235\\,({\\rm eV\\,\\mbox{\\AA}})\\,\\alpha}{x^{4}}\\frac{\\epsilon_{s}-1}{\\epsilon_{s}+37/23}\\qquad x\\rightarrow\\infty \\tag{18}\\] Even for sufficiently large \\(x\\) the form above is not exact, but it is a good approximation found in Ref. [7]. Our purpose here is only to estimate the various numbers to see their relevance. It will suffice to put \\(\\alpha_{Ne}=0.39\\,\\mbox{\\AA}^{3}\\) and to replace the last factor involving \\(\\epsilon_{s}\\) by 1, since most solids and liquids have \\(\\epsilon_{s}\\) substantially greater than 1. This gives a \\(c_{4}^{(C)}\\) coefficient of \\(9\\times 10^{4}\\) meV \\(\\mbox{\\AA}^{4}\\) and hence a resulting \\(a_{4}^{(C)}=93\\) nm. The superscript 'C' reminds us it is due to the Casimir interaction, which is valid only for large enough \\(x\\). To estimate the distance beyond which the Casimir form itself is valid, we use the statement from Ref. [8]: 'Within a factor of 2, the van der Waals potential is correct at distances less than \\(0.12\\lambda_{tr}\\), while the Casimir potential is correct at longer range.' Here \\(\\lambda_{tr}=[1240\\,{\\rm nm}](\\frac{eV}{\\Delta E})\\) is the wavelength associated with the transition between the ground and excited state that gives the atom its polarizability; \\(\\Delta E\\) is the transition energy.

Figure 3: Sticking probabilities vs temperature of incident Ne atoms on SiN. The broken lines indicate the inclusion of the very long range Casimir forces (see text). The large dot demarcates the regions of threshold and post-threshold, using the criterion suggested at the end of Section V.

Knowing this much we may deduce the qualitative features of the sticking probability curve, the arguments being similar to the cases above. For this Casimir case and the one below, however, there is a caveat to all this. The exact manner in which the potential changes from its near range form to its long-range Casimir form can certainly affect the sticking probabilities at the intermediate energy where it makes this transition. Some numerical experimentation, choosing arbitrary forms of the potential having the correct short-range and long-range behavior, confirms this. Therefore the curves in Fig. 3 involving Casimir forces are only qualitatively and _not_ quantitatively correct.

### Thin Slab (Casimir Regime)

Even for a thin slab we expect that the distance at which the Casimir interaction is valid remains the same as for a semi-infinite slab made of the same material. At these distances, if \\(x\\gg d\\) is also valid, then one may expect the surface-atom interaction to behave like \\[\\frac{-c_{4}^{(C)}}{x^{4}}-\\frac{-c_{4}^{(C)}}{(x+d)^{4}}\\simeq\\frac{-4dc_{4}^{(C)}}{x^{5}} \\tag{19}\\] The length scale \\[a_{5}^{(C)}=a_{4}^{(C)}\\left[\\frac{4d}{a_{4}^{(C)}}\\right]^{1/3} \\tag{20}\\] associated with this \\(c_{5}=4dc_{4}^{(C)}\\) coefficient makes \\(a_{5}^{(C)}=717\\) nm. Figure 3 shows a slight decrease in the sticking probabilities, the effect being evidently less here than in the case of the thick slab.
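For the interested reader, here is a minimal sketch of the one-dimensional computation behind Fig. 2 (our reconstruction, not the authors' code): Eq. (7) for \\(n=3\\) is integrated in units \\(\\hbar=2m=a_{3}=1\\), starting deep inside with a purely incoming WKB wave (full absorption at the surface is assumed, as in the text's picture) and reading off the reflected amplitude far outside, so that \\(P_{\\rm sticking}=1-|R|^{2}\\).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate  phi'' + (1/x^3 + k^2) phi = 0  (Eq. (7), n = 3, scaled units).
def sticking(k, x_in=1e-4, x_out=None):
    if x_out is None:
        x_out = max(60.0, 10.0 * k ** (-2.0 / 3.0))   # well past |V(x)| ~ eps

    def p(x):                                         # local momentum
        return np.sqrt(1.0 / x**3 + k**2)

    def rhs(x, y):                 # y = (Re phi, Im phi, Re phi', Im phi')
        phi = y[0] + 1j * y[1]
        d2 = -(1.0 / x**3 + k**2) * phi
        return [y[2], y[3], d2.real, d2.imag]

    # Incoming WKB wave phi = p^{-1/2} exp(-i S), S' = p, imposed at x_in,
    # where the WKB error of Eq. (1) vanishes like x^{1/2}.
    p0 = p(x_in)
    dp0 = -1.5 / (x_in**4 * p0)                       # p'(x_in)
    phi0 = p0 ** -0.5 + 0j
    dphi0 = (-dp0 / (2.0 * p0) - 1j * p0) * phi0
    sol = solve_ivp(rhs, (x_in, x_out),
                    [phi0.real, phi0.imag, dphi0.real, dphi0.imag],
                    method="DOP853", rtol=1e-10, atol=1e-12)
    phi = sol.y[0, -1] + 1j * sol.y[1, -1]
    dphi = sol.y[2, -1] + 1j * sol.y[3, -1]
    x = sol.t[-1]
    # Outside, phi = A exp(-ikx) + B exp(+ikx): incident and reflected waves.
    A = 0.5 * (phi + 1j * dphi / k) * np.exp(1j * k * x)
    B = 0.5 * (phi - 1j * dphi / k) * np.exp(-1j * k * x)
    return 1.0 - abs(B / A) ** 2

for ka3 in (0.005, 0.01, 0.02, 0.05):
    P = sticking(ka3)
    print(f"k a3 = {ka3:5.3f}: P_sticking = {P:.4f}, P/(k a3) = {P / ka3:.1f}")
```

The near-constancy of \\(P/(ka_{3})\\) at small \\(k\\) exhibits the threshold law of Eq. (9); shrinking `x_in` and tightening the tolerances checks convergence.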
### Hydrogen on 'thick' Helium

Rather atypical, but extremely favourable, parameters (\\(c_{3}=18\\) meV \\(\\AA^{3}\\)) are found in the case of Hydrogen atoms incident on bulk liquid Helium. Evidence for quantum reflection was experimentally seen in this system [9]. A comparison with the parameters used in our example of Ne on SiN: \\[m_{\\rm Ne}/m_{H}=20.03\\;\\;{\\rm and}\\;\\;c_{3}^{({\\rm Ne-SiN})}/c_{3}^{({\\rm H-He})}=220/18. \\tag{21}\\] With the use of Eq. (11), we see that the sticking probabilities for this case are in fact the same curves as in Fig. 3, except shifted to the right in temperature by about 6.1 orders of magnitude. This puts it exactly in the milli-Kelvin regime, where sticking probabilities of about 0.01 to 0.03 were observed as temperatures ranged from about 0.3 mK to 5 mK [9]. However, the sticking probabilities predicted by the 'semi-infinite slab\\({}^{(C)}\\)' curve of Fig. 3 are about a factor 2.5 too large, but we feel there is good reason for this. We already mentioned the qualitative manner in which the Casimir forces were included, but it seems that a greater error is caused for another reason. The length scale \\(a_{3}=17\\) nm for H-He is so small that the region of WKB error lies close in (see Fig. 1), where the interaction potential is not exactly of the form \\(\\sim 1/x^{3}\\). Practically speaking, this means that the region over which we must integrate Eq. (7) must include points close to the slab to get some convergence, and thus we are violating the assumption that the potential is \\(\\sim 1/x^{3}\\) there. This problem would not plague the Ne-SiN case too much, because the length scale there is substantially bigger. For H-He we must include some short-range information to get an improvement. Still, it is the long-range forces that are mostly responsible. Refs. [10, 11] and others have modeled this close range behaviour and obtained better agreement, the improvement coming from explicit consideration of the bound states supported by the close range potential. These appear in the potential matrix elements of perturbation theory.

## VI Relation to Threshold Behavior

We now wish to take a broader view of the quantum reflection behaviour at threshold (\\(k\\to 0\\)), and the sticking that sets in as the energy is increased - a post-threshold behavior. In particular we want to make connection to, and extend, the well-known threshold behaviors of inelastic rates which were first stated most generally by Wigner in Ref. [12]. For example, Wigner showed that the exothermic excitation rates for collisions between two bodies with bound internal degrees of freedom tend to a constant value as their relative translational energy tends to 0, provided there is no resonance at the 0 translational energy threshold. Equivalently, the exothermic inelastic cross-section diverges as \\(1/v\\), a fact known in the still older literature as the '\\(1/v\\) law'. \\(v\\) is the relative velocity of the collision. Notice especially the proviso in the statement above, that there be no resonance at the threshold energy, suggesting that the many resonances between 0 and \\(\\epsilon\\) provided by a many body target could make the law inoperative. But the entire thrust of paper I was to establish quite generally that this many-resonance regime was precisely the one for which the old '\\(1/v\\) law' is reinstated. Here we re-examine the Wigner behaviour from a different point of view, using our understanding of quantum reflection.
In addition to furthering an intuitive understanding of the Wigner behaviour, viewing things in this way will lead naturally to predicting a generic post-threshold behavior (e.g. the \\(1/v\\) law is replaced by a \\(1/v^{2}\\) law) and an understanding of when the sticking sets back in as \\(\\epsilon\\) is increased. The reader will have noted that we have shifted our attention to a three dimensional geometry of incidence on a localized cluster instead of the one dimensional case of incidence on a slab. So long as the target dimensions are dwarfed by the incoming wavelength, we will find that both problems are effectively one dimensional, due to the fact that it is only the s-wave which can penetrate the interaction region. For clarity we will deal with both cases separately.

### Threshold and Post-Threshold Inelastic Cross-sections

The starting point is the template provided by the golden rule \\[d\\sigma_{e\\to c}\\propto\\frac{1}{k}\\rho(E_{c})\\left|\\int\\limits_{{\\rm all}\\ \\vec{r}}\\ d^{3}r\\ \\phi^{(-)}_{c,\\vec{k}_{c}}(\\vec{r})\\ U_{ce}(\\vec{r})\\phi^{(+)}_{e,\\vec{k}}(\\vec{r})\\right|^{2} \\tag{22}\\] for the differential cross-section for inelastic transitions from internal state \\(\\Omega_{e}(u)\\) to \\(\\Omega_{c}(u)\\), where \\(\\vec{k}\\) and \\(\\vec{k}_{c}\\) are the incoming and outgoing directions of the incident atom. We describe briefly how Eq. (22) is arrived at. For each internal state \\(\\Omega_{c}(u)\\) (\\(c=1,2,\\cdots n\\)) that we may imagine freezing the target in (\\(u\\) incorporates all the target degrees of freedom), there is some effective potential felt by the incoming atom. These potentials are just the diagonal elements of the complete interaction potential \\(U(x,u)\\) in the \\(\\Omega_{c}(u)\\) basis, which if present all by themselves (off-diagonal elements 0) could only cause an elastic collision to occur. It is the off-diagonal elements that may be thought of as causing the inelastic transitions. Treating them as a perturbation on the elastic scattering wavefunctions, we use the Golden Rule to obtain Eq. (22). \\(\\rho(E)\\) is the energy density of states of the free atom. \\(\\phi^{(+)}_{e,\\vec{k}}(\\vec{r})\\) is the entrance channel wavefunction and \\(\\phi^{(-)}_{c,\\vec{k}_{c}}(\\vec{r})\\) is the final channel wavefunction. They are both exact elastic scattering wavefunctions in the potentials \\(U_{ee}(\\vec{r})\\) and \\(U_{cc}(\\vec{r})\\) respectively. The factor of \\(1/k\\) divides the Golden Rule rate by the flux to get the probability. Now all the \\(k\\) dependence of \\(d\\sigma_{e\\to c}\\), and hence \\(\\sigma_{e\\to c}\\), is due to 1) the factor \\(1/k\\), and 2) the sensitive \\(k\\)-dependence of the amplitude of the entrance channel wavefunction inside the interaction region, over which the overlap integral of Eq. (22) takes place. This is simply because the incoming amplitude is more reflected away by the potential as \\(k\\to 0\\), resulting in the interior amplitude being suppressed by a factor of \\(k\\) as compared to what one would expect classically.

#### VI.1.1 Incidence on a Slab

For this one-dimensional situation we speak of an inelastic probability instead of a cross-section, but otherwise Eq. (22) remains entirely valid here also, with the obvious modifications.
For \\(k\\to 0\\), when WKB is invalid, we established quite generally [2] that the entrance channel wavefunction \\(\\phi_{e}(x)\\), when normalized to have a fixed incoming flux, has its amplitude inside the interaction region behaving like \\[\\phi_{e}(x_{\\rm inside})\\sim k\\qquad{\\rm Threshold} \\tag{23}\\] Now the change from quantum reflection at threshold to sticking at post-threshold (see Fig. 2) begins to occur at those energies at which the WKB wavefunctions - which show no quantum reflection - may be increasingly trusted. At these energies, where WKB is valid, we may simply use the well-known WKB amplitude factor \\(1/\\sqrt{k(x)}\\) to conclude that \\[\\phi_{e}(x_{\\rm inside})\\sim\\sqrt{k}\\qquad{\\rm Post-Threshold}. \\tag{24}\\] The probability density of being found inside then behaves like \\(k^{2}\\) at threshold (quantum reflection) and like \\(k\\) at post-threshold (no quantum reflection), respectively. It is quite natural that the probability density inside the interaction region is smaller compared to the outside by a factor of \\(k\\), even when there is no quantum reflection. This is simply a kinematical effect: where the particle is moving faster, it is less likely to be found, by a factor inversely proportional to its velocity there. What is classically unexpected is that for small enough \\(k\\) near threshold, the probabilities inside are _further_ suppressed by a factor of \\(k\\). Quantum reflection of the amplitude from the region around \\(|V(x)|=\\epsilon\\) (Section II) goes hand in hand with the quantum suppression of the amplitude within this region. So finally, including this \\(k\\)-dependence of the amplitude of \\(\\phi_{e}(x)\\) found in Eqs. (23) and (24), we get \\[P_{e\\to c} \\propto k\\qquad{\\rm Threshold}\\] \\[P_{e\\to c} \\propto const.\\qquad{\\rm Post-Threshold} \\tag{25}\\]

#### VI.1.2 Incidence on a Cluster

Since for large wavelengths only the s-wave interacts with the cluster, it is clear that the problem may be reduced in the usual manner to a one dimensional problem again. Therefore for a unit _s-wave flux_ the inelastic probabilities will behave as before, as in Eqs. (25), but what is really relevant is a unit _plane wave_ flux, which provides an s-wave flux of \\(\\pi/k^{2}\\). That is, even though the problem is one-dimensional in the radial co-ordinate, the required normalization for the incoming flux is not fixed to be a constant as before, but is now required to grow as \\(\\sim 1/k^{2}\\), in order to correctly account for the increasing (as \\(k\\to 0\\)) range of impact parameters that all 'count as' s-wave. Thus we have simply to multiply the one-dimensional probabilities of Eqs. (25) by this factor of \\(1/k^{2}\\), and conclude that the inelastic cross-sections for this cluster geometry behave like \\[\\sigma_{e\\to c} \\propto \\frac{1}{k}\\qquad{\\rm Threshold}\\] \\[\\sigma_{e\\to c} \\propto \\frac{1}{k^{2}}\\qquad{\\rm Post-Threshold} \\tag{26}\\] The Threshold result of Eq. (26) is just the Wigner '\\(1/v\\) law' we spoke of in Section VI. But now we can say more. As the incoming wavelength \\(\\lambda\\) increases, we first witness, for large enough \\(\\lambda\\), a quadratic dependence of the exothermic cross-section (\\(\\sigma\\propto\\lambda^{2}\\)). It is only at still larger wavelengths that this dependence eventually changes over to a linear one (\\(\\sigma\\propto\\lambda\\)).
This happens when the sticking yields to the quantum reflection. This energy is mostly determined by the long range form of the potential, and has nothing to do with the bound state energies or any other details involving the interaction potential.

## VII Conclusion

Examining the WKB error term provided a quick and easy way to estimate the threshold temperatures required to observe quantum reflection. It became transparent that only power laws dying off faster than \\(1/x^{2}\\) are capable of acting as quantum reflectors. The validity of WKB at higher temperatures heralded a post-threshold behavior in which the atom sticks. Even for other geometries, such as incidence on a localised three dimensional cluster, a WKB analysis together with the Fermi Golden Rule provided a simple understanding of this threshold and post-threshold behavior in terms of inelastic processes being shut off due to a reflection of the incoming amplitude. The extremely long incoming wavelength is invariably impedance mismatched (for potentials shorter ranged than \\(1/r^{2}\\)) by the abrupt change of wavelength in the interaction region, and is therefore reflected. It should be clear that even a repulsive interaction will obviously provide such a mismatch, so that the Wigner behavior, or quantum reflection, is quite general; though of course it is most dramatic if the potential is attractive, as we have been considering throughout. This effect of quantum reflection/suppression, which is ultimately responsible for the threshold behaviour, is dynamical in that it is caused by the presence of the interaction potential. We feel that the original derivation by Wigner, which focuses on the \\(k\\to 0\\) behaviour of the free space wave functions, tends to obscure this physical origin of threshold behaviour. The golden rule approach makes it more explicit and especially paves the way for predicting the post-threshold behaviour.

###### Acknowledgements.

A.M. is most grateful to Michael Haggerty for his kind help and advice and for always being available to discuss things with. A.M. thanks Alex Barnett for pointing him to the references on the Casimir interactions. This work was supported by the National Science Foundation through a grant for the Institute for Theoretical Atomic and Molecular Physics at Harvard University and Smithsonian Astrophysical Observatory: National Science Foundation Award Number CHE-0073544. This work was also supported by the National Science Foundation by grants PHY-0071311 and PHY-9876927.

## References

* [1] Paper I.
* [2] We gave the heuristic elementary reason for the suppressed behaviour of Eq. (23) in Sec. III of paper I, where we then attempted to rigorously justify it.
* [3] J. E. Lennard-Jones et al., Proc. R. Soc. London, Ser. A **156**, 6 (1936); Ser. A **156**, 36 (1936).
* [4] G. Vidali, G. Ihm, H.-Y. Kim, and M. W. Cole, Surface Science Reports **12**, 133-181 (1991).
* [5] R. E. Grisenti, W. Schoelkopf, J. P. Toennies, G. C. Hegerfeldt, and T. Köhler, Phys. Rev. Lett. **83**, 1755 (1999).
* [6] R. Côté et al., Phys. Rev. A **56**, 1781 (1997).
* [7] L. Spruch and Y. Tikochinsky, Phys. Rev. A **48**, 4213 (1993).
* [8] E. A. Hinds and V. Sandoghdar, Phys. Rev. A **43**, 398 (1991), in particular Section II.
* [9] I. A. Yu, J. M. Doyle, et al., Phys. Rev. Lett. **71**, 1589 (1993).
* [10] C. Carraro and M. W. Cole, Phys. Rev. B **45**, 1589 (1992).
* [11] T. W. Hijmans, J. T. M. Walraven, and G. V. Shlyapnikov, Phys. Rev. B **45**, 2561 (1992).
* [12] E. P. Wigner, Phys. Rev. **73**, 1002 (1948).
In paper I [1] we showed that, under very general circumstances, an atom approaching a surface will not stick as its incoming energy approaches zero. This is true for both warm and cold surfaces. Here we explore the transition region from non-sticking to sticking as the energy is increased. The key to understanding the transition region is the WKB approximation and the nature of its breakdown. Simple rules for understanding the rollover to the higher-energy, post-threshold behavior are presented, including analytical formulae for some asymptotic forms of the attractive potential. We discuss a practical example of an atom-surface pair in various substrate geometries. We also discuss the case of low energy scattering from clusters. pacs: 03.75.Fi, 03.75.-w, 03.75.Jk
# Directed Flow of Baryons in Heavy-Ion Collisions

Yu.B. Ivanov\\({}^{a,b}\\), E.G. Nikonov\\({}^{a,c}\\), W. Nörenberg\\({}^{a}\\), A.A. Shanenko\\({}^{c}\\), and V.D. Toneev\\({}^{a,c}\\) \\({}^{a}\\) Gesellschaft für Schwerionenforschung mbH, Planckstr. 1, 64291 Darmstadt, Germany \\({}^{b}\\) Kurchatov Institute, Kurchatov sq. 1, Moscow 123182, Russia \\({}^{c}\\) Joint Institute for Nuclear Research, 141980 Dubna, Moscow Region, Russia

## I Introduction

Collective flows of various types (radial, directed, elliptic, etc.) observed experimentally in heavy-ion collisions reveal a space-momentum correlated motion of strongly interacting nuclear matter. This collective motion is essentially caused by the pressure gradients arising during the time evolution in the collision, and hence opens a promising way for obtaining information on the equation of state (EoS) and, in particular, on a possible phase transition. Recently, this feature has stimulated a large number of experimental and theoretical investigations on flow effects (cf. review articles [1, 2]). Manifestations of the deconfinement phase transition were considered already some time ago by Shuryak and Zhirov [3] and by van Hove [4]. Since a phase transition slows down the time evolution of the system due to _softening_ of the EoS, these authors expect, around some critical incident energy, a remarkable loss of correlation between the observed particle momenta and the reaction plane, and hence a reduction of the directed flow. Assuming a first-order phase transition, Hung and Shuryak [5] and Rischke et al. [6] have recently obtained quantitative predictions for heavy-ion collisions. For an expanding fireball, Hung and Shuryak expect the _softest point_ effect around \\(E_{lab}=30\\) A\\(\\cdot\\)GeV. Within a one-fluid hydrodynamic model, Rischke et al. show that the excitation function of the directed flow exhibits a deep minimum near \\(E_{lab}=6\\) A\\(\\cdot\\)GeV. However, preliminary experimental results [7] in this energy range do not confirm these predictions. In the following, we report on a study of the directed flow within a two-fluid hydrodynamic model [8] for the statistical mixed-phase EoS [9, 10], which is adjusted to available lattice QCD data.

## II Equation of State Within the Mixed-Phase Model

Our consideration is essentially based on the recently proposed Mixed-Phase (MP) model [9, 10], which is consistent with available QCD lattice data [11]. The underlying assumption of the MP model is that unbound quarks and gluons _may coexist_ with hadrons, forming a _homogeneous_ quark/gluon-hadron phase. Since the mean distance between hadrons and quarks/gluons in this mixed phase may be of the same order as that between hadrons, the interaction between all these constituents (unbound quarks/gluons and hadrons) plays an important role and defines the order of the phase transition. Within the MP model [9, 10] the effective Hamiltonian is expressed in the quasiparticle approximation with density-dependent mean-field interactions.
Under quite general requirements of confinement for color charges, the mean-field potential of quarks and gluons is approximated by \\[U_{q}(\\rho)=U_{g}(\\rho)=\\frac{A}{\\rho^{\\gamma}}\\ ;\\ \\ \\ \\gamma>0 \\tag{1}\\] with _the total density of quarks and gluons_ \\[\\rho=\\rho_{q}+\\rho_{g}+\\sum_{j}\\,\\nu_{j}\\rho_{j}\\,\\] where \\(\\rho_{q}\\) and \\(\\rho_{g}\\) are the densities of unbound quarks and gluons outside of hadrons, while \\(\\rho_{j}\\) is the density of hadron type \\(j\\) and \\(\\nu_{j}\\) is the number of valence quarks inside. The presence of the total density \\(\\rho\\) in (1) implies interactions between all components of the mixed phase. The approximation (1) mirrors two important limits of the QCD interaction. For \\(\\rho\\to 0\\), the interaction potential approaches infinity, _i.e._ an infinite energy is necessary to create an isolated quark or gluon, which simulates the confinement of color objects. In the other extreme case of large energy density, corresponding to \\(\\rho\\to\\infty\\), we have \\(U_{q}=U_{g}=0\\), which is consistent with asymptotic freedom. The use of the density-dependent potential (1) for quarks and of the hadronic potential, described by a modified non-linear mean-field model [12], requires certain constraints to be fulfilled, which are related to thermodynamic consistency [9, 10]. For the chosen form of the Hamiltonian these conditions require that \\(U_{g}(\\rho)\\) and \\(U_{q}(\\rho)\\) do not depend on temperature. From these conditions one also obtains a form for the quark-hadron potential [9]. A detailed study of the pure gluonic \\(SU(3)\\) case with a first-order phase transition allows one to fix the values of the parameters as \\(\\gamma=0.62\\) and \\(A^{1/(3\\gamma+1)}=250\\) MeV. These values are then used for the \\(SU(3)\\) system including quarks. As is shown in Fig. 1 for the case of quarks of two light flavors at zero baryon density (\\(n_{B}=0\\)), the MP model is consistent with lattice QCD data, providing a continuous phase transition of the cross-over type with a deconfinement temperature \\(T_{dec}=153\\) MeV. For a two-phase approach based on the bag model, a first-order deconfinement phase transition occurs, with a sharp jump in energy density \\(\\varepsilon\\) at a \\(T_{dec}\\) close to the value obtained from lattice QCD. Though at a glance the temperature dependences of the energy density \\(\\varepsilon\\) and pressure \\(p\\) for the different approaches presented in Fig. 1 look quite similar, large differences are revealed when \\(p/\\varepsilon\\) is plotted versus \\(\\varepsilon\\) (cf. Fig. 2, left panel). The lattice QCD data differ at low \\(\\varepsilon\\), which is due to difficulties within the Kogut-Susskind scheme [14] in treating the hadronic sector. A particular feature of the MP model is that, for \\(n_{B}=0\\), the _softest point_ of the EoS, defined as a minimum of the function \\(p(\\varepsilon)/\\varepsilon\\) [5], is not very pronounced and is located at comparatively low values of the energy density: \\(\\varepsilon_{SP}\\approx 0.45\\) GeV/fm\\({}^{3}\\), which roughly agrees with the lattice QCD value [13]. This value of \\(\\varepsilon\\) is close to the energy density inside the nucleon, and hence reaching this value indicates that we are dealing with a single _big hadron_ consisting of deconfined matter. In contradistinction, the bag-model EoS exhibits a very pronounced softest point at large energy density \\(\\varepsilon_{SP}\\approx 1.5\\) GeV/fm\\({}^{3}\\) [5, 6].
The MP model can be extended to baryon-rich systems in a parameter-free way [9, 10]. As demonstrated in Fig. 2 (right panel), the softest point for baryonic matter is gradually washed out with increasing baryon density and vanishes for \\(n_{B}\\gtrsim 0.4\\)\\(n_{0}\\) (\\(n_{0}\\) is normal nuclear matter density). This behavior differs drastically from that of the two-phase bag-model EoS, where \\(\\varepsilon_{SP}\\) is only weakly dependent on \\(n_{B}\\) [5, 6]. It is of interest to note that the interacting hadron gas model has no softest point at all and, in this respect, its thermodynamic behavior is close to that of the MP model at high energy densities [10]. These differences between the various models should manifest themselves in the dynamics discussed below.

Figure 1: The reduced energy density \\(\\varepsilon/\\varepsilon_{SB}\\) and pressure \\(p/p_{SB}\\) (\\(\\varepsilon_{SB}\\) and \\(p_{SB}\\) are the corresponding Stefan-Boltzmann quantities) of the \\(SU(3)\\) system with two light flavors for \\(n_{B}=0\\), calculated within the MP (solid lines) and bag (dashed lines) models. Circles and squares are lattice QCD data obtained within the Wilson [13] and Kogut-Susskind [14] schemes, respectively.

Figure 2: The \\((\\varepsilon,p/\\varepsilon)\\)-representation of the EoS for the two-flavor \\(SU(3)\\) system at various baryon densities \\(n_{B}\\). Notation of data points and lines is the same as in Fig. 1.

## III Two-fluid hydrodynamic model

In contrast to the one-fluid hydrodynamic model, where local instantaneous stopping of projectile and target matter is assumed, a specific feature of the dynamical two-fluid description is a finite stopping power. Experimental rapidity distributions in nucleus-nucleus collisions support this specific feature of the two-fluid model. In accordance with [8], the total baryonic current and energy-momentum tensor are written as \\[J^{\\mu} = J_{p}^{\\mu}+J_{t}^{\\mu}\\ , \\tag{2}\\] \\[T^{\\mu\\nu} = T_{p}^{\\mu\\nu}+T_{t}^{\\mu\\nu}\\ , \\tag{3}\\] where the baryonic current \\(J^{\\mu}_{\\alpha}=n_{\\alpha}u^{\\mu}_{\\alpha}\\) and energy-momentum tensor \\(T^{\\mu\\nu}_{\\alpha}\\) of the fluid \\(\\alpha\\) are initially associated with either target (\\(\\alpha=t\\)) or projectile (\\(\\alpha=p\\)) nucleons. Later on - while heated up - these fluids contain all hadronic and quark-gluon species, depending on the model used for describing the fluids. The twelve independent quantities (the baryon densities \\(n_{\\alpha}\\), 4-velocities \\(u^{\\mu}_{\\alpha}\\) normalized as \\(u_{\\alpha\\mu}u^{\\mu}_{\\alpha}=1\\), as well as the temperatures and pressures of the fluids) are obtained by solving the following set of equations of two-fluid hydrodynamics [8] \\[\\partial_{\\mu}J^{\\mu}_{\\alpha} = 0\\ , \\tag{4}\\] \\[\\partial_{\\mu}T^{\\mu\\nu}_{\\alpha} = F^{\\nu}_{\\alpha}\\ , \\tag{5}\\] where the coupling term \\[F^{\\nu}_{\\alpha}=n^{s}_{p}n^{s}_{t}\\left\\langle V_{rel}\\int d\\sigma_{NN\\to NX}(s)\\;(p-p_{\\alpha})^{\\nu}\\right\\rangle \\tag{6}\\] characterizes friction between the counter-streaming fluids. Here, \\(n^{s}_{\\alpha}\\) and \\((p-p_{\\alpha})\\) denote respectively the scalar density of the fluid and the 4-momentum transfer gained by a particle of the fluid \\(\\alpha\\) after collision with a particle of the counter-streaming fluid.
The cross sections \\(d\\sigma_{NN\\to NX}\\) take into account all elastic and inelastic interactions between the constituents of the different fluids at the invariant collision energy \\(s^{1/2}\\), with the local relative velocity \\(V_{rel}=[s(s-4m_{N}^{2})]^{1/2}/2m_{N}^{2}\\). The average in (6) is taken over all particles in the two fluids, which are assumed to be in local equilibrium intrinsically [8]. The set of Eqs. (4) and (5) is closed by an EoS, which is naturally the same for both colliding fluids. The friction term \\(F^{\\nu}_{\\alpha}\\) in Eq. (5) originates from both elastic and inelastic \\(NN\\) collisions. The latter give rise to a direct emission of mesons, in addition to the thermal mesons in the fluids. In the present version, direct emission is included only for pions, via the additional equations \\[\\partial_{\\nu}J^{\\nu}_{\\pi} = n^{s}_{p}n^{s}_{t}\\left\\langle V_{rel}\\int d\\sigma_{NN\\to\\pi X}\\right\\rangle\\ , \\tag{7}\\] \\[\\partial_{\\mu}T^{\\mu\\nu}_{\\pi} = n^{s}_{p}n^{s}_{t}\\left\\langle V_{rel}\\int d\\sigma_{NN\\to\\pi X}\\;p^{\\nu}_{\\pi}\\right\\rangle\\ , \\tag{8}\\] where \\(p_{\\pi}\\) is the 4-momentum of an emitted direct pion. These equations together with (5) provide the total energy-momentum conservation \\[\\partial_{\\mu}(T^{\\mu\\nu}_{\\pi}+T^{\\mu\\nu}_{p}+T^{\\mu\\nu}_{t})=0\\ . \\tag{9}\\] It is assumed [8] that in the subsequent evolution these direct pions interact neither with the fluids nor with each other. This is a reasonable assumption at relativistic energies, simulating a long formation time of these direct pions. At moderate energies, where the latter argument does not hold in general, the number of direct pions is negligible compared to the number of thermal pions. For the calculation of the friction force (6), approximations of \\(N\\)-\\(N\\) cross-sections are used. It was found [15] that the part of the friction term which is related to the transport cross-section may be well parametrized by an effective deceleration length \\(\\lambda_{\\rm eff}\\) with a constant value \\(\\lambda_{\\rm eff}\\approx 5\\) fm. However, there are reasons to consider \\(\\lambda_{\\rm eff}\\) as a phenomenological parameter, as pointed out in [8]. First, the value of \\(\\lambda_{\\rm eff}\\) is highly sensitive to the precise form of the parameterization of the free cross-sections which, in addition, may be essentially modified by in-medium effects. Furthermore, the model neglects the interactions of direct pions both with each other and with baryons, as well as the direct emission of other mesons, which are produced quite abundantly at SPS energies. Due to all these effects the stopping power at SPS energies is somewhat underestimated [8]. This shortcoming of the model is cured by an appropriate choice of the \\(\\lambda_{\\rm eff}\\) value as \\[\\lambda_{\\rm eff}=a\\;\\exp(-b\\sqrt{s})\\] with \\(a=6.6\\) fm and \\(b=0.106\\) GeV\\({}^{-1}\\) adjusted to the rapidity distributions of nucleons and pions in central Au+Au collisions at AGS and SPS energies. Following the original paper [8], it is assumed that a fluid element decouples from the hydrodynamic regime when its baryon density \\(n_{B}\\) and the densities in the eight surrounding cells become smaller than a fixed value \\(n_{f}\\). A value \\(n_{f}=0.8n_{0}\\) was used for this local freeze-out density, which corresponds to an actual density of the frozen-out fluid element of about \\(0.5n_{0}\\) to \\(0.7n_{0}\\).
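For orientation, the quoted parametrization of \\(\\lambda_{\\rm eff}\\) is easy to evaluate; a minimal sketch follows. The conversion from fixed-target kinetic energy per nucleon to \\(\\sqrt{s_{NN}}\\) is standard two-body kinematics and is our addition, not taken from the paper.

```python
import math

a, b = 6.6, 0.106          # fm, GeV^-1, as quoted in the text
m_N = 0.938                # GeV, nucleon mass

def sqrt_s_NN(E_kin):
    # fixed target: s = 4 m_N^2 + 2 m_N E_kin (E_kin per nucleon, GeV)
    return math.sqrt(4 * m_N**2 + 2 * m_N * E_kin)

for E_kin in (10.7, 40.0, 158.0):   # AGS, lower SPS, top SPS (A GeV)
    s12 = sqrt_s_NN(E_kin)
    lam = a * math.exp(-b * s12)    # lambda_eff = a exp(-b sqrt(s))
    print(f"E_lab = {E_kin:6.1f} A GeV: sqrt(s_NN) = {s12:5.2f} GeV, "
          f"lambda_eff = {lam:4.2f} fm")
```

The decrease of \\(\\lambda_{\\rm eff}\\) with beam energy (from about 4 fm at AGS to about 1 fm at top SPS energy) corresponds to the stronger effective friction needed to reproduce the measured rapidity distributions.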
## IV Collective flow from heavy-ion collisions

For central nucleus-nucleus collisions only the isotropic transverse expansion, or transverse radial flow, develops, due to the azimuthal symmetry of the system. The presence of the reaction plane for non-central collisions destroys this symmetry and gives rise to various patterns of collective motion generated by the compressed and excited nuclear matter created during the collision. For example, the directed (or sideward) flow characterizes the deflection of emitted hadrons away from the beam axis within the reaction plane. In particular, one defines the differential directed flow by the mean in-plane component \\(\\langle p_{x}(y)\\rangle\\) of the transverse momentum at a given rapidity \\(y\\). This deflection is believed to be quite sensitive to the _elasticity_ or _softness_ of the EoS. The \\(\\langle p_{x}(y)\\rangle\\) distributions of baryons are shown in Fig. 3 for Au+Au collisions at \\(E_{lab}=10\\) A\\(\\cdot\\)GeV, calculated for different EoS at an impact parameter \\(b=3\\) fm. In general, the characteristic \\(S\\)-shape of the distribution is reproduced, demonstrating a definite anti-correlation between nucleons bounced off from the target and projectile regions. One should keep in mind that the protons bound in observed complex particles (_e.g._, in deuterons) are not excluded in our calculations. Therefore, all hydrodynamic results should be compared to the triangle points in Fig. 3, where nucleons from complex particles do contribute. The MP and interacting hadronic models1 give similar results, both falling within the error bars of these triangle points, though the flow in the MP model is slightly lower due to softening of the EoS near the crossover phase transition. This softening is stronger for the bag-model EoS.

Footnote 1: The interaction in the hadron model is taken into account in the same manner as in the hadronic sector of the MP model [9, 10].

Figure 3: Differential directed flow of nucleons in the reaction plane as a function of rapidity in semi-central (trigger transverse energy \\(E_{T}=(200-230)\\) GeV) Au + Au collisions at the energy 10 A\\(\\cdot\\)GeV. Three curves are calculated within relativistic two-fluid hydrodynamics for an impact parameter \\(b=3\\) fm and different EoS: the MP model (solid line), the interacting hadron gas model (dashed) and the two-phase bag model (dot-dashed). Circles are experimental points for identified protons, triangles correspond to a nucleon flow estimate based on the measurement of \\(E_{T}\\) and the number of charged particles \\(N_{C}\\) [16]. Experimental points marked by full symbols are measured directly, open ones are obtained by reflecting at the mid-rapidity point.

It is of interest to mention that the calculated value of \\(F_{y}\\) for the baryon flow becomes negative (antiflow) for beam energies \\(\\gtrsim 100\\) A\\(\\cdot\\)GeV, while the experiment [22] gives a small but positive value even at 158 A\\(\\cdot\\)GeV. The reason for this discrepancy is a wiggle in the \\(\\langle p_{x}(y)\\rangle\\) distribution, arising in the hydrodynamic results within a narrow mid-rapidity interval \\(|\\delta y|\\lesssim 1\\) due to a peculiar interplay between the transverse radial and directed flows. The possibility of such an effect was noticed in [23] some time ago and later also observed in the UrQMD transport calculations [24].
However, actual measurements have been taken at larger rapidities and then extrapolated into this unmeasured region [22]. Therefore, more accurate data in the mid-rapidity region are necessary to clarify this behavior. The directed flow can be characterized by another quantity which is less sensitive to possible rapidity fluctuations of the in-plane momentum. Such a quantity is the average directed flow, which is defined by \\[\\langle P_{x}\\rangle=\\frac{\\int dp_{x}dp_{y}dy\\ p_{x}\\ \\left(E\\frac{d^{3}N}{dp^{3}}\\right)}{\\int dp_{x}dp_{y}dy\\ \\left(E\\frac{d^{3}N}{dp^{3}}\\right)}\\ , \\tag{11}\\] where the integration in the c.m. system runs over the rapidity region \\([0,y_{cm}]\\). The calculated excitation functions for the average directed flow of baryons within different models are shown in Fig. 5. Conventional (one-fluid) hydrodynamics for pure hadronic matter [6] results in a very large directed flow due to the inherent instantaneous stopping of the colliding matter.

Figure 4: Excitation function of the slope parameter \\(F_{y}\\) for baryons from Au + Au collisions within two-fluid hydrodynamics for the MP EoS (upper panel) and within different transport simulations (lower panel). Open symbols are experimental points for identified protons (see the data collection in [2, 17, 18]); filled circles, triangles and squares correspond to the flow parameter measured for intermediate mass fragments [17] and for light particles \\(p,d,\\alpha\\) [19, 20]. The results of transport calculations for three different codes are given by the thin solid (RQMD), dashed (ARC) and dot-dashed (ART) lines (cited according to [18]). The solid line (RBUU) is taken from [21].

Figure 5: The excitation function of the average directed flow for baryons from Au + Au collisions. Two-fluid hydrodynamics with the MP EoS at the impact parameter 3 fm is compared with the corresponding results of one-fluid [6] (upper panel) and three-fluid [25] (lower panel) hydrodynamics with the bag-model EoS. One-fluid calculations both with and without the phase transition (PT) are displayed.

This instantaneous stopping is unrealistic at high beam energies. If the deconfinement phase transition, based on the bag-model EoS [6], is included in this model, the excitation function of \\(\\left\\langle P_{x}\\right\\rangle\\) exhibits a deep minimum near \\(E_{lab}\\approx 6\\) A\\(\\cdot\\)GeV, which manifests the softest-point effect of the bag-model EoS depicted in the right panel of Fig. 2. The result of two-fluid hydrodynamics with the MP EoS noticeably differs from the one-fluid calculations. After a maximum around 1 A\\(\\cdot\\)GeV, the average directed flow decreases slowly and smoothly. This difference is caused by two reasons. First, as follows from Fig. 2, the softest point of the MP EoS is washed out for \\(n_{B}\\gtrsim 0.4\\,n_{0}\\). The second reason is dynamical: the finite stopping power and direct pion emission change the evolution pattern. The latter point is confirmed by comparison to three-fluid calculations with the bag EoS [25], plotted in the lower panel of Fig. 5. The third, pionic fluid in this model is assumed to interact only with itself, neglecting the interaction with the baryonic fluids. Therefore, with regard to the baryonic component, this three-fluid hydrodynamics [25, 26] is completely equivalent to our two-fluid model, and the main difference is due to the different EoS.
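As an aside, the two directed-flow observables used here are straightforward to extract from any event sample; below is a minimal sketch on a synthetic particle list (our own toy numbers). Taking \\(F_{y}\\) as the mid-rapidity slope of \\(\\langle p_{x}(y)\\rangle\\) is our reading of "slope parameter"; the paper's exact defining equation is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000
y = rng.normal(0.0, 0.8, N)                               # c.m. rapidity
px = 0.06 * np.tanh(2.0 * y) + rng.normal(0.0, 0.35, N)   # GeV/c, toy S-shape

# differential directed flow <p_x(y)> in rapidity bins
bins = np.linspace(-1.5, 1.5, 16)
idx = np.digitize(y, bins)
y_mid = 0.5 * (bins[1:] + bins[:-1])
px_mean = np.array([px[idx == i].mean() for i in range(1, len(bins))])

# slope F_y = d<p_x>/dy at mid-rapidity (linear fit over |y| < 0.5)
mask = np.abs(y_mid) < 0.5
F_y = np.polyfit(y_mid[mask], px_mean[mask], 1)[0]

# average directed flow, Eq. (11): <p_x> over y in [0, y_cm] (toy y_cm = 2)
P_x = px[(y > 0) & (y < 2.0)].mean()

print(f"F_y   = {F_y * 1e3:.0f} MeV/c per unit rapidity")
print(f"<P_x> = {P_x * 1e3:.0f} MeV/c")
```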
As seen in Fig. 5, the minimum of the directed flow excitation function, predicted by the one-fluid hydrodynamics with the bag EoS, survives in the three-fluid (nonunified) regime, but its value decreases and its position shifts to higher energies. If one applies the _unification procedure_ of [25], which favors fusion of the two fluids into a single one and thus makes the stopping larger, three-fluid hydrodynamics practically reproduces the one-fluid result and predicts, in addition, a bump at \\(E_{lab}\\approx 40\\) A\\(\\cdot\\)GeV.

## V Conclusions

We have studied relativistic nuclear collisions within 3D two-fluid hydrodynamics combined with different EoS, including that of the statistical mixed-phase model of the deconfinement phase transition developed in [9, 10]. It has been shown that the directed flow excitation functions \\(F_{y}\\) and \\(\\left\\langle P_{x}\\right\\rangle\\) for baryons are sensitive to the EoS, but this sensitivity is significantly masked by the nonequilibrium dynamics of nuclear collisions. Nevertheless, the results indicate that the widely used two-phase EoS, based on the bag model [5, 6] and giving rise to a first-order phase transition, seems to be inappropriate. The neglect of interactions near the deconfinement temperature results in an unrealistically strong softest-point effect within this two-phase EoS. In fact, its prediction of a minimum in \\(\\left\\langle P_{x}\\right\\rangle(E_{lab})\\) near \\(E_{lab}\\approx 6\\) A\\(\\cdot\\)GeV has not been confirmed experimentally [7]. However, accurate experimental investigations of the differential directed flow and flow excitation functions in the energy region between AGS and SPS are still highly desirable, not only for searching for a shifted minimum of \\(\\left\\langle P_{x}\\right\\rangle(E_{lab})\\), but also for clarifying the physics of a possible negative slope (antiflow) of the baryonic directed flow \\(F_{y}\\). This antiflow is particularly sensitive to the EoS. While for the EoS in the MP model the antiflow is predicted at incident energies only above 100 A\\(\\cdot\\)GeV, it occurs already at 8 A\\(\\cdot\\)GeV when the bag EoS is used [25]. In this respect, the dramatic _cracked nut_ phenomenon proposed recently as a hydrodynamic signature of the QCD phase transition at RHIC and LHC energies [27, 28] looks questionable. The authors argue that the softest point in the EoS may lead to the development of two shells at the edge of the almond-like initial fireball, which are then cracked by internal pressure and separated, resulting in a specific flow pattern. However, this speculation was based on the bag-model EoS. The application of the QCD-consistent EoS of the MP model to this problem would be interesting. The directed flow is the first coefficient in the Fourier decomposition of the azimuthal momentum distribution of particles [29]. The second coefficient, the elliptic flow, is expected to be more sensitive to the EoS, and some hints of the phase transition have been indicated by an analysis of the measured excitation function for the elliptic flow (see review articles [1, 2]). The study of the elliptic flow within two-fluid hydrodynamics for the mixed-phase model EoS is in progress.

###### Acknowledgements.

We are grateful to V. Russkikh for making his hydrodynamic code available to us. We thank P. Danielewicz, B. Friman and E. Kolomeitsev for useful discussions. Yu.B.I., E.G.N. and V.D.T. gratefully acknowledge the hospitality at the Theory Group of GSI, where this work has been done.
This work was supported in part by DFG (project 436 RUS 113/558/0) and RFBR (grant 00-02-04012). Yu.B.I. was partially supported by RFBR grant 00-15-96590.

## References

* [1] P. Braun-Munzinger and J. Stachel, Nucl. Phys. **A638**, 3c (1998).
* [2] P. Danielewicz, Nucl. Phys. **A661**, 82c (1999).
* [3] E. Shuryak and O.V. Zhirov, Phys. Lett. B **89**, 253 (1979).
* [4] L. van Hove, Z. Phys. C **21**, 93 (1983).
* [5] C.M. Hung and E.V. Shuryak, Phys. Rev. Lett. **75**, 4003 (1995); Phys. Rev. C **57**, 1891 (1998).
* [6] D.H. Rischke, Y. Pürsün, J.A. Maruhn, H. Stöcker, and W. Greiner, Heavy Ion Phys. **1**, 309 (1996); D.H. Rischke, Nucl. Phys. **A610**, 88c (1996).
* [7] H. Liu for the E895 Collaboration, Nucl. Phys. **A638**, 451c (1998).
* [8] I.N. Mishustin, V.N. Russkikh, and L.M. Satarov, Nucl. Phys. **A494**, 595 (1989); Yad. Fiz. **54**, 429 (1991) (translated as Sov. J. Nucl. Phys. **54**, 260 (1991)).
* [9] E.G. Nikonov, A.A. Shanenko, and V.D. Toneev, Heavy Ion Phys. **8**, 89 (1998); nucl-th/9802018.
* [10] V.D. Toneev, E.G. Nikonov, and A.A. Shanenko, in _Nuclear Matter in Different Phases and Transitions_, eds. J.-P. Blaizot, X. Campi, and M. Ploszajczak, Kluwer Academic Publishers (1999), pp. 309-320; Preprint GSI 98-30, Darmstadt, 1998.
* [11] See e.g. F. Karsch, Talk given at _Strong and Electroweak Matter '98_, December 1998, Copenhagen, hep-lat/9903031.
* [12] J. Zimanyi _et al._, Nucl. Phys. **A484**, 647 (1988).
* [13] K. Redlich and H. Satz, Phys. Rev. D **33**, 3747 (1986).
* [14] C. Bernard _et al._, Nucl. Phys. (Proc. Suppl.) **B47**, 499 (1996); _ibid._ 503.
* [15] L.M. Satarov, Sov. J. Nucl. Phys. **52**, 264 (1990).
* [16] J. Barrette _et al._, Phys. Rev. C **56**, 3254 (1997).
* [17] W. Reisdorf and H.G. Ritter, Ann. Rev. Nucl. Part. Sci. **47**, 663 (1997).
* [18] N.N. Ajitanand _et al._, Nucl. Phys. **A638**, 451c (1998).
* [19] N. Herrmann, FOPI Collaboration, Nucl. Phys. **A610**, 49c (1996).
* [20] J. Chance _et al._ (The EOS Collaboration), Phys. Rev. Lett. **78**, 2535 (1997).
* [21] P.K. Sahu, W. Cassing, U. Mosel, and A. Ohnishi, Nucl. Phys. **A672**, 376 (2000).
* [22] H. Appelshauser _et al._ (NA49 Collaboration), Phys. Rev. Lett. **80**, 4136 (1998).
* [23] S.A. Voloshin, Phys. Rev. C **55**, 1632 (1997).
* [24] S. Soff, S.A. Bass, M. Bleicher, H. Stöcker, and W. Greiner, nucl-th/9903061.
* [25] J. Brachmann _et al._, Phys. Rev. C **61**, 024909 (2000).
* [26] J. Brachmann _et al._, Nucl. Phys. **A619**, 391 (1997).
* [27] E.V. Shuryak, Phys. Rev. D **60**, 115014 (1999); D. Teaney and E.V. Shuryak, Phys. Rev. Lett. **83**, 4951 (1999).
* [28] P.F. Kolb, J. Sollfrank, and U. Heinz, Phys. Lett. **B459**, 667 (1999).
* [29] A.M. Poskanzer and S.A. Voloshin, Phys. Rev. C **58**, 1671 (1998).
The collective motion of nucleons from high-energy heavy-ion collisions is analyzed within a relativistic two-fluid model for different equations of state (EoS). As a function of beam energy, the theoretical slope parameter \\(F_{y}\\) of the differential directed flow is in good agreement with experimental data when calculated for the QCD-consistent EoS described by the statistical mixed-phase model. Within this model, which takes the deconfinement phase transition into account, the excitation function of the directed flow \\(\\langle P_{x}\\rangle\\) turns out to be a smooth function in the whole range from SIS to SPS energies. This function is close to that for a pure hadronic EoS and exhibits no minimum of the kind predicted earlier for a two-phase bag-model EoS. Attention is also called to a possible formation of nucleon antiflow (\\(F_{y}<0\\)) at energies \\(\\gtrsim 100\\) A\\(\\cdot\\)GeV. PACS numbers: 24.85.+p, 12.38.Aw, 12.38.Mh, 21.65.+f, 64.60.-i, 27.75.+r
# Flow at the SPS and RHIC as a Quark-Gluon Plasma Signature

D. Teaney\\({}^{1}\\), J. Lauret\\({}^{2}\\), E.V. Shuryak\\({}^{1}\\) \\({}^{1}\\) Department of Physics and Astronomy, \\({}^{2}\\) Department of Chemistry, State University of New York at Stony Brook, NY 11794-3800

1. By colliding heavy nuclei at the SPS and RHIC accelerator facilities, physicists hope to excite hadronic matter into a new phase consisting of deconfined quarks and gluons - the Quark-Gluon Plasma (QGP) [2]. After the collision, the produced particles move collectively, or _flow_, and this flow may quantify the effective Equation of State (EoS) of the matter. In central PbPb collisions at the SPS, a strong radial flow is observed [3]: the matter develops a collective transverse velocity approaching (1/2)c. In non-central collisions, both a radial and an _elliptic_ flow are observed [4, 5, 6]. Since in non-central collisions the initial nucleus-nucleus overlap region has an elliptic shape, the initial pressure gradient is larger along the impact parameter and the matter moves preferentially in this direction [7]. The phase transition to the QGP influences both the radial and elliptic flows. QCD lattice simulations show an approximately first-order phase transition [8]. Over a wide range of energy densities, \\(e=0.5-1.4\\,GeV/fm^{3}\\), the temperature and pressure are nearly constant. Over this range, then, the ratio of pressure to energy density, \\(p/e\\), decreases and reaches a minimum at a particular energy density known as the _softest point_, \\(e_{sp}\\approx 1.4\\,GeV/fm^{3}\\) [9]. When the initial energy density is close to \\(e_{sp}\\), the small pressure (relative to \\(e\\)) cannot effectively accelerate the matter. However, when the initial energy density is well above \\(e_{sp}\\), \\(p/e\\) approaches 1/3, and the larger pressure drives collective motion [9, 10]. At a time of \\(\\sim 1\\,fm/c\\), the energy densities at the SPS (\\(\\sqrt{s}_{NN}=17\\,GeV\\)) and RHIC (\\(\\sqrt{s}_{NN}=130\\,GeV\\)) are very approximately 4 and \\(7\\,GeV/fm^{3}\\), respectively [11, 12]. Based on these experimental estimates, the hard QGP phase is expected to live significantly longer at RHIC than at the SPS. The final flows of the produced particles should reflect this difference. In this paper we pose the question: Can both the radial and elliptic flow at the SPS and RHIC be described by a single effective EoS? Since the various hadron species have different elastic cross sections, they freeze out (or decouple) from the hot fireball at different times [13]. Because flow builds up over time, it is essential to model this differential freezeout. It was ignored in previous hydrodynamic simulations of non-central heavy-ion collisions, and elliptic flow was over-predicted by a factor of two [14, 15].

2. The Hydro to Hadrons (H2H) model will be described in detail elsewhere [16]. Other authors have previously constructed a similar model for central collisions [17]. The model evolves the QGP and mixed phases as a relativistic fluid, but switches to a hadronic cascade (RQMDv2.4 [1]) at the beginning of the hadronic phase to model differential freezeout. The computer code consists of three distinct components. Assuming Bjorken scaling, the first component solves the equations of relativistic hydrodynamics in the transverse plane [7] and constructs a switching surface at a temperature \\(T_{switch}=160\\,MeV\\).
The second component generates hadrons on the switching surface using the Cooper-Frye formula [18], with a theta function rejecting backward-going particles [19, 20]. Finally, the third component (RQMD) sequentially re-scatters the generated hadrons until freezeout. For the hydrodynamic evolution, a family of EoSs was constructed with an adjustable latent heat (LH).

Figure 1: The pressure versus the energy density (\\(\\epsilon\\)) for different EoSs (see text).

EoSs with latent heats \\(0.4\\,GeV/fm^{3}\\), \\(0.8\\,GeV/fm^{3}\\), etc. are labeled as LH4, LH8, etc. (see Fig. 1). LH\\(\\infty\\) is considered as a limiting case, mimicking non-equilibrium phenomena [22]. The hadron phase exists up to a critical temperature of \\(T_{c}=165\\,MeV\\) and consists of an ideal gas mixture of the meson pseudoscalar and vector nonets and the baryon octet and decuplet. The hadron phase is followed by a mixed phase with a specified LH, which is finally followed by a QGP phase with \\(C_{s}^{2}=1/3\\). In addition, a Resonance Gas (RG) EoS was constructed with a constant speed of sound above the hadron phase.

3. _Radial flow_ is quantified experimentally by slope parameters, \\(T_{slope}\\); the momentum spectrum of each particle is fit to the form \\(dN/dM_{T}^{2}\\,dy|_{y=0}=C\\,e^{-M_{T}/T_{slope}}\\), where \\(M_{T}^{2}=P_{T}^{2}+m^{2}\\). \\(T_{slope}\\) incorporates random thermal motion and the collective transverse velocity. In Fig. 2, the pion and proton slope parameters are plotted as functions of the total charged particle multiplicity in the collision. Look first at the leftmost points at SPS multiplicities and compare the model and experimental slopes: the proton slope data favor a relatively hard EoS - LH8 or harder. A direct comparison of the model to published spectra [23] supports this claim [16]. A RG EoS can also reproduce the proton flow. A similar analysis of elliptic flow (shown and quantified below) favors a relatively soft EoS - LH8 or softer. With some caveats, LH8 represents a happy middle which can reproduce both the radial and elliptic flow at the SPS. Look now at the energy/multiplicity dependence of the slopes. For all EoSs, \\(T_{slope}\\) increases with the collision energy [10, 17]. For a soft EoS (e.g. LH\\(\\infty\\)) the increase is small, and for a hard EoS (e.g. LH8) the increase is large. At RHIC multiplicities, the difference between the slope parameters is large and easily experimentally observable.

4. _Elliptic flow_ is quantified experimentally by the elliptic flow parameter, \\(v_{2}=\\langle\\cos(2\\Phi)\\rangle\\); here \\(\\Phi\\) is the angle around the beam measured relative to the impact parameter and \\(\\langle\\,\\rangle\\) denotes an average over the single particle distribution, \\(\\frac{dN}{dP_{T}d\\Phi}\\). \\(v_{2}(P_{T})\\) is found by holding \\(P_{T}\\) constant while averaging \\(\\cos(2\\Phi)\\) over \\(\\frac{dN}{dP_{T}d\\Phi}\\). \\(v_{2}\\) measures the response of the fireball to the spatial deformation of the overlap region, which is usually quantified in a Glauber model [24] by the eccentricity \\(\\epsilon=\\langle\\langle y^{2}-x^{2}\\rangle\\rangle/\\langle\\langle x^{2}+y^{2}\\rangle\\rangle\\). Since the response (\\(v_{2}\\)) is proportional to the driving force (\\(\\epsilon\\)), the ratio \\(v_{2}/\\epsilon\\) is used to compare different impact parameters and nuclei [25, 26]. In Fig. 3(a), the number elliptic flow \\((v_{2})\\) is plotted as a function of charged particle multiplicity at an impact parameter of 6 fm.
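Since \\(v_{2}\\) and \\(v_{2}(P_{T})\\) are defined above purely as averages over the single-particle distribution, a minimal estimator is easy to sketch. The snippet below (our own, not from the paper) exercises it on a synthetic sample with the reaction plane taken as known; experimentally the plane must be reconstructed, e.g. along the lines of [29].

```python
import numpy as np

rng = np.random.default_rng(7)
N = 500_000
pt = rng.exponential(0.45, N)                 # GeV/c, toy spectrum
v2_true = np.minimum(0.1 * pt / 0.5, 0.2)     # toy v2(P_T): rises, saturates

# sample Phi from dN/dPhi ~ 1 + 2 v2 cos(2 Phi) by acceptance-rejection
phi = rng.uniform(0, 2 * np.pi, N)
keep = rng.uniform(0, 1.4, N) < 1 + 2 * v2_true * np.cos(2 * phi)
pt, phi = pt[keep], phi[keep]

print(f"integrated v2 = {np.cos(2 * phi).mean():.4f}")
for lo, hi in [(0.0, 0.25), (0.25, 0.5), (0.5, 1.0), (1.0, 2.0)]:
    m = (pt >= lo) & (pt < hi)                # v2(P_T): fix P_T, average
    print(f"v2({lo:.2f} < P_T < {hi:.2f}) = {np.cos(2 * phi[m]).mean():.3f}")
```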
Before studying the energy dependence, look at the magnitude of the elliptic flow at the SPS. For LH8, the stars show the pion \\(v_{2}\\) when the matter is evolved as a fluid until a decoupling temperature of \\(T_{f}=120\\,MeV\\); they illustrate the excessive elliptic flow typical of pure hydrodynamics. Once a cascade is included, LH8 (the squares) is only \\(\\approx 20\\%\\) above the data - a substantial improvement. Typically in hydrodynamic calculations, the freezeout temperature \\(T_{f}\\) is adjusted to fit the proton \\(P_{T}\\) spectrum. However, protons are driven by a pion "wind" and decouple from the fireball \\(5\\,fm/c\\) after the pions on average. This pion wind accounts for the strong proton flow at the SPS and is not described by ideal hydrodynamics [17, 16]. In order to match the observed proton flow, hydrodynamic calculations must decouple at low freezeout temperatures, \\(T_{frz}\\approx 120\\,MeV\\). This low temperature has two consequences for elliptic flow: first, the reduction of elliptic flow due to resonance decays is small, \\(\\approx 15\\%\\), compared to \\(\\approx 30\\%\\) in the H2H model. Second, compared to a cascade, the hydrodynamics generates twice as much elliptic flow during the late, cool hadronic stages of the evolution. By including the pion "wind", and more generally by decoupling differentially, we can simultaneously describe the radial and elliptic flow data at the SPS.

The energy dependence of \\(v_{2}\\) is the central issue. As seen in Fig. 3, the H2H model predicts an increase in elliptic flow by a factor \\(\\approx 1.4\\) and is in reasonable agreement with SPS and RHIC flow data. This result was presented prior to the publication of RHIC data [20]. In contrast, UrQMD, a hadronic cascade based on string dynamics, predicts a decrease by a factor of \\(\\approx 2\\) [27]. This is because the UrQMD string model has a supersoft EoS at high energies [28]. For pure hydrodynamics, as illustrated by the stars, \\(v_{2}\\) is approximately constant [14] (but see [15]). For HIJING [29], a model which considers only the initial parton collisions, \\(v_{2}\\) is \\(\\approx 0\\) [30]. The first RHIC data clearly contradict these models. The increase in \\(v_{2}\\) is now used to constrain the EoS of the excited matter. The QCD phase diagram has two distinguishing features: it is soft at low energy densities and subsequently hard at high energies.

Figure 2: The transverse mass slope(\\(T_{slope}\\)) as a function of the total charged particle multiplicity in PbPb collisions at an impact parameter of b=6 fm (see also [10, 17]). For consistency with the elliptic flow study in Fig. 3, we show b=6 fm although the NA49 data points [21] are for the 5% most central events, or b\\(<\\)3.5 fm. For all EoSs at the SPS, the proton slope parameters at b=6 fm are \\(\\approx 7\\,MeV\\) smaller than at b=0 fm, as for the b=0 LH8 curve. The difference is negligible for \\(\\pi^{-}\\).

An RG EoS (the open squares) has no softness and the elliptic flow is clearly too strong both at the SPS and RHIC. The entire family of EoSs, LH8 through LH\\(\\infty\\), reproduces the elliptic flow data in both energy regimes. Counter-intuitively, as the latent heat is increased, \\(v_{2}\\) first decreases and then increases. In the final count, LH8 and LH\\(\\infty\\) have roughly the same \\(v_{2}\\). However, they develop the \\(v_{2}\\) in different ways. For LH8, the EoS shifts from hard to soft and the early pressure starts an early elliptic expansion.
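To make the definition \\(v_{2}=\\langle\\cos(2\\Phi)\\rangle\\) of point 4 concrete, the following sketch samples particle angles from \\(dN/d\\Phi\\propto 1+2v_{2}\\cos(2\\Phi)\\) and recovers \\(v_{2}\\) from the event average; the input value 0.05 is only an illustrative magnitude, not a fitted one:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_phi(v2, n):
    """Rejection-sample azimuthal angles from dN/dPhi proportional to 1 + 2 v2 cos(2 Phi)."""
    out = np.empty(0)
    while out.size < n:
        phi = rng.uniform(-np.pi, np.pi, n)
        u = rng.uniform(0.0, 1.0 + 2.0 * abs(v2), n)          # envelope = distribution maximum
        out = np.concatenate([out, phi[u < 1.0 + 2.0 * v2 * np.cos(2.0 * phi)]])
    return out[:n]

v2_true = 0.05                                                # illustrative magnitude only
phi = sample_phi(v2_true, 200_000)
print(f"v2 = <cos 2 Phi> = {np.cos(2.0 * phi).mean():.4f}")   # ~0.05 up to statistics
```

With this normalization the event average \\(\\langle\\cos(2\\Phi)\\rangle\\) equals \\(v_{2}\\) exactly, which is why the factor 2 appears in the distribution.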
For LH\\(\\infty\\), the EoS is just soft and the elliptic expansion stalls. However, because the expansion is stalled, the LH\\(\\infty\\) collision lifetime (\\(\\approx 13\\,fm/c\\) at RHIC) is significantly longer than the LH8 lifetime (\\(\\approx 9\\,fm/c\\) at RHIC) [9]. Over the long LH\\(\\infty\\) lifetime, \\(v_{2}^{LH\\infty}\\) steadily grows and is finally comparable to \\(v_{2}^{LH8}\\). As the latent heat is increased from LH8 to LH16, the EoS becomes softer and \\(v_{2}\\) at first decreases. However, as the EoS is made softer still, the lifetime increases and \\(v_{2}\\) rises again.

5. _Impact Parameter Dependence._ In Fig. 4, \\(v_{2}\\) for LH8 as a function of the number of participants (\\(N_{p}\\)) is compared to data. Different EoSs show a similar participant (or b) dependence. The agreement is good at RHIC where the multiplicity is high. For ideal hydrodynamics, \\(v_{2}\\propto\\epsilon\\propto(N_{p}^{max}-N_{p})\\) [7]. In the low density limit, since the response is proportional to the number of collisions, \\(v_{2}\\propto\\epsilon\\frac{dN}{dy}\\propto(N_{p}^{max}-N_{p})N_{p}\\). Therefore, \\(v_{2}\\) has a different \\(N_{p}\\) (or b) dependence in the hydrodynamic and low density limits [25, 26]; the two scalings are contrasted in the sketch below. At RHIC, except in very peripheral collisions, the \\(N_{p}\\) dependence is clearly linear and strongly supports the hydrodynamic limit [6]. At the SPS, the \\(N_{p}\\) dependence may not be clearly linear, but it also does not follow the low density limit. Two-pion correlations may change the data analysis [32], reducing \\(v_{2}\\) in the periphery and improving the low density agreement.

Finally, in Fig. 5, \\(v_{2}\\) is studied both as a function of transverse momentum and impact parameter. For both LH8 and LH\\(\\infty\\), the calculation produces too much elliptic flow in peripheral collisions (45-85%), and too \\(little\\) elliptic flow in the most central collisions (0-11%). The \\(P_{T}\\) dependence of \\(v_{2}\\) also clarifies the difference between LH8 and LH\\(\\infty\\): LH\\(\\infty\\), a super soft EoS, generates elliptic flow only at low momentum while LH8, a hard EoS, generates elliptic flow at high momentum.

6. _Summary and Discussion._ By incorporating differential freezeout, the Hydro to Hadrons(H2H) model simultaneously reproduces the radial and elliptic flow at the SPS and RHIC. At the SPS, the radial flow demands an EoS with a latent heat \\(LH\\gtrsim 0.8\\,GeV/fm^{3}\\), while the elliptic flow demands an EoS with a latent heat \\(LH\\lesssim 0.8\\,GeV/fm^{3}\\). Further, in contrast to string and collision-less parton models, the increase in \\(v_{2}\\) is naturally explained using hydrodynamics. This challenges the prevailing view [6, 26] that the SPS is in the low density regime and that the increase in \\(v_{2}\\) represents a transition to the hydrodynamic regime.

Figure 4: \\(v_{2}\\) versus the number of participants (\\(N_{p}\\)) relative to the maximum. The model and the NA49 \\(v_{2}\\) values [5] at the SPS are for \\(\\pi^{-}\\). The NA49 data are mapped from b to participants using [31]. The model and the STAR \\(v_{2}\\) values [6] at RHIC are for charged particles. The model does not include weak decays. The number of charged particles is assumed proportional to \\(N_{p}\\).

Figure 3: (a) The number elliptic flow parameter \\(v_{2}\\) as a function of the charged particle multiplicity in PbPb collisions at an impact parameter of b=6 fm. At the SPS, the NA49 \\(v_{2}\\) data point is extrapolated to b=6 fm using Fig. 3 in [5]. At RHIC, the STAR \\(v_{2}\\) data point is extrapolated to \\(N_{ch}/N_{ch}^{max}\\) = 0.545 (b=6 fm in AuAu) using Fig. 3 in [6]. The comparison to data is a little unfair: for the model, \\(v_{2}\\) is calculated using all pions in PbPb collisions. For the NA49 data, \\(v_{2}\\) is measured using only \\(\\pi^{-}\\) in PbPb (a -3% correction to the model). For the STAR data, \\(v_{2}\\) is measured using charged hadrons in AuAu (a +5% correction to the model).
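Returning to the participant-number scalings of point 5, the hydrodynamic and low density limits differ only by a factor of \\(N_{p}\\), which changes the centrality shape qualitatively. A minimal sketch of the two shapes (the normalization and the use of \\(N_{p}/N_{p}^{max}\\) as the variable are illustrative choices):

```python
import numpy as np

x = np.linspace(0.05, 1.0, 20)        # N_p / N_p^max
v2_hydro   = 1.0 - x                  # hydrodynamic limit: v2 ~ (Np^max - Np)
v2_lowdens = (1.0 - x) * x            # low density limit:  v2 ~ (Np^max - Np) Np

# The hydrodynamic shape falls monotonically with Np, while the low-density
# shape vanishes in both the central and peripheral limits.
print("low-density peak at N_p/N_p^max =", x[np.argmax(v2_lowdens)])   # 0.5
```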
However, the increase in \\(v_{2}\\) does not uniquely signal the asymptotic QGP pressure. Indeed, at RHIC collision energies, a very soft EoS can have the same \\(v_{2}\\) as an EoS with a well developed QGP phase. This EoS is not academic, since softness can mimic non-equilibrium phenomena [22]. To reveal the underlying EoS and the burgeoning QGP pressure, the collision energy should be scanned from the SPS to RHIC. If the prevailing low density view of the SPS is correct, a transition in the b dependence of elliptic flow should be observed over the energy range [25, 26]. In addition, for different EoSs, \\(v_{2}\\) depends differently on collision energy and transverse momentum (Fig. 3 and Fig. 5). Taken with the radial flow (Fig. 2), this experimental information would help settle the EoS of hot hadronic matter.

**Acknowledgments.** The work is partly supported by the US DOE grant No. DE-FG02-88ER40388 and grant No. DE-FG02-87ER40331.

**Note Added**. After the submission of this work, the STAR and PHENIX collaborations reported proton and anti-proton spectra [33]. The preliminary spectra favor LH8-LH16 and disfavor LH\\(\\infty\\) [35].

## References

* [1] H. Sorge, Phys. Rev. **C 52**, 3291 (1995).
* [2] e.g., E.V. Shuryak, Phys. Rept. **61**, 71 (1980); L. McLerran, Rev. Mod. Phys. **58**, 1021 (1986).
* [3] e.g., R. Stock, in QM '99, Nucl. Phys. **A661**, 419c (1999).
* [4] e.g., E895 Collaboration, C. Pinkenburg _et al._, Phys. Rev. Lett. **83**, 1295 (1999); NA49 Collaboration, H. Appelshauser _et al._, Phys. Rev. Lett. **80**, 4136 (1998).
* [5] A.M. Poskanzer and S.A. Voloshin for the NA49 Collaboration, Nucl. Phys. **A661**, 341c (1999).
* [6] STAR Collaboration, K.H. Ackermann _et al._; nucl-ex/0009011.
* [7] J.-Y. Ollitrault, Phys. Rev. **D46**, 229 (1992).
* [8] e.g., M. Oevers, F. Karsch, E. Laermann, and P. Schmidt, Nucl. Phys. Proc. Suppl. **73**, 465 (1999).
* [9] C.M. Hung, E.V. Shuryak, Phys. Rev. Lett. **75**, 4003 (1995); D. H. Rischke and M. Gyulassy, Nucl. Phys. **A608**, 479 (1996).
* [10] M. Kataja _et al._, Phys. Rev. **D34**, 794 and 2755 (1986).
* [11] NA49 Collaboration, T. Alber _et al._, Phys. Rev. Lett. **75**, 3814 (1995).
* [12] PHOBOS Collaboration, B.B. Back _et al._, Phys. Rev. Lett. **85**, 3100 (2000); hep-ex/0007036.
* [13] H. Sorge, Phys. Rev. Lett. **81**, 5764 (1998).
* [14] P. Kolb, J. Sollfrank, U. Heinz, Phys. Lett. **B459**, 667 (1999).
* [15] P. Kolb, J. Sollfrank, U. Heinz, preprint hep-ph/0006129.
* [16] D. Teaney, J. Lauret, and E.V. Shuryak, in progress.
* [17] S. Bass and A. Dumitru, Phys. Rev. **C61**, 064909 (2000).
* [18] F. Cooper and G. Frye, Phys. Rev. **D10**, 186 (1974).
* [19] For discussion see Cs. Anderlik _et al._, Phys. Rev. **C59**, 388 (1999) and references therein.
* [20] E.V. Shuryak in QM '99, Nucl. Phys. **A661**, 119c (1999); D. Teaney _et al._, talk at RHIC2000, Park City, Utah, March 10-15 (2000); [http://theo08.nscl.msu.edu/RHIC2k/proceedings.htm](http://theo08.nscl.msu.edu/RHIC2k/proceedings.htm).
* [21] G. Roland for the NA49 Collaboration, Nucl. Phys. **A638**, 91c (1999).
* [22] H. Sorge, Phys. Lett. **B402**, 251 (1997).
* [23] NA49 Collaboration, H. Appelshauser _et al._, Phys. Rev. Lett. **82**, 2471 (1999).
* [24] P. Jacobs and G. Cooper, STAR SN402 (1999).
* [25] H. Heiselberg and A.-M. Levy, Phys. Rev. **C59**, 2716 (1999).
* [26] S.A. Voloshin and A.M. Poskanzer, Phys. Lett. **B474**, 27 (2000).
* [27] M. Bleicher and H. Stocker, preprint hep-ph/0006147.
* [28] M. Belacem _et al._, Phys. Rev. **C58**, 1727 (1998).
* [29] M. Gyulassy and X.N. Wang, Comp. Phys. Comm. **83**, 307 (1994); Phys. Rev. **D44**, 3501 (1991).
* [30] R.J.M. Snellings, A.M. Poskanzer, S.A. Voloshin, STAR Note SN0388 (1999), preprint nucl-ex/9904003.
* [31] G. Cooper for the NA49 Collaboration, Nucl. Phys. **A661**, 362c (1999).
* [32] P.M. Dinh, N. Borghini, and J.-Y. Ollitrault, Phys. Lett. **B477**, 51 (2000).
* [33] Quark Matter 2001, to be published, Stony Brook, NY, January 15-20 (2001); [http://www.rhic.bnl.gov/qm2001/program.html](http://www.rhic.bnl.gov/qm2001/program.html)
* [34] R. Snellings for the STAR Collaboration, at Quark Matter 2001 [33].
* [35] D. Teaney, at Quark Matter 2001 [33].

Figure 5: Elliptic flow of charged pions as a function of \\(P_{T}\\) and centrality for AuAu collisions at RHIC. The percentages shown, 0-11%, 11-45% and 45-85%, indicate the fraction of the total geometric cross section for the three centrality selections 0 fm\\(<\\)b\\(<\\)4.2 fm, 4.2 fm\\(<\\)b\\(<\\)8.4 fm and 8.4 fm\\(<\\)b\\(<\\)11.6 fm. The preliminary data points were presented in [33, 34]. The model curves were found by parameterizing the model data points and averaging over the specified impact parameter range with the geometric weight, 2\\(\\pi\\)_b_ _db_.
Radial and elliptic flow in non-central heavy ion collisions can constrain the effective Equation of State(EoS) of the excited nuclear matter. To this end, a model combining relativistic hydrodynamics and a hadronic transport code(RQMD [1]) is developed. For an EoS with a first order phase transition, the model reproduces both the radial and elliptic flow data at the SPS. With the EoS fixed from SPS data, we quantify predictions at RHIC where the Quark Gluon Plasma(QGP) pressure is expected to drive additional radial and elliptic flow. Currently, the strong elliptic flow observed in the first RHIC measurements does not conclusively signal this nascent QGP pressure. Additional measurements are suggested to pin down the EoS.
# Electro-optical Measurements of Ultrashort 45 MeV Electron Beam Bunches T. Tsang, V. Castillo, R. Larsen, D. M. Lazarus, D. Nikas, C. Ozben, Y. K. Semertzidis, and T. Srinivasan-Rao Brookhaven National Laboratory, Upton, NY 11973 L. Kowalski Montclair State University, Upper Montclair, NJ 07043

## 1 Introduction

With the advance of electron and particle accelerators, the particle bunch duration has dropped to the femtosecond time scale. Various techniques have been proposed to measure such ultrashort bunch lengths. One of the techniques relies on non-coherent transition radiation where visible photons are collected and measured with a streak camera.[1] Although such a technique can yield bunch length information, it is an invasive technique and the resolution is sensitive to the photon collection system. Recently, the development of the electro-optical probe based on the linear Pockels effect has revolutionized the noninvasive measurements of small electronic signal propagation on integrated circuits,[2, 3] dc and ac high voltages,[4, 5, 6] lightning detectors,[7] terahertz electromagnetic field imaging,[8] and electron beam measurements of long pulse[9] and short pulse duration.[10, 11, 12, 13, 14] EO sensors that use fibers for input and output coupling provide excellent electromagnetic isolation and large frequency response, limited essentially by the fibers and the velocity mismatch of the electrical and optical waves to picosecond or sub-picosecond time resolution. In this work, we show that the fast component of the EO modulation is due only to the transient electric field induced by the passage of an ultrashort relativistic electron bunch. No cavity mode [14] was observed. We examine the dependence of the EO modulation on charge and on beam position. Finally, we present a detection-limited temporal shape of the EO signal and then draw our conclusions regarding the electron bunch length. The optical probe is based on the principle of the linear electro-optical effect - the Pockels effect. When an electric field is applied to a birefringent crystal, the refractive index ellipsoid is modulated and an optical phase shift is introduced. To probe the phase shift, an optical beam polarized at \\(45^{\\rm o}\\) to the z-axis of the EO crystal is propagated along the y-axis of the crystal. This phase retardation is converted to an intensity modulation by a \\(\\frac{\\lambda}{4}\\) waveplate followed by an analyzer (crossed polarizer). The intensity of light \\(I(t)\\) transmitted through the analyzer can be described by[15] \\[I(t)=I_{o}[\\eta\\ +\\ \\sin^{2}(\\Gamma_{o}\\ +\\ \\Gamma_{b}\\ +\\ \\Gamma(t))], \\tag{1}\\] where \\(I_{o}\\) is the input light intensity, \\(\\eta\\) contains the scattering contribution of the EO crystal and the imperfection of the polarizer and other optics, which is typically much less than 1, \\(\\Gamma_{o}\\) contains the residual birefringence of the crystal, \\(\\Gamma_{b}\\) is the optical bias of the system which is set at \\(\\frac{\\pi}{4}\\), and \\(\\Gamma(t)\\) is the optical phase induced by the electric field imparted on the crystal. When \\(\\Gamma_{o}\\ +\\Gamma_{b}\\simeq\\frac{\\pi}{4}\\), Eq.(1) can be written as \\[\\frac{I(t)}{I_{o}}\\simeq(\\eta+\\frac{1}{2})+[\\frac{1}{2}{\\rm sin}(2\\Gamma(t))] \\tag{2}\\] where the first term is the unmodulated dc light level, which is approximately equal to half of the input light intensity, and the second term is the EO modulation. For a weak modulation, i.e.
\\(2\\Gamma(t)\\ll 1\\), the EO component can be written as \\[[\\frac{I(t)}{I_{o}}]_{\\rm EO}\\ \\simeq\\ \\Gamma(t). \\tag{3}\\] The normalized light output is approximately linear in the time-dependent optical phase. The transient optical phase shift is linearly proportional to the time-dependent field \\(E_{z}(t)\\) traversing the optical axis of the EO crystal and can be expressed as \\[\\Gamma(t)\\ =\\ \\frac{1}{2}({n_{e}}^{3}r_{33}-{n_{o}}^{3}r_{13})\\frac{2\\pi LE_{z}(t)}{\\lambda}, \\tag{4}\\] with \\(L\\) the effective length of the crystal, \\(n_{e}\\) and \\(n_{o}\\) the extraordinary and ordinary indices of refraction, \\(r_{33}\\) and \\(r_{13}\\) the electro-optical coefficients, and \\(E(t)\\) the transient electric field in vacuum, directed along the optic axis (z-axis), induced by the passage of the relativistic electron beam. This relationship holds when the duration of the electric field is greater than or equal to the time needed by the laser light to traverse the entire length \\(L\\) of the EO crystal. The electric field induced by a nonrelativistic electron beam is radially symmetric. However, a 45 MeV relativistic beam produces an anisotropically directed radial field orthogonal to the electron beam direction and along the z-axis of the EO crystal.[16] The transverse field strength \\(E_{z}\\) is given by \\[E_{z}(t)=\\frac{1}{4\\pi\\epsilon_{o}}\\frac{\\gamma\\ N_{e}\\ q\\ T(t)}{\\epsilon\\ r^{2}}, \\tag{5}\\] with \\(\\gamma\\) the relativistic Lorentz factor, \\(N_{e}\\) the number of electrons in the beam, \\(q\\) the electron charge, \\(T(t)\\) the temporal charge distribution, \\(\\epsilon_{o}\\) the permittivity of free space, \\(\\epsilon\\) the dielectric constant of the EO crystal in the z-axis direction, and \\(r\\) the radial distance of the electron beam from the axis of the optical beam. This electron beam field is present for a time [16] \\[\\Delta t=\\frac{r}{\\gamma\\upsilon}, \\tag{6}\\] with \\(\\upsilon\\) the electron beam velocity. In this experiment \\(\\Delta t\\) is approximately 100 fs, which is much shorter than the electron bunch length of \\(\\sim 10\\) ps. Thus, for an uncompressed electron beam, ignoring any nonlinear beam dynamics, the electron bunch length measurement is not distorted. Also, in writing Eq.(6) one approximates the longitudinal size of the electron beam to be negligible compared to the laser beam width. When the longitudinal size of the electron beam is larger than the laser beam width, the electron charge that influences the optical phase of the laser field is only (to first order) the fraction of charge over the laser beam width. The actual strength of the electron beam field is thus lowered due to this geometrical factor. The effective length \\(L\\) of the crystal is the distance light travels inside the crystal during the time \\(\\Delta t\\). When the electron velocity \\(\\upsilon\\) approaches \\(c\\), \\(L\\) is given by \\[L=\\Delta t\\times\\frac{c}{n}\\simeq\\frac{r}{\\gamma n}, \\tag{7}\\] with \\(n\\) the index of refraction of the crystal at the laser wavelength. Substituting Eq.(7), Eq.(5), and Eq.(4) into Eq.(3) gives \\[[\\frac{I(t)}{I_{o}}]_{\\rm EO}\\ \\simeq\\ ({n_{e}}^{3}r_{33}-{n_{o}}^{3}r_{13}) \\frac{N_{e}\\ q\\ T(t)}{4\\ \\lambda\\ n\\ \\epsilon_{o}\\ \\epsilon\\ r}. \\tag{8}\\] The optical phase is modulated only during the time the electron beam field is present.
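As a rough numerical check of Eq.(8) and Eq.(6), the sketch below evaluates the peak modulation and field duration for nominal LiNbO\\({}_{3}\\) constants near 1.3 \\(\\mu\\)m. The material parameters are assumed textbook values rather than values quoted in this paper, and the shape function \\(T(t)\\) is set to 1 at the peak:

```python
# Nominal LiNbO3 parameters near 1.3 um -- assumed textbook values, not from the paper.
n_o, n_e_idx = 2.22, 2.15            # ordinary / extraordinary refractive indices
r33, r13 = 30.8e-12, 8.6e-12         # electro-optic coefficients [m/V]
eps = 28.0                           # dielectric constant along z (assumed)
eps0 = 8.854e-12                     # vacuum permittivity [F/m]
lam = 1.3e-6                         # laser wavelength [m]
n = n_o                              # index used for the transit-length estimate

Q = 0.6e-9                           # maximal bunch charge quoted in the text [C]
r = 1.17e-3                          # a beam-to-laser distance used in the text [m]
gamma = 45.0 / 0.511                 # Lorentz factor of 45 MeV electrons

# Peak fractional modulation, Eq. (8), with T(t) ~ 1 at the peak
mod = (n_e_idx**3 * r33 - n_o**3 * r13) * Q / (4 * lam * n * eps0 * eps * r)
dt = r / (gamma * 3e8)               # field duration, Eq. (6), with v ~ c

print(f"peak EO modulation ~ {100 * mod:.1f} %")  # a few percent, as stated later in the text
print(f"field duration dt ~ {dt * 1e15:.0f} fs")  # tens of femtoseconds at this r
```

With these assumed constants the estimate lands at a few percent, in line with the modulation depth reported in the Conclusions.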
However, the duration of the EO signal depends on both the electron temporal charge distribution \\(T(t)\\) and the length of the crystal, since all the light will be influenced at the same time. Notice that Eq.(8) has no \\(\\gamma\\) dependence: it depends linearly on the electron charge \\(N_{e}q\\), and it has a \\(\\frac{1}{r}\\) dependence rather than \\(\\frac{1}{r^{2}}\\). Furthermore, the \\(\\frac{1}{\\epsilon}\\) dependence favors EO crystals with a small dielectric constant.

## 2 Experimental arrangement

A vacuum compatible EO modulator setup was constructed using discrete optical components mounted on an aluminum bar anchored to a standard \\(2\\frac{3}{4}\\) inch O.D. vacuum flange; see Fig. 1(a). The complete setup was designed to fit into a conventional 1.37 inch I.D. 6-way cross, and a 45 MeV electron beam passes through the center of this vacuum beam pipe. The light source was a fiber-coupled, diode-pumped, solid-state Nd:YAG laser (Coherent Laser Inc.), emitting 250 mW of CW optical power at a wavelength of 1.3 \\(\\mu\\)m. However, in most parts of the experiment, the light intensity was attenuated by a factor of 3 using an air-space-gap fiber-optic coupler to avoid the saturation of the photoreceivers and to reduce the possible thermal loading of the EO setup. Active noise suppression electronics was incorporated in this laser to remove the relaxation noise. Beyond 5 MHz rf frequency, the laser noise was \\(\\sim 1\\) dB above shot noise. The polarization purity of the light source had an extinction ratio of \\(>10^{4}\\) at the output end of the polarization-maintaining (PM) fiber. The light was then coupled to a vacuum sealed PM fiber collimator where the output polarization was rotated to \\(+45^{\\rm o}\\) to the azimuthal. The polarization purity dropped to \\(\\sim 10^{2}\\) after one fiber coupling. A \\(90^{\\rm o}\\)-keyed fiber-optic coupler was used to rotate the input polarization to the EO crystal from \\(+45^{\\rm o}\\) to \\(-45^{\\rm o}\\), as indicated in Fig. 2(b). The collimated 0.4 mm diameter light beam was sent to the bottom half of the LiNbO\\({}_{3}\\) EO crystal mounted on a ceramic holder that has a clearance hole of 6.35 mm for the electron beam; see Fig. 1(b). The size of the EO crystal was 6.5 x 2.2 x 1 mm; the optical z-axis (extraordinary axis) was aligned azimuthally and the x-axis (ordinary axis) was parallel to the propagation direction of the electron beam. Fluorescent material was placed around the \\(45^{\\rm o}\\) facet of the ceramic for guiding the electron beam through the EO crystal. A CCD camera viewed the fluorescence due to the electron beam from directly above the setup. A \\(45^{\\rm o}\\) pop-up flag with the same fluorescent material was located 23 cm downstream of the crystal for precise electron beam location and profile measurements. Each electron beam profile was recorded and overlaid in Fig. 2(a). Three beam profiles where the electron beam traversed the EO crystal did not show up clearly on the flag, and their beam positions were estimated from the position dependence of the dipole pitching magnet current. A representative beam profile shows that the electron beam cleared the top portion of the EO crystal. To linearize the modulation and balance the residual birefringence of the EO crystal, the \\(\\frac{\\lambda}{4}\\) waveplate was adjusted so that the EO system was optically biased at the quadrature point. Therefore, the resulting electric field-induced optical modulation constantly rode on a large dc light level.
However, only the transient component of the optical signal was detected by the optical receiver, with the corresponding dc level kept below saturation. An analyzer crossed at \\(-45^{\\rm o}\\) to the input polarization was positioned after the crystal. A vacuum sealed multimode (MM) fiber collimator collected the intensity light output after the analyzer. The light throughput of the complete EO setup was \\(\\sim 12\\%\\), with typically 5 mW of optical power received by the photoreceiver. The laser source was placed inside the concrete surrounded experimental hall near the EO setup to maintain the high quality of the PM light, but the output light was transmitted by a 40-meter long MM fiber to the optical receiver outside the experimental hall for detection and analysis. Light intensity output from the photoreceiver was sent to a digitizing oscilloscope; each signal trace was accumulated over 16 to 64 signal averages. During the course of the experiment, oscilloscopes with bandwidths of 1-GHz, 3-GHz, and 7-GHz were used in combination with either a 1-GHz or a 12-GHz photoreceiver. The electron beam source is the 45 MeV electron beam at the Brookhaven National Laboratory Accelerator Test Facility (ATF). A drawing of the ATF layout and its beam lines is shown in Fig. 3. A 5 MeV electron beam from a rf photocathode gun was injected into a linac to boost its energy to 45 MeV. The final beam contained up to 0.6 nC charge in a focused beam diameter of \\(\\sim\\) 0.5 mm in 10 ps duration at a repetition rate of 1.5 Hz. It was scanned vertically over a range of a few mm from the bottom of the EO crystal to the top of the opening by adjusting the driving current of a dipole pitching magnet. A stable trigger signal synchronized to the electron beam was obtained from a stripline detector upstream, also depicted in Fig. 3.

## 3 Results

The electron-beam-induced EO signal was confirmed by a few control experiments. (1) No photons with wavelengths other than the input laser were received by the photoreceiver. Such photons may originate from nonlinear optical processes as well as transition or Cerenkov radiation. (2) The signal vanished in the absence of the electron beam or the laser beam. (3) The signal polarity changed sign when the direction of the induced electrical field was reversed, or (4) when the input laser polarization was rotated by \\(90^{\\rm o}\\). The results of (3) are shown in Fig. 4(a), where the electron beam was steered above or below the EO crystal, inducing opposite electric fields at the path of the laser light and causing reversal of the signal polarity. Figure 4(b) also shows a similar polarity flip when the input polarization was changed from \\(+45^{\\rm o}\\) to \\(-45^{\\rm o}\\) by using a \\(90^{\\rm o}\\)-keyed fiber-optic coupler. We note that the polarization of the input light is maintained when it is coupled either to the fast-axis or the slow-axis of a PM fiber. The insets of Figs. 4(a) and 4(b), where I(t) in Eq.(1) is plotted, illustrate the simple intuitive explanation of these changes in EO signal polarities. When one operates the EO device on the positive slope of its response function, the polarity of the modulated signal follows the input. However, when the operation moves to the negative slope of its response function, which is equivalent to switching the input polarization from \\(+45^{\\rm o}\\) to \\(-45^{\\rm o}\\) as shown in Fig. 2(b), the polarity of the modulated signal becomes opposite to the input.
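The polarity-flip intuition of the insets follows directly from Eq.(1): biasing at \\(+\\pi/4\\) or \\(-\\pi/4\\) places the operating point on the positive or negative slope of the \\(\\sin^{2}\\) response. A minimal, idealized sketch (it assumes \\(\\eta=0\\) and \\(\\Gamma_{o}=0\\), which real optics do not satisfy exactly):

```python
import numpy as np

def transmission(gamma_t, bias=np.pi / 4, eta=0.0):
    """Eq. (1): normalized analyzer output for an induced phase gamma_t."""
    return eta + np.sin(bias + gamma_t) ** 2

# A small induced phase of either sign, read out at the two bias points
for g in (+0.05, -0.05):
    up = transmission(g, bias=+np.pi / 4)   # +45 deg launch: positive slope
    dn = transmission(g, bias=-np.pi / 4)   # -45 deg launch: negative slope
    print(f"Gamma = {g:+.2f}:  I/Io(+45) = {up:.3f}   I/Io(-45) = {dn:.3f}")
```

Both outputs sit near the quadrature level of 1/2, but the excursions have opposite signs for the two launch polarizations, which is exactly the behavior seen in Figs. 4(a) and 4(b).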
It is important to point out that all signal traces with negative (positive) polarity correspond to a light intensity drop (increase). The polarity reversal gives conclusive evidence of the signal being electro-optical in origin. The EO signal dependence on electron beam charge was also investigated. The electron beam charge was varied by adjusting the UV intensity irradiating the photocathode of the 5 MeV rf electron gun. The actual charge was measured by a Faraday cup before the linac and also by a stripline detector after the linac. Each horizontal error bar displayed in the inset is the difference between these two measuring devices, while each vertical error bar is the standard deviation of 6 sets of signal traces for each charge. The electron beam position was locked at -1.17 mm away from the laser beam path, and it clearly passed below the EO crystal unobstructed. Individual signal traces for 5 different charges are shown in Fig. 5, and a linear \\(\\chi\\)-square fit to the signal strength is shown in the inset. A linear dependence of the EO signal with charge was established. The EO signal dependence on electron beam position was also investigated. Figure 6(a) displays five signal traces when the electron beam was steered vertically toward but not traversing the EO crystal. The electron beam positions are indicated in Fig. 2(a), and the amplitudes are plotted against the corresponding distance from the center of the optical beam path in the inset. A \\(\\chi\\)-square fit of the data favors equally a \\(\\frac{1}{\\sqrt{r}}\\) or a \\(\\frac{1}{r+a}\\) dependence, where \\(a\\) is a constant equal to 1.75 mm. On the contrary, the same \\(\\chi\\)-square fit gives a much lower confidence level on a \\(\\frac{1}{r^{2}}\\) or a linear dependence. Therefore, we can conclude that the EO signal behaves very closely to, but not exactly as, predicted in Eq.(8). The discrepancy needs further investigation. When the electron beam was close to the EO crystal, at beam position -0.64 mm, a distinctive positive signal with a long \\(\\sim 100\\) ns decay time superimposed on the negative EO modulation was observed. This observation is an indication of the electron beam partially impinging on the EO crystal. A partially blocked electron beam profile observed in Fig. 2(a) also supports this argument. To examine this further, the electron beam was steered to impinge on the EO crystal and traverse the optical beam path completely. Figure 6(b) shows these signal traces, where the time and the amplitude of the signal have both been expanded. As the electron beam approached the optical beam path traversing the EO crystal, the strength of the positive signal increased and then became negative after passing the optical beam path. It is conceivable that the electron beam ionizes the LiNbO\\({}_{3}\\), creating electron-hole pairs. Since the mobility of ions is small compared to the electrons, a transient ion field remains which produces an EO signal opposite to that due to the electron beam field. Its decay time will be dictated by the electron-hole recombination time of the EO crystal.[17] Therefore, when the origin of the ion field is moved from below to above the laser beam path, that is from beam position -0.17 mm to +0.08 mm, the EO signal due to the ionization also changes polarity. Consequently it provides a unique method to locate the exact electron beam position with respect to the laser beam position.
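The model comparison behind Fig. 6(a) can be reproduced with a standard least-squares fit. The five (distance, amplitude) pairs below are hypothetical placeholders for the published points, used only to show the procedure:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical (distance [mm], amplitude) points standing in for the five traces
# of Fig. 6(a); the real values would be read off the published inset.
r = np.array([1.17, 1.6, 2.1, 2.7, 3.4])
A = 1.0 / (r + 1.75) + 0.005 * np.array([1, -1, 1, 0, -1])

models = [
    ("1/(r+a)",   lambda r, c, a: c / (r + a),    [1.0, 1.0]),
    ("1/sqrt(r)", lambda r, c:    c / np.sqrt(r), [1.0]),
    ("1/r^2",     lambda r, c:    c / r**2,       [1.0]),
]
for name, f, p0 in models:
    popt, _ = curve_fit(f, r, A, p0=p0)
    chi2 = np.sum((A - f(r, *popt)) ** 2)   # unweighted residual sum of squares
    print(f"{name:10s} chi2 = {chi2:.2e}  params = {np.round(popt, 2)}")
```

With error bars on the points, the residuals would be weighted by the measurement variances, which is what the chi-square fits quoted in the text do.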
However, this ion field fails to diminish when the electron beam continues to move toward the top of the EO crystal, as indicated by the data trace obtained at the 0.33 mm beam position in Fig. 6(b). In the present EO design, a signal with negative polarity also corresponds to an intensity drop. Therefore, when the electron beam strikes the optical beam path, substantial temporal opacity may be created, enhancing the apparent strength of the ion field. Nonetheless, the ion field disappears and the electron beam field prevails when the electron beam clears the top of the EO crystal, as shown in Fig. 4(a). Since the optical modulation is of electro-optical origin, with a response faster than the electron pulse duration, the measured temporal duration is limited mostly by the bandwidth of the measurement system and the modal dispersion of the 40-m long outgoing MM fiber. The latter effect was independently measured to have negligible temporal broadening on the time scale of interest in this experiment. Figure 7 shows the shortest signal pulse width of 70 ps recorded on a 7-GHz oscilloscope using a 12-GHz optical receiver. The instrument response of the same measurement system is displayed as a dashed line. It is worth pointing out that the instrument response was obtained with a mode-locked IR laser pulse of \\(\\sim 15\\) ps duration. The risetime and the pulse width of both the instrument response and the EO signal traces are comparable, suggesting that the electron bunch can be inferred to be on the order of \\(\\sim 15\\) ps (a rough numerical consistency check is sketched after the figure captions below).

## 4 Conclusions

The effectiveness of a Pockels cell field sensor has been demonstrated for noninvasive measurement of the bunch length of an ultrashort relativistic electron beam. The signal strength is shown to increase linearly with the charge and decrease inversely with the distance between the laser and the electron beam. Currently the temporal resolution is limited primarily by the detection technique. Although these results are encouraging, at present the EO modulation is at best a few percent of the unmodulated dc light level. Methods to improve the strength of the EO signal and the signal-to-noise ratio are needed to make it more practical. Nevertheless, the EO sensor is clearly an attractive candidate to explore ultrashort particle bunch durations down to the sub-picosecond regime. Measurement of the EO signal using a 2 ps (or a 0.5 ps) resolution-limited streak camera is currently underway. Using an upgraded pump-probe EO detection scheme and state-of-the-art ultrafast optical pulse measurement techniques such as frequency-resolved optical gating or spectral phase interferometry for direct electric-field reconstruction, relativistic femtosecond electron bunches may be measured. Furthermore, one can in principle construct a 2-dimensional ultrafast detector array based on the EO technique to measure the location, spatial, and temporal profile of the charged particle beam. Because the EO modulated signal polarity depends on the induced field direction, the technique is effective for both positive and negative charged particles, that is electrons as well as protons and ions.

## 5 Acknowledgments

We wish to acknowledge the support and encouragement of Xiejie Wang, Ilan Ben-Zvi, Vitaly Yakimenko, Howard Gordon, Mike Murtagh and Tom Kirk. The efforts of Victor Usack were essential to our progress. This manuscript has been written under contract DE-AC02-98CH10886 with the U.S. Department of Energy.

## References

* [1] X.Z. Qui, X.J. Wang, K. Batchelor, and I. Ben-Zvi, Proceedings of the PAC'95, p. 2530, 1995
* [2] J. A. Sheridan, D. M. Bloom, and P. M. Solomon, Opt. Lett. **20**, 584 (1995)
* [3] D. R. Dykaar, R. F. Kopf, U. D. Keil, E. J. Laskowski, and G. J. Zydzik, Appl. Phys. Lett. **62**, 1733 (1993)
* [4] J. C. Santos, M. C. Taplamacioglu, and K. Hidaka, Rev. of Sci. Instrum. **70**, 3271 (1999)
* [5] A. H. Rose, S. M. Etzel, and K. Rochford, J. of Light. Tech. **17**, 1042 (1999)
* [6] Y. Murooka and T. Nakano, Rev. of Sci. Instrum. **63**, 5582 (1992)
* [7] W. J. Koshak and R. J. Solakiewicz, App. Opt. **38**, 4623 (1999)
* [8] Z. Jiang and X. C. Zhang, Optics Express **5**, 243 (1999)
* [9] M. A. Brubaker and C. P. Yakymyshyn, App. Opt. **39**, 1164 (2000)
* [10] M. Geitz, G. Schmidt, P. Schmuser, and G. V. Walter, NIM **A445**, 343 (2000)
* [11] D. Oepts, G. M. H. Knippels, X. Yan, A. M. MacLeod, W. A. Gillespie, and A. F. G. Van der Meer, Proceed. of the 21st Intern. FEL Conference, **II-57**, DESY, Hamburg, Germany, August 23-26, 1999
* [12] Y. K. Semertzidis _et al._, Proc. PAC'99, 29 Mar - 2 Apr 1999, e-Print Archive: hep-ex/0012014; Y. K. Semertzidis, V. Castillo, L. Kowalski, D. E. Kraus, R. Larsen, D. M. Lazarus, B. Magurno, D. Nikas, C. Ozben, T. Srinivasan-Rao, and T. Tsang, Nucl. Instrum. Meth. **A452**, 396-400 (2000), e-Print Archive: hep-ex/0012024
* [13] G. M. H. Knippels, X. Yan, A. M. MacLeod, W. A. Gillespie, M. Yasumoto, D. Oepts, and A. F. G. Van der Meer, Phys. Rev. Lett. **83**, 1578 (1999)
* [14] M. J. Fitch, N. Barov, J. P. Carneiro, P. L. Colestock, H. T. Edwards, K. P. Koepke, A. C. Melissinos, and W. H. Harting, FNAL report no. FERMILAB-TM-2096, Nov. 1999
* [15] A. Yariv, _Quantum Electronics_ 3rd Ed., John Wiley & Sons, New York, 1989, p. 315
* [16] J. D. Jackson, _Classical Electrodynamics_ 2nd Ed., John Wiley & Sons, New York, 1975, p. 555
* [17] E. W. Taylor, J. Opt. Commun. **9**, 64 (1988)

## 6 Figure Captions

Figure 1: Experimental setup. (a) PM optical fiber input on the right hand side, followed by the EO crystal and its holder, \\(\\lambda/4\\) waveplate, analyzer positioned at \\(45^{\\rm o}\\), crossed to the input polarization, and finally the signal collection multimode fiber, all mounted on an aluminum base plate anchored to a standard vacuum flange. (b) Expanded view of the ceramic holder for the EO crystal. Fluorescent material is placed at various locations of the ceramic holder for on-line guiding of the electron beam to the EO crystal.

Figure 2: (a) Schematic drawing of the EO crystal. A 6.35 mm diameter clearance hole on the ceramic holder is also shown. The electron beam propagates along the x direction, into the paper. Several representative electron beam profiles and their locations with respect to the laser beam position at z=0 are overlaid to show the maneuvering of the electron beam along the z-axis, passing above, through, and below the EO crystal. Three electron beam positions that were blocked by the EO crystal did not show up clearly on the flag but are illustrated in the figure by their positions relative to the laser beam path. Their approximate positions were determined by the pitching current of the dipole magnet. (b) Schematic cross-sectional view of the EO crystal (1.0 x 6.5 x 2.2 mm). The electron beam propagates along the ordinary x-axis, and the transient electric field is induced along the extraordinary z-axis of the EO crystal. The laser propagates along the negative y-axis with a collimated beam diameter of 0.4 mm; its input polarization is oriented either at \\(+45^{\\rm o}\\) or \\(-45^{\\rm o}\\) to the extraordinary axis.
Figure 3: Accelerator Test Facility (ATF) beam lines. Also indicated are the locations of the EO experiment setup at beam line #3 and the trigger signal extracted from a stripline detector at the linac section. The electron beam travels from right to left.

Figure 4: (a) EO modulated signal when the electron beam passed unobstructed below (solid line) and above (dashed line) the EO crystal so that the induced electron beam field is reversed. (b) EO modulated signal when the input laser polarization was flipped from \\(+45^{\\rm o}\\) to \\(-45^{\\rm o}\\). Note that the time scales of the two plots are different because a 1-GHz and a 12-GHz photoreceiver were used in (a) and (b), respectively. A pictorial representation of the optical launching condition is illustrated in the inset of each figure.

Figure 5: Increase of the EO modulated signal with electron beam charge. The electron beam position is locked at -1.17 mm away from the laser beam path. The inset shows the dependence of the EO signal on charge; the dashed line is a linear fit to the data.

Figure 6: (a) Increase of the EO modulated signal with the electron beam approaching the EO crystal from the negative z-axis. The inset shows the signal plotted against the distance of the electron beam away from the laser beam path. A \\(\\frac{1}{r+a}\\) and a \\(\\frac{1}{\\sqrt{r}}\\) dependence are also fitted to the data. (b) Same as (a) but when the electron beam was moved to irradiate the EO crystal and traverse the laser beam path. See Figure 2 for detailed electron beam positions. Note that the signal trace of -0.64 mm is plotted on both figures for comparison, and the data trace of 0.33 mm was divided by a factor of 4 to fit the vertical scale.

Figure 7: Solid line - the EO signal detected by a 12 GHz photoreceiver on a 7 GHz digital oscilloscope. Dashed line - instrument response of the measurement system using a \\(\\sim 15\\) ps IR pulse.
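As a hedged consistency check of the bandwidth-limited width in Fig. 7, one can model the bunch, the 12-GHz receiver and the 7-GHz oscilloscope as Gaussians, using the rule-of-thumb risetime \\(\\approx 0.35/\\)bandwidth (an assumption; the real responses are not Gaussian), and convolve them:

```python
import numpy as np

def gauss(t, fwhm):
    s = fwhm / 2.3548                          # FWHM -> sigma for a Gaussian
    return np.exp(-0.5 * (t / s) ** 2)

t = np.linspace(-300e-12, 300e-12, 4001)
dt = t[1] - t[0]

bunch = gauss(t, 15e-12)                       # ~15 ps bunch, as inferred in the text
rx    = gauss(t, 0.35 / 12e9)                  # 12 GHz receiver: ~29 ps (rule of thumb)
scope = gauss(t, 0.35 / 7e9)                   # 7 GHz scope: ~50 ps (rule of thumb)

sig = np.convolve(np.convolve(bunch, rx, 'same'), scope, 'same')
fwhm = (sig >= 0.5 * sig.max()).sum() * dt     # count samples above half maximum
print(f"observed FWHM ~ {fwhm * 1e12:.0f} ps") # ~60 ps, near the 70 ps measured
```

Gaussian widths add in quadrature, so the detection chain alone already accounts for most of the observed 70 ps, consistent with the bunch-length inference above.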
We have measured the temporal duration of 45 MeV picosecond electron beam bunches using a noninvasive electro-optical (EO) technique. The amplitude of the EO modulation was found to increase linearly with electron beam charge and decrease inversely with distance from the electron beam. The risetime of the temporal signal was limited by our detection system to \\(\\sim 70\\mbox{ ps}\\). The EO signal due to ionization caused by the electrons traversing the EO crystal was also observed. It has a distinctively long decay time constant and a signal polarity opposite to that due to the field induced by the electron beam. The electro-optical technique may be ideal for the measurement of the bunch length of femtosecond, relativistic, high-energy charged particle beams. pacs: 07.77.Ka, 33.55.Fi, 78.20.Jq
# Electro-optical measurements of ultrashort 45 MeV electron beam bunch D. Nikas\\({}^{*}\\), V. Castillo, L. Kowalski\\({}^{a}\\), R. Larsen, D. M. Lazarus, C. Ozben, Y. K. Semertzidis, T. Tsang, and T. Srinivasan-Rao Brookhaven National Laboratory, Upton, NY 11973, USA \\({}^{a}\\)Montclair State University, Upper Montclair, NJ 07043, USA E-mail address: [email protected] Tel: (631) 344-4717; Fax: (631) 344-5568

## 1 Introduction

Since the first EO observation[1] of a charged particle beam, we have constructed an optical probe based on the electro-optical Pockels effect. That is, when an electric field is applied to a birefringent crystal, an optical phase shift is introduced between orthogonal components. To probe it, a laser beam polarized at 45\\({}^{\\rm o}\\) to the z-axis of the EO crystal is propagated along the y-axis of the crystal. This phase retardation is converted to an intensity modulation by a \\(\\frac{\\lambda}{4}\\) plate followed by an analyzer. The intensity of light \\(I(t)\\) exiting the analyzer can be described by[2] \\[I(t)=I_{o}[\\eta~{}+~{}\\sin^{2}(\\Gamma_{o}~{}+~{}\\Gamma_{b}~{}+~{}\\Gamma(t))], \\tag{1}\\] where \\(I_{o}\\) is the input light intensity, \\(\\eta\\) accounts for the imperfections of the crystal, polarizer and other optics, \\(\\Gamma_{o}\\) is the crystal residual birefringence, \\(\\Gamma_{b}\\) is the optical bias of the system which is set at \\(\\frac{\\pi}{4}\\), and \\(\\Gamma(t)\\) is the phase induced by the electric field on the crystal. For a weak modulation, \\(\\Gamma(t)\\ll 1\\), the EO component can be written as \\[[\\frac{I(t)}{I_{o}}]_{\\rm EO}~{}\\sim~{}\\Gamma(t)~{}=~{}\\frac{1}{2}({n_{e}}^{3}{r_{33}}-{n_{o}}^{3}{r_{13}})\\frac{2\\pi LE_{z}(t)}{\\lambda} \\tag{2}\\] The optical phase shift \\(\\Gamma(t)\\) is linearly proportional to the time-dependent field \\(E_{z}(t)\\) induced by the passage of the electron beam, with \\(L=\\Delta t\\times\\frac{c}{n}\\simeq\\frac{r}{\\gamma n}\\) the distance light travels inside the crystal in the presence of \\(E_{z}(t)\\), \\(n_{e}\\) and \\(n_{o}\\) the extraordinary and ordinary indices of refraction, and \\(r_{33}\\), \\(r_{13}\\) the EO coefficients. A relativistic beam produces an anisotropically directed radial field nearly orthogonal to the beam direction and along the z-axis of the EO crystal with strength[3] \\[E_{z}(t)=\\frac{1}{4\\pi\\epsilon_{o}}\\frac{\\gamma\\ N_{e}\\ q\\ T(t)}{\\epsilon\\ r^{2}} \\tag{3}\\] where \\(\\gamma\\) is the Lorentz factor, \\(N_{e}\\) the number of electrons in the beam, \\(q\\) the electron charge, \\(T(t)\\) the temporal charge distribution, \\(\\epsilon_{o}\\) the permittivity of free space, \\(\\epsilon\\) the dielectric constant of the EO crystal in the z-axis direction, and \\(r\\) the radial distance of the electron beam from the axis of the optical beam. Finally \\[[\\frac{I(t)}{I_{o}}]_{\\rm EO}\\ \\simeq\\ ({n_{e}}^{3}{r_{33}}-{n_{o}}^{3}{r_{13}}) \\frac{N_{e}\\ q\\ T(t)}{4\\ \\lambda\\ n\\ \\epsilon_{o}\\ \\epsilon\\ r} \\tag{4}\\]

**2. Experiment** A vacuum compatible EO modulator setup was constructed using discrete optical components. A Nd:YAG laser, emitting 250 mW of CW power at 1.3 \\(\\mu\\)m, was coupled to a vacuum sealed polarization maintaining fiber collimator and the output was rotated \\(+45^{\\rm o}\\) to the azimuthal.
The collimated \\(0.4\\ {\\rm mm}\\) diameter light beam, with a polarization purity of \\(\\sim 10^{-2}\\), was directed to the LiNbO\\({}_{3}\\) crystal mounted on a ceramic holder that has a clearance hole of 6.35 mm for the electron beam. The size of the crystal was 6.5(L) x 2.2(H) x 1(W) mm; the optical z-axis was aligned azimuthally and the x-axis was parallel to the propagation direction of the e\\({}^{-}\\) beam. Fluorescent material was placed on the ceramic for guiding the e\\({}^{-}\\) beam through the EO crystal. A CCD camera and a \\(45^{\\rm o}\\) pop-up flag were also used for electron beam measurements. The electron beam contained up to 0.6 nC charge with a beam diameter of \\(\\sim 0.5\\) mm in 10 ps bunch length at a repetition rate of 1.5 Hz. A vacuum sealed multimode fiber collimator collected the light output from the analyzer and was coupled separately to 1 and 12 GHz photodiodes, which were connected to digitizing oscilloscopes with bandwidths of 1 and 7 GHz.

**3. Results** The electro-optical origin of the electron-beam-induced signal was confirmed: (1) the signal vanished in the absence of the electron or laser beam; (2) the signal polarity changed sign when the direction of the electrical field was reversed (by placing the e\\({}^{-}\\) beam above and below the crystal), or when the input laser polarization was rotated by \\(90^{\\rm o}\\); see insets (a) and (b) of Fig. 1 (Left), respectively. Fig. 1 (Left) shows the measured pulse with a risetime of \\(\\sim 70\\) ps; the dashed line is the instrument response to a \\(\\sim 15\\) ps laser pulse, which shows that our measurement was bandwidth limited by the electronics. The EO signal dependence on electron beam charge was investigated. The charge was measured using a Faraday cup and a stripline. The e\\({}^{-}\\) beam was clearly passing below the EO crystal unobstructed. A linear \\(\\chi^{2}\\) minimization fit to the signal amplitude for 5 charge values is shown in the inset of Fig. 1 (Right). The EO signal dependence on electron beam position was also investigated. Fig. 1 (Right) displays 5 signal amplitudes, when the beam was steered vertically toward, but not traversing, the crystal, versus their distance from the center of the laser beam path. A \\(\\chi^{2}\\) minimization fit of the data favors a \\(\\frac{1}{r+a}\\) dependence, where \\(a=1.75\\,mm\\). As the electron beam approached the optical beam path, a distinctive positive signal with a long \\(\\sim 100\\) ns decay time superimposed on the negative EO modulation was observed, which became negative when the beam traversed the optical beam path. It is the electron beam that ionizes the LiNbO\\({}_{3}\\) crystal, creating electron-hole pairs. Since the mobility of ions is small compared to the electrons, an ion field remains and produces an EO signal opposite to that due to the electron beam field. Its decay time will be dictated by the electron-hole recombination time of the crystal[4].

## 4 Conclusions

The effectiveness of a Pockels cell field sensor has been demonstrated for nondestructive measurement of an ultrashort beam bunch. Using an upgraded pump-probe EO detection scheme and state-of-the-art ultrafast optical pulse measurement techniques such as frequency-resolved optical gating or spectral phase interferometry for direct electric-field reconstruction, femtosecond electron bunches may be studied. Furthermore, one can in principle construct a 2-dimensional EO detector array to measure the spatial and temporal profile of the charged particle beam bunch.

## References

* [1] Y. K. Semertzidis _et al._, Proc. PAC'99, 490; Y. K. Semertzidis _et al._, NIM **A452(3)**, 396 (2000)
* [2] A. Yariv, _Quantum Electronics_ 3rd Ed., John Wiley & Sons, New York, 1989, p. 315
* [3] J. D. Jackson, _Classical Electrodynamics_ 2nd Ed., J. Wiley & Sons, NY, 1975, p. 555
* [4] E. W. Taylor, J. Opt. Commun. **9**, 64 (1988)

Figure 1: Left: EO signal (solid), instrument response (dashed) and polarity change (insets (a) and (b)); Right: EO signal amplitude vs distance and charge (inset).
We have made an observation of 45 MeV electron beam bunches using the nondestructive electro-optical (EO) technique. The amplitude of the EO modulation was found to increase linearly with electron beam charge and decrease inversely with the optical beam path distance from the electron beam. The risetime of the signal was bandwidth limited by our detection system to \\(\\sim\\!70\\) ps. An EO signal due to ionization caused by the electrons traversing the EO crystal was also observed. The EO technique may be ideal for measuring, with femtosecond resolution, the bunch structure of relativistic charged particle beams.
# Radial Oscillations of Neutron Stars in Strong Magnetic Fields V.K. Gupta, Vinita Tuli, S. Singh, J.D. Anand and Ashok Goyal _Department of Physics and Astrophysics, University of Delhi, Delhi-110 007, India. InterUniversity Centre for Astronomy and Astrophysics, Ganeshkhind, Pune 411007, India._ E-mail : [email protected] : [email protected] : [email protected] : [email protected]

## 1 Introduction

It is well known that intense magnetic fields(B\\(\\sim\\)10\\({}^{12-13}\\)G) exist on the surface of many neutron stars. Objects with even higher magnetic fields have been surmised and detected recently. Recent observational studies and several independent arguments link the class of soft \\(\\gamma\\)-ray repeaters and perhaps certain anomalous X-ray pulsars with neutron stars having ultra strong magnetic fields, the so called magnetars. Kouveliotou et al (1998) found a soft \\(\\gamma\\)-ray repeater SGR 1806-20 with a period of 7.47 seconds and a spin-down rate of 2.6\\(\\times\\)10\\({}^{-3}\\)\\(syr^{-1}\\), from which they estimated the pulsar age to be about 1500 years and the field strength to be \\(\\sim\\) 8\\(\\times\\)10\\({}^{14}\\)G. Since the magnetic field in the core, according to some models, could be 10\\({}^{3}-10^{5}\\) times higher than its value on the surface, it is possible that ultra strong magnetic fields of order 10\\({}^{18}-10^{19}\\)G or even 10\\({}^{20}\\)G exist in the core of certain neutron stars. The virial theorem, it is usually argued, gives an upper bound of about 10\\({}^{18}\\)G for the field inside a neutron star (Lai and Shapiro 1991). However, it has also been claimed that this cannot be taken as the last word on the subject, since at the super high density inside the star core, general relativistic corrections to the virial theorem could increase the upper limit on the maximum allowed magnetic fields substantially (Hong 1998). According to Kouveliotou, a statistical analysis of the population of soft \\(\\gamma\\)-ray repeaters indicates that, instead of being just isolated examples, as many as 10% of neutron stars could be magnetars. It is therefore of interest to study the equation of state of nuclear matter and various properties of neutron stars under such magnetic fields. In this paper we study the radial oscillations of neutron stars in the presence of super strong magnetic fields. Studies of radial oscillations are of interest since Cameron (1965) suggested, more than three decades ago, that vibrations of neutron stars could excite motions that can have interesting astrophysical applications. X-ray and \\(\\gamma\\)-ray burst phenomena are clearly explosive in nature. These explosive events probably perturb the associated neutron star, and the resulting dynamical behaviour may eventually be deduced from such observations. Observations of quasi-periodic pulses of pulsars have also been associated with oscillations of underlying neutron stars (Chandrasekhar 1964 a, b, Cutler et al 1990). The EOS is central to the calculation of most neutron star properties as it determines the mass range, the redshift, as well as the mass-radius relationship for these stars. Since neutron stars span a very wide range of densities, no one EOS is adequate to describe the properties of neutron stars.
In the low density regions from the neutron drip density (\\(\\sim\\)4\\(\\times\\)10\\({}^{11}\\) gm/cc) and up to \\(\\rho_{n}\\)\\(\\simeq\\)3.0\\(\\times\\)10\\({}^{14}\\)gm/cc, the density at which the nuclei just begin to dissolve and merge together, the nuclear matter EOS is adequately described by the BPS model (Baym, Pethick and Sutherland, 1971), which is based on the semi-empirical nuclear mass formula. We adopt this BPS EOS and its magnetised version as given by Lai and Shapiro (1991) in this density range. In the high density range above the neutron drip density \\(\\rho_{n}\\), the physical properties of matter are still uncertain. Many models for the description of nuclear matter at such high densities have been proposed over the years. One of the most studied models is the relativistic nuclear mean field theory, in which the strong interactions among the various particles involved are mediated by a scalar field \\(\\sigma\\), an isoscalar-vector field \\(\\omega\\) and an isovector-vector field \\(\\rho\\). Along with scalar self-interaction terms it can reproduce the values of experimentally known quantities relevant to nuclear matter, viz. the binding energy per nucleon, the nuclear density at saturation, the asymmetry energy, the effective mass and the bulk modulus, and provides a good description of nuclear matter for densities up to a few times the saturation density \\(\\rho_{c}\\). In our study, we have used this nuclear mean field theory and its modification in the presence of a magnetic field. In section 2 a brief discussion of the EOS is given at zero temperature. In section 3 we present the formalism for pulsations of the neutron star models computed here as a result of integration of the relativistic equations. Section 4 deals with results and discussions.

## 2 The Equation of State (EOS) for Nuclear Matter

We shall describe nuclear matter at high densities by the relativistic nuclear mean field model, including the \\(\\rho\\)-contribution, as extended to include strong interactions. For densities less than the neutron drip density we adopt the BPS model in the presence of a magnetic field as developed by Lai and Shapiro.

### 2.1 The Nuclear Mean Field EOS at High Densities

We consider charge-neutral nuclear matter consisting of neutrons, protons and electrons in \\(\\beta\\)-equilibrium in the presence of a magnetic field and at zero temperature (\\(T=0\\)). The expressions for the various quantities - the number density, the scalar number density, the energy density and pressure, etc. - in the \\(B=0\\) case are well known. In the presence of a magnetic field these expressions for charged particles are modified in a straightforward way, viz \\[\\sum_{spin}\\int d^{3}p=eB\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu 0})\\int dp_{z} \\tag{1}\\] in the integrals appearing in the expressions for the various quantities listed above.
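The substitution (1) is easy to exercise in isolation: for electrons at \\(T=0\\) it turns the Fermi-sphere integral into a finite sum over occupied Landau levels, giving the number density of Eq.(22) below. A sketch in natural units; the chemical potential and field value are illustrative only, and the critical field \\(B_{c}\\) is the standard QED value:

```python
import numpy as np

m_e = 0.511                          # electron mass [MeV]
B_c = 4.414e13                       # QED critical field [G], where eB = m_e^2

def electron_density(mu, B_gauss):
    """n_e from the Landau-level sum, Eq. (22), in MeV^3 (natural units)."""
    eB = (B_gauss / B_c) * m_e**2    # eB in MeV^2
    n, nu = 0.0, 0
    while mu**2 - m_e**2 - 2 * nu * eB > 0:
        pf = np.sqrt(mu**2 - m_e**2 - 2 * nu * eB)   # Eq. (14)
        n += (2 - (nu == 0)) * pf    # nu = 0 level is spin non-degenerate
        nu += 1
    return eB * n / (2 * np.pi**2)

# Example: mu_e = 10 MeV in a 10^17 G field -- only the nu = 0 level is filled.
# Multiply by (1/197.33)**3 to convert MeV^3 to fm^-3.
print(electron_density(10.0, 1e17))
```

At such fields the sum truncates after one or two levels, which is what makes the magnetized EOS qualitatively different from the field-free one.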
The total pressure P and the mass energy density (\\(\\rho\\)) of the system are given by \\[P=\\sum_{i=p,n,e}P_{i}+\\frac{1}{2}(\\frac{g_{\\omega}}{m_{\\omega}})^{2}<\\omega_{0}>^{2}-\\frac{1}{2}(\\frac{g_{\\sigma}}{m_{\\sigma}})^{-2}(g_{\\sigma}\\sigma)^{2}-\\frac{1}{3}bm_{n}(g_{\\sigma}\\sigma)^{3}-\\frac{1}{4}c(g_{\\sigma}\\sigma)^{4}+\\frac{1}{2}(\\frac{g_{\\rho}}{m_{\\rho}})^{2}<\\rho_{0}>^{2} \\tag{2}\\] \\[\\rho=\\sum_{i=p,n,e}\\rho_{i}+\\frac{1}{2}(\\frac{g_{\\omega}}{m_{\\omega}})^{2}<\\omega_{0}>^{2}+\\frac{1}{2}(\\frac{g_{\\sigma}}{m_{\\sigma}})^{-2}(g_{\\sigma}\\sigma)^{2}+\\frac{1}{3}bm_{n}(g_{\\sigma}\\sigma)^{3}+\\frac{1}{4}c(g_{\\sigma}\\sigma)^{4}+\\frac{1}{2}(\\frac{g_{\\rho}}{m_{\\rho}})^{2}<\\rho_{0}>^{2} \\tag{3}\\] In the above \\[P_{n}=\\frac{1}{8\\pi^{2}}[\\frac{1}{3}\\mu_{n}^{*}p_{fn}^{*}(2p_{fn}^{*2}-3m_{n}^{*2})+m_{n}^{*4}ln(\\frac{\\mu_{n}^{*}+p_{fn}^{*}}{m_{n}^{*}})] \\tag{4}\\] \\[P_{p}=\\frac{eB}{4\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu 0})[\\mu_{p}^{*}p_{fp}^{*}-m_{p,\\nu}^{*2}ln(\\frac{\\mu_{p}^{*}+p_{fp}^{*}}{m_{p,\\nu}^{*}})] \\tag{5}\\] \\[P_{e}=\\frac{eB}{4\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu 0})[\\mu_{e}p_{fe}-m_{e,\\nu}^{2}ln(\\frac{\\mu_{e}+p_{fe}}{m_{e,\\nu}})] \\tag{6}\\] \\[\\rho_{n}=\\frac{1}{8\\pi^{2}}[\\mu_{n}^{*}p_{fn}^{*}(2\\mu_{n}^{*2}-m_{n}^{*2})-m_{n}^{*4}ln(\\frac{\\mu_{n}^{*}+p_{fn}^{*}}{m_{n}^{*}})] \\tag{7}\\] \\[\\rho_{p}=\\frac{eB}{4\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu 0})(\\mu_{p}^{*}p_{fp}^{*}+m_{p,\\nu}^{*2}ln(\\frac{\\mu_{p}^{*}+p_{fp}^{*}}{m_{p,\\nu}^{*}})) \\tag{8}\\] \\[\\rho_{e}=\\frac{eB}{4\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu 0})(\\mu_{e}p_{fe}+m_{e,\\nu}^{2}ln(\\frac{\\mu_{e}+p_{fe}}{m_{e,\\nu}})) \\tag{9}\\] \\[m_{p,\\nu}^{*2}=m_{p}^{*2}+2\\nu eB \\tag{10}\\] \\[m_{e,\\nu}^{2}=m_{e}^{2}+2\\nu eB \\tag{11}\\] \\[p_{fp}^{*2}=\\mu_{p}^{*2}-m_{p,\\nu}^{*2} \\tag{12}\\] \\[p_{fn}^{*2}=\\mu_{n}^{*2}-m_{n}^{*2} \\tag{13}\\] \\[p_{fe}^{2}=\\mu_{e}^{2}-m_{e}^{2}-2\\nu eB \\tag{14}\\] \\[m_{p}^{*}-m_{p}=m_{n}^{*}-m_{n}=-(\\frac{g_{\\sigma}}{m_{\\sigma}})^{2}n_{s} \\tag{15}\\] \\[n_{s}=n_{p}^{s}+n_{n}^{s} \\tag{16}\\] \\[n_{n}^{s}=\\frac{m_{n}^{*}}{2\\pi^{2}}(\\mu_{n}^{*}p_{fn}^{*}-m_{n}^{*2}ln(\\frac{\\mu_{n}^{*}+p_{fn}^{*}}{m_{n}^{*}})) \\tag{17}\\] \\[n_{p}^{s}=\\frac{eBm_{p}^{*}}{2\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu,0})ln(\\frac{\\mu_{p}^{*}+p_{fp}^{*}}{m_{p,\\nu}^{*}}) \\tag{18}\\] \\[n_{B}=n_{p}+n_{n} \\tag{19}\\] \\[n_{n}=\\frac{p_{fn}^{*3}}{3\\pi^{2}} \\tag{20}\\] \\[n_{p}=\\frac{eB}{2\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu,0})p_{fp}^{*} \\tag{21}\\] \\[n_{e}=\\frac{eB}{2\\pi^{2}}\\sum_{\\nu=0}^{\\nu_{max}}(2-\\delta_{\\nu,0})p_{fe} \\tag{22}\\] \\[n_{p}=n_{e} \\tag{23}\\] \\[<\\rho_{0}>=\\frac{1}{2}(n_{p}-n_{n}) \\tag{24}\\] \\[<\\omega_{0}>=n_{B} \\tag{25}\\] \\[\\mu_{i}^{*}=\\mu_{i}-(\\frac{g_{\\omega}}{m_{\\omega}})^{2}<\\omega_{0}>-I_{3i}(\\frac{g_{\\rho}}{m_{\\rho}})^{2}<\\rho_{0}> \\tag{26}\\] for i=n,p \\[\\mu_{n}=\\mu_{p}+\\mu_{e} \\tag{27}\\] For a given value of \\(\\mu_{e}\\) and B we can find self-consistently the values of \\(m_{p}^{*}\\), \\(\\mu_{p}^{*}\\) and \\(\\mu_{n}^{*}\\), and so the pressure and mass energy can be computed. This helps us to compute the equation of state in the mean field approximation as a function of \\(\\mu_{e}\\) or \\(n_{B}\\).
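The self-consistency just described can be illustrated in a stripped-down form: for pure neutron matter at fixed baryon density, Eq.(15) with the scalar density (17) is a one-dimensional fixed-point problem for \\(m_{n}^{*}\\). The sketch below uses the \\((g_{\\sigma}/m_{\\sigma})\\) value quoted in Section 4 and, as simplifying assumptions, drops the protons, electrons, magnetic field and the scalar self-couplings \\(b\\), \\(c\\):

```python
import numpy as np

hbarc = 197.33                        # MeV fm
m_n = 939.0                           # neutron mass [MeV]
g_sigma_over_m = 0.01525              # (g_sigma/m_sigma) in MeV^-1, quoted in Sec. 4

def scalar_density(m_star, p_f):
    """Neutron scalar density n_s, Eq. (17), with mu* = sqrt(p_f^2 + m*^2) [MeV^3]."""
    mu = np.sqrt(p_f**2 + m_star**2)
    return m_star / (2 * np.pi**2) * (mu * p_f - m_star**2 * np.log((mu + p_f) / m_star))

def effective_mass(n_b_fm3, tol=1e-8):
    """Damped fixed-point iteration of Eq. (15) for pure neutron matter
    (a simplification: the full problem also couples p, e and the B-field)."""
    p_f = (3 * np.pi**2 * n_b_fm3) ** (1 / 3) * hbarc   # Fermi momentum [MeV]
    m_star = m_n
    for _ in range(500):
        new = m_n - g_sigma_over_m**2 * scalar_density(m_star, p_f)
        if abs(new - m_star) < tol:
            break
        m_star = 0.5 * (m_star + new)                   # damping for stability
    return m_star

print(effective_mass(0.16))           # m* [MeV] at nuclear saturation density
```

Omitting the \\(b\\) and \\(c\\) terms changes the numerical value of \\(m^{*}\\), so this is a structural illustration of the iteration, not a reproduction of the paper's EOS.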
### 2.2 The Magnetic BPS Model

The total pressure of the hadronic matter below the neutron drip is given by

\[P=P_{e}(n_{e})+P_{L}=P_{e}(n_{e})+\frac{1}{3}\varepsilon_{L}(Z,n_{e}) \tag{28}\]

where \(\varepsilon_{L}\) is the bcc Coulomb lattice energy and \(P_{e}\), the pressure of the electrons in the presence of the magnetic field, is given by

\[P_{e} = \frac{eB}{4\pi^{2}}\sum_{\nu=0}^{\nu_{max}}(2-\delta_{\nu 0})\left[\mu_{e}\sqrt{\mu_{e}^{2}-m_{e}^{2}-2\nu eB}-(m_{e}^{2}+2\nu eB)\ln\left(\frac{\mu_{e}+\sqrt{\mu_{e}^{2}-m_{e}^{2}-2\nu eB}}{\sqrt{m_{e}^{2}+2\nu eB}}\right)\right] \tag{29}\]

\[\varepsilon_{L}=-1.444Z^{\frac{2}{3}}e^{2}n_{e}^{\frac{4}{3}} \tag{30}\]

\[n_{e}=\frac{eB}{2\pi^{2}}\sum_{\nu=0}^{\nu_{max}}(2-\delta_{\nu 0})\left[\mu_{e}^{2}-m_{e}^{2}-2\nu eB\right]^{\frac{1}{2}} \tag{31}\]

Consider matter condensing into a perfect crystal lattice with a single nuclear species \((A,Z)\) at the lattice sites. The energy density is

\[\varepsilon=\frac{n_{e}}{Z}W_{N}(A,Z)+\varepsilon_{e}^{\prime}(n_{e})+\varepsilon_{L}(Z,n_{e}) \tag{32}\]

where \(W_{N}\) is the mass-energy of the nucleus (including the rest mass of the nucleons and \(Z\) electrons) and \(\varepsilon_{e}^{\prime}\) is the free electron energy including the rest mass of the electrons. Following Lai and Shapiro we use for \(W_{N}\) the experimental values for laboratory nuclei as tabulated by Wapstra and Bos (1976, 1977). The elements employed in this paper are listed in the table, along with their mass-energies \(W_{N}(A,Z)\). At a given pressure \(P\), the equilibrium values of \(A\) and \(Z\) are determined by minimising the Gibbs free energy per nucleon,

\[g=\frac{\varepsilon+P}{n}=\frac{W_{N}(A,Z)}{A}+\frac{Z}{A}(\mu_{e}-m_{e}c^{2})+\frac{4\varepsilon_{L}}{3An_{e}} \tag{33}\]

The neutron drip point is determined by the condition

\[g_{min}=m_{n}c^{2} \tag{34}\]

Knowing \(A\) and \(Z\), the energy can be determined from Eq. (32).

## 3 Radial Pulsations of a Non-Rotating Neutron Star

The equations governing infinitesimal radial pulsations of a non-rotating star in general relativity were given by Chandrasekhar (1964). The structure of the star in hydrostatic equilibrium is described by the Tolman-Oppenheimer-Volkoff equations

\[\frac{dp}{dr}=\frac{-G(p+\rho c^{2})(m+\frac{4\pi r^{3}p}{c^{2}})}{c^{2}r^{2}(1-\frac{2Gm}{c^{2}r})} \tag{35}\]

\[\frac{dm}{dr}=4\pi r^{2}\rho \tag{36}\]

\[\frac{d\nu}{dr}=\frac{G(m+\frac{4\pi r^{3}p}{c^{2}})}{c^{2}r^{2}(1-\frac{2Gm}{c^{2}r})} \tag{37}\]

Given an equation of state \(p(\rho)\), equations (35)-(37) can be integrated numerically for a given central density to obtain the radius \(R\) and gravitational mass \(M=M(R)\) of the star.
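As a concrete illustration of this integration, here is a minimal sketch; the polytropic stand-in EOS \(p=K\rho^{5/3}\) (with the standard nonrelativistic neutron-gas constant in CGS) and the central density are illustrative assumptions in place of the tabulated magnetized EOS constructed above.

```python
import numpy as np
from scipy.integrate import solve_ivp

G, c = 6.674e-8, 2.998e10  # CGS units

def rho_of_p(p, K=5.38e9, gamma=5.0 / 3.0):
    # Polytropic stand-in for the tabulated P(n_B), rho(n_B) constructed above
    return (max(p, 0.0) / K) ** (1.0 / gamma)

def tov_rhs(r, y):
    """Right-hand sides of Eqs. (35)-(37); y = (p, m, nu)."""
    p, m, nu = y
    rho = rho_of_p(p)
    f = 1.0 - 2.0 * G * m / (c**2 * r)
    dp = -G * (p + rho * c**2) * (m + 4.0 * np.pi * r**3 * p / c**2) / (c**2 * r**2 * f)
    dm = 4.0 * np.pi * r**2 * rho
    dnu = -dp / (p + rho * c**2)  # hydrostatic form of Eq. (37)
    return [dp, dm, dnu]

def structure(rho_c, K=5.38e9, gamma=5.0 / 3.0):
    """Integrate outward from a small seed radius until the pressure vanishes."""
    p_c = K * rho_c**gamma
    surface = lambda r, y: y[0] - 1e-8 * p_c  # terminate near zero pressure
    surface.terminal, surface.direction = True, -1
    sol = solve_ivp(tov_rhs, [1.0, 5e7], [p_c, 0.0, 0.0],
                    events=surface, max_step=1e4, rtol=1e-8)
    return sol.t[-1], sol.y[1, -1]  # R [cm], M [g]

R, M = structure(rho_c=1.0e15)  # illustrative central density [g/cc]
print(f"R = {R / 1e5:.2f} km, M = {M / 1.989e33:.3f} M_sun")
```

Repeating the integration over a range of central densities yields the M-R relation discussed in Section 4.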
The metric used is given by

\[ds^{2}=-e^{2\nu}c^{2}dt^{2}+e^{2\lambda}dr^{2}+r^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2}) \tag{38}\]

If \(\Delta r\) is the radial displacement, we define

\[\xi=\frac{\Delta r}{r} \tag{39}\]

\[\zeta=r^{2}e^{-\nu}\xi \tag{40}\]

and, writing the time dependence of the harmonic oscillations as \(\exp(i\sigma t)\), one gets the equation governing radial adiabatic oscillations (Chandrasekhar 1964; Datta et al. 1998; Anand et al. 2000)

\[F\frac{d^{2}\zeta}{dr^{2}}+G\frac{d\zeta}{dr}+H\zeta=\sigma^{2}\zeta \tag{41}\]

where

\[F=-\frac{e^{2\nu-2\lambda}(\Gamma p)}{p+\rho c^{2}} \tag{42}\]

\[G=-\frac{e^{2\nu-2\lambda}}{p+\rho c^{2}}\left[(\Gamma p)\left(\frac{d\lambda}{dr}+3\frac{d\nu}{dr}\right)+\frac{d}{dr}(\Gamma p)-\frac{2}{r}(\Gamma p)\right] \tag{43}\]

\[H=\frac{e^{2\nu-2\lambda}}{p+\rho c^{2}}\left[\frac{4}{r}\frac{dp}{dr}+\frac{8\pi G}{c^{4}}e^{2\lambda}p(p+\rho c^{2})-\frac{1}{p+\rho c^{2}}\left(\frac{dp}{dr}\right)^{2}\right] \tag{44}\]

\[\lambda=-\ln\left[1-\frac{2Gm}{rc^{2}}\right]^{\frac{1}{2}} \tag{45}\]

In the above equations \(\Gamma\) is the adiabatic index, given by

\[\Gamma=\frac{p+\rho c^{2}}{c^{2}p}\frac{dp}{d\rho} \tag{46}\]

The boundary conditions for solving equation (41) are

\[\zeta(r=0)=0\]
\[\delta p(r=R)=0 \tag{47}\]

The expression for \(\delta p\), as given by Chandrasekhar (1964), is

\[\delta p(r)=-\frac{dp}{dr}\frac{e^{\nu}\zeta}{r^{2}}-\frac{\Gamma pe^{\nu}}{r^{2}}\frac{d\zeta}{dr} \tag{48}\]

All these equations are totally model independent and are in fact the same whether we are considering neutron stars, quark stars or any other dense stellar object. The nature of the object being considered and the particular model affect the structure of the star and the frequency of radial pulsations only through the EOS. Notice that in Chandrasekhar (1964) and Datta et al. (1998) the pulsation equations were written in terms of \(\xi\) instead of \(\zeta\). Equation (41), along with the boundary conditions, represents a Sturm-Liouville eigenvalue problem for \(\sigma^{2}\). From the theory of such equations we have the well known results: (i) the eigenvalues \(\sigma^{2}\) are all real, and (ii) they form an infinite discrete sequence \(\sigma_{0}^{2}<\sigma_{1}^{2}<\sigma_{2}^{2}<\cdots\). An important consequence of (ii) is that if the fundamental radial mode of a star is stable (\(\sigma_{0}^{2}>0\)), then all the radial modes are stable.

## 4 Results and Discussions

To study the structure and radial oscillations of neutron stars in the presence of a strong magnetic field we have employed the BPS model, with its generalization in a magnetic field given by Lai and Shapiro (1991), below the neutron drip, and the RMF theory above it. We have used the values of the various couplings fixed by Ellis et al. (1991), which reproduce the known values of the nuclear matter parameters:

\[\left(\frac{g_{\sigma}}{m_{\sigma}}\right) = 0.01525\,{\rm MeV}^{-1}\]
\[\left(\frac{g_{\omega}}{m_{\omega}}\right) = 0.011\,{\rm MeV}^{-1}\]
\[\left(\frac{g_{\rho}}{m_{\rho}}\right) = 0.011\,{\rm MeV}^{-1}\]
\[b = 0.003748\]
\[c = 0.01328\]

As explained in sections 2.1 and 2.2, for the RMF theory the equations are solved in a self-consistent manner for the effective masses and chemical potentials, and hence the EOS. Below the neutron drip, the EOS is obtained by the minimization of the Gibbs free energy as a function of \(A\) and \(Z\). For this purpose we have employed the 14 nuclei listed in the table. The problem is solved separately for \(B=0\) and \(B\neq 0\).
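The minimization of Eqs. (33)-(34) over the tabulated nuclei lends itself to a direct numerical sketch; natural units (MeV, with \(e^{2}=\alpha\)) and the scanned values of \(\mu_{e}\) and \(eB\) are illustrative assumptions, and only a subset of the table is included.

```python
import numpy as np

ALPHA = 1.0 / 137.036  # e^2 in natural units (hbar = c = 1)
ME = 0.511             # electron mass [MeV]

# A subset of Table 2: (name, A, Z, W_N [MeV])
NUCLEI = [("Fe56", 56, 26, 5.2103e4), ("Ni62", 62, 28, 5.7686e4),
          ("Ni64", 64, 28, 5.9549e4), ("Ni66", 66, 28, 6.1413e4),
          ("Kr86", 86, 36, 8.0025e4), ("Ni78", 78, 28, 7.2621e4),
          ("Kr118", 118, 36, 1.09985e5)]

def n_e_landau(mu_e, eB):
    """Electron density of Eq. (31); mu_e [MeV], eB [MeV^2], result [MeV^3]."""
    total, nu = 0.0, 0
    while True:
        pf2 = mu_e**2 - ME**2 - 2.0 * nu * eB
        if pf2 <= 0.0:
            return eB * total / (2.0 * np.pi**2)
        total += (1.0 if nu == 0 else 2.0) * np.sqrt(pf2)
        nu += 1

def gibbs_per_nucleon(A, Z, W_N, mu_e, eB):
    """Eq. (33), with the bcc lattice energy eps_L of Eq. (30)."""
    ne = n_e_landau(mu_e, eB)
    eps_L = -1.444 * Z**(2.0 / 3.0) * ALPHA * ne**(4.0 / 3.0)
    return W_N / A + (Z / A) * (mu_e - ME) + 4.0 * eps_L / (3.0 * A * ne)

for mu_e in (2.0, 10.0, 25.0):  # scan the electron chemical potential [MeV]
    best = min(NUCLEI, key=lambda nuc: gibbs_per_nucleon(*nuc[1:], mu_e, 1.0e2))
    g = gibbs_per_nucleon(*best[1:], mu_e, 1.0e2)
    print(f"mu_e = {mu_e:5.1f} MeV -> {best[0]}, g_min = {g:8.2f} MeV")
```

Once \(g_{min}\) climbs past \(m_{n}c^{2}\simeq 939.6\) MeV, condition (34) marks the neutron drip.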
For each \(B\) this gives the EOS in the form \(P(n_{B})\) and \(\rho(n_{B})\). The structure of the neutron star is then obtained by integrating the Oppenheimer-Volkoff equations, which gives the profiles of \(m\), \(p\) and \(\nu\) as functions of \(r\) for each star. One more quantity that is required is \(\Gamma\), which is calculated directly from the EOS at all densities by using a quadratic difference formula for the derivative \(\frac{dp}{d(\rho c^{2})}\). Along with the M-R relationship, one also obtains the gravitational redshift

\[Z=\left[1-\frac{2Gm}{c^{2}r}\right]^{-\frac{1}{2}}-1 \tag{50}\]

which can in principle be observed experimentally. The procedure to obtain the eigenfrequencies is simple. We guess a value of \(\sigma\) and integrate the equation outward from the centre up to the surface. The guessed value of \(\sigma\) is varied until the boundary condition

\[\delta p(r)=0\ {\rm at}\ r=R\]

is satisfied. In figure 1 we plot the mass in solar mass units vs the radius in km for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\) (\(1\,MeV^{2}=1.69\times 10^{14}\) G), represented by the curves A, B, C and D respectively. It is worthwhile to note that the magnetised neutron stars support higher masses. For very high magnetic fields the stars become relatively more compact. In figure 2 we present mass vs central energy density, and in figure 3 we plot gravitational redshift vs mass. In figure 4 the time period of the fundamental mode is plotted against the gravitational redshift for the same magnetic fields as in figure 1. It is interesting to note that for the observed neutron star mass (1.4 \(M_{\odot}\)) the magnetic field has practically no influence on radial stability. A similar trend is seen for the first excited mode, as shown in figure 5.

## References

Anand, J. D., Chandrika Devi, N., Gupta, V. K., & Singh, S. 2000, ApJ, 538, 870
Baym, G., Pethick, C., & Sutherland, P. 1971, ApJ, 170, 299
Cameron, A. G. W. 1965, Nature, 205, 787
Chandrasekhar, S. 1964a, Phys. Rev. Lett., 12, 114
------- 1964b, ApJ, 140, 417
Cutler, C., Lindblom, L., & Splinter, R. J. 1990, ApJ, 363, 603
Datta, B., Hasan, S. S., Sahu, P. K., & Prasanna, A. R. 1998, Int. J. Mod. Phys., D7, 49
Ellis, J., Kapusta, J. I., & Olive, K. A. 1991, Nucl. Phys. B, 348, 345
Hong, P. L. 1998, Phys. Lett. B, 445, 36
Kouveliotou, C., et al. 1998, Nature, 393, 235
Lai, D., & Shapiro, S. L. 1991, ApJ, 383, 745
Wapstra, A. H., & Bos, K. 1976, Atomic Data Nucl. Data Tables, 17, 474
-------- 1977, Atomic Data Nucl. Data Tables, 19, 175

\begin{table}
\begin{tabular}{|c|c|}
\hline Element & BPS MASS-ENERGY \\
 & (in units of \(10^{4}\) MeV) \\
\hline \hline
\(Fe_{26}^{56}\) & 5.2103 \\
\(Ni_{28}^{62}\) & 5.7686 \\
\(Ni_{28}^{64}\) & 5.9549 \\
\(Ni_{28}^{66}\) & 6.1413 \\
\(Kr_{36}^{86}\) & 8.0025 \\
\(Se_{34}^{84}\) & 7.8170 \\
\(Ge_{32}^{82}\) & 7.6316 \\
\(Zn_{30}^{80}\) & 7.4466 \\
\(Ni_{28}^{78}\) & 7.2621 \\
\(Ru_{44}^{126}\) & 11.7337 \\
\(Mo_{42}^{124}\) & 11.5495 \\
\(Zr_{40}^{122}\) & 11.3655 \\
\(Sr_{38}^{120}\) & 11.1818 \\
\(Kr_{36}^{118}\) & 10.9985 \\
\hline
\end{tabular}
\end{table}
Table 2: BPS Equilibrium Nuclei Below Neutron Drip

Figure 1: Plot of mass in solar mass units vs radius in km for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\), represented by the curves A, B, C and D respectively.
Figure 2: Plot of mass in solar mass units vs central energy density for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\), represented by the curves A, B, C and D respectively.

Figure 3: Plot of gravitational redshift (Z) vs mass in solar mass units for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\), represented by the curves A, B, C and D respectively.

Figure 4: Plot of the time period \(\tau\) of the fundamental mode vs gravitational redshift (Z) for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\), represented by the curves A, B, C and D respectively.

Figure 5: Plot of the time period \(\tau\) of the n=1 mode vs gravitational redshift (Z) for magnetic fields 0, \(1\times 10^{4}\), \(5\times 10^{4}\) and \(1\times 10^{5}\ MeV^{2}\), represented by the curves A, B, C and D respectively.
The eigenfrequencies of radial pulsations of neutron stars are calculated in a strong magnetic field. At low densities we use the magnetic BPS equation of state (EOS), similar to that obtained by Lai and Shapiro, while at high densities the EOS obtained from the relativistic nuclear mean field theory is taken and extended to include a strong magnetic field. It is found that magnetised neutron stars support a higher maximum mass, whereas the effect of the magnetic field on the radial stability of observed neutron star masses is minimal.
# Wilsonian effective action for SU(2) Yang-Mills theory with Cho-Faddeev-Niemi-Shabanov decomposition

Holger Gies

_Institut fur theoretische Physik, Universitat Tubingen, D-72076 Tubingen, Germany_ _and_ _Theory Division, CERN, CH-1211 Geneva, Switzerland_ _E-mail: [email protected]_

Emmy Noether fellow

## 1 Introduction

The fact that quarks and gluons are not observed as asymptotic states in our world indicates that a description in terms of these fields is not the most appropriate language for discussing low-energy QCD. On the other hand, there seems to be little predictive virtue in describing the low-energy domain only by observable quantities, such as mesons and baryons. A purposive procedure can be the identification of those (not necessarily observable) degrees of freedom of the system that allow for a "simple" description of the observable states. The required "simplicity" can be measured in terms of the simplicity of the action that governs those degrees of freedom. Clearly, a clever guess of such degrees of freedom is halfway to the solution of the theory; the remaining problem is to prove that these degrees of freedom truly arise from the fundamental theory by integrating out the high-energy modes. For the pure Yang-Mills (YM) sector of QCD, such a guess has recently been made by Faddeev and Niemi [1], inspired by the work of Cho [2]. For the gauge group SU(2), they decomposed the (implicitly gauge-fixed) gauge potential \({\bf A}_{\mu}\) into an "abelian" component \(C_{\mu}\), a unit color vector \({\bf n}\) and a complex scalar field \(\varphi\); here, \(C_{\mu}\) is the local projection of \({\bf A}_{\mu}\) onto some direction in color space defined by the space-dependent \({\bf n}\). Faddeev and Niemi conjectured that the important low-energy dynamics of SU(2) YM theory1 is determined by the \({\bf n}\) field; its effective action of nonlinear sigma-model type, the Skyrme-Faddeev model, should then arise from integrating out the remaining degrees of freedom \(C_{\mu}\), \(\varphi\), \(\dots\):

Footnote 1: Different generalizations of the gauge field decomposition for higher gauge groups can be found in [3], [4] and [9].

\[\Gamma^{\rm FN}_{\rm eff}=\int d^{4}x\,\left[m^{2}(\partial_{\mu}{\bf n})^{2}+\frac{1}{g^{2}}({\bf n}\cdot\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}\right]. \tag{1}\]

The additional mass scale \(m\) is expected to be generated by the integration process as well; first hints of this mechanism have been observed in a one-loop integration over a reduced set of variables [5, 6]. The associated knotlike solitonic excitations of the Skyrme-Faddeev model are supposed to be identified with glue balls (which are directly observable at least on the lattice).2

Footnote 2: In a very recent paper [7], Faddeev and Niemi generalized their decomposition in order to obtain a manifest duality between the here-considered "magnetic" and additional "electric" variables, involving an abelian scalar multiplet with two complex scalars. This electric sector will not be considered in the present work.
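To make the structure of Eq. (1) concrete, the following minimal sketch evaluates its static (three-dimensional) energy for a discretized unit color vector field; the couplings, grid and test configuration are illustrative assumptions and not part of the original proposal.

```python
import numpy as np

def skyrme_faddeev_energy(n, dx, m2=1.0, g2=1.0):
    """Static energy of Eq. (1) for a unit color vector n of shape (3, N, N, N)."""
    grads = [np.gradient(n[a], dx, axis=(0, 1, 2)) for a in range(3)]
    d = [[grads[a][mu] for a in range(3)] for mu in range(3)]  # d[mu][a] = d_mu n^a
    kinetic = sum(d[mu][a]**2 for mu in range(3) for a in range(3))
    quartic = np.zeros_like(kinetic)
    for mu in range(3):
        for nu in range(mu + 1, 3):
            cross = np.cross(np.stack(d[mu], -1), np.stack(d[nu], -1))
            h = np.einsum('...a,...a->...', np.moveaxis(n, 0, -1), cross)
            quartic += 2.0 * h**2  # (mu, nu) and (nu, mu) contribute equally
    return np.sum(m2 * kinetic + quartic / g2) * dx**3

# Illustrative test configuration: a localized twist of the color vector
N, dx = 24, 0.5
x = (np.arange(N) - N / 2) * dx
X, Y, Z = np.meshgrid(x, x, x, indexing='ij')
theta = np.pi * np.exp(-(X**2 + Y**2 + Z**2) / 20.0)
rho = np.hypot(X, Y).clip(1e-9)
n = np.stack([np.sin(theta) * X / rho, np.sin(theta) * Y / rho, np.cos(theta)])
n /= np.linalg.norm(n, axis=0)  # enforce n.n = 1
print(skyrme_faddeev_energy(n, dx))
```

Minimizing such a discretized energy over knotted configurations is how the solitonic spectrum of the Skyrme-Faddeev model is studied numerically, cf. [12, 14].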
The presence of gauge symmetry in YM theory complicates this ambitious conjecture in two ways: first, in order to formulate a quantum theory, the decomposition of \({\bf A}_{\mu}\) also has to include the overabundant gauge degrees of freedom; and secondly, the gauge then has to be fixed in a prescribed way, not only to be able to perform functional integration, but also to arrive nevertheless at a unique \({\bf n}\) field.3

Footnote 3: A different approach was put forward in [8], where the \({\bf n}\) field was identified by constructing an unconstrained version of SU(2) Yang-Mills theory in a Hamiltonian context.

The first problem was solved by Shabanov [9, 10], who established a one-to-one correspondence between the unfixed gauge field \({\bf A}_{\mu}\) and its decomposition, and the quantum theory was formulated; his results are briefly sketched in Sect. 2 and shall serve as the starting point of our investigations. The second problem of gauge fixing implies that a successful realization of the ideas of Faddeev and Niemi will only be meaningful in a certain gauge. In this (a priori unknown) gauge, the important low-energy degrees of freedom might in fact be determined by the \({\bf n}\) field and a simple action, whereas in a different gauge, these degrees of freedom may be hidden in a highly complicated structure involving the \({\bf n}\) and other fields. The present paper is dedicated to a calculation of the one-loop Wilsonian effective action for SU(2) Yang-Mills theory in terms of the gauge field decomposition of Shabanov. Our intention is to study the renormalization group flow of the mass scale parameter of Eq. (1), the gauge coupling and further marginal couplings. In view of the second problem mentioned above, our results and their interpretation are strictly tied to the particular gauge we shall choose. We face this problem by fixing the gauge in such a way that Lorentz invariance and global color transformations remain as residual symmetries; these are the symmetries of the Skyrme-Faddeev model and must mandatorily be respected. The Wilsonian effective action is characterized by the fact that it governs the dynamics of the low-energy modes below a certain cutoff \(k\); it incorporates the interactions that are induced by high-energy fluctuations with momenta between \(k\) and the ultraviolet (UV) cutoff \(\Lambda\) which have been integrated out. Following the Faddeev-Niemi conjecture, we only retain the \({\bf n}\) field as the low-energy degree of freedom. Actually, we integrate over the high-energy modes in two different ways: first, we integrate out the \(k<p<\Lambda\) fluctuations of all fields _except for_ the \({\bf n}\) field, which is left untouched (Sec. 3). Secondly, we integrate all fields _including_ the \({\bf n}\) field over the same momentum shell (Sec. 4). In this way, we can study the effect of the \({\bf n}\) field fluctuations on the flow of the mass scale and the couplings in detail. The results for both calculations are similar: the mass scale \(m\) appearing in Eq. (1) is indeed generated by the renormalization group flow, and the gauge coupling is asymptotically free. As far as the simplicity of the conjectured effective action Eq. (1) is concerned, our results are a bit disappointing: as discussed in Sec. 5, further marginal terms (not displayed in Eq. (1)) are of the same order as the displayed one and therefore have to be included in Eq. (1).
Keeping only those terms that involve single derivatives acting on \({\bf n}\) results in an action without stable solitons; nevertheless, stability is in fact ensured owing to the presence of higher-derivative terms. The disadvantage is that these terms spoil the desired simplicity of the low-energy effective theory. Of course, our perturbative results represent only a first glance at the true infrared behavior of the system and are far from providing qualitatively confirmed results, not to mention quantitative predictions. To be precise, the one-loop calculation investigates only the form of the renormalization group trajectories of the couplings in the vicinity of the perturbative Gaussian fixed point. Nevertheless, various extrapolations of the perturbative trajectories can elucidate the question as to whether the Faddeev-Niemi conjecture is realizable or not.

## 2 Quantum Yang-Mills theory in Cho-Faddeev-Niemi-Shabanov variables

In decomposing the Yang-Mills gauge connection, we follow [2, 9, 10]. Let \({\bf A}_{\mu}\) be an SU(2) connection where the color degrees of freedom are represented in vector notation. We parametrize \({\bf A}_{\mu}\) as

\[{\bf A}_{\mu}={\bf n}\,C_{\mu}+(\partial_{\mu}{\bf n})\times{\bf n}+{\bf W}_{\mu}, \tag{2}\]

where the cross product is defined via the SU(2) structure constants. \(C_{\mu}\) is an "abelian" connection, whereas \({\bf n}\) denotes a unit vector in color space, \({\bf n}\cdot{\bf n}=1\). \({\bf W}_{\mu}\) shall be orthogonal to \({\bf n}\) in color space, obeying \({\bf W}_{\mu}\cdot{\bf n}=0\), so that \(C_{\mu}={\bf n}\cdot{\bf A}_{\mu}\). For a given \({\bf n}\), \(C_{\mu}\) and \({\bf W}_{\mu}\), the connection \({\bf A}_{\mu}\) is uniquely determined by Eq. (2). In the opposite direction, there is still some arbitrariness: for a given \({\bf A}_{\mu}\), \({\bf n}\) can generally be chosen at will, but then \(C_{\mu}\) and \({\bf W}_{\mu}\) are fixed (e.g., \({\bf W}_{\mu}={\bf n}\times D_{\mu}({\bf A}){\bf n}\), where \(D_{\mu}\) denotes the covariant derivative). While the LHS of Eq. (2) describes \(3_{\rm color}\times 4_{\rm Lorentz}=12\) off-shell and gauge-unfixed degrees of freedom, the RHS up to now allows for \((C_{\mu}\,{:})4_{\rm Lorentz}+({\bf n}\,{:})2_{\rm color}+({\bf W}_{\mu}\,{:})3_{\rm color}\times 4_{\rm Lorentz}-4_{{\bf n}\cdot{\bf W}_{\mu}=0}=14\) degrees of freedom. Two degrees of freedom on the RHS remain to be fixed. For example, by fixing \({\bf n}\) to point along a certain direction and imposing gauge conditions on \({\bf W}_{\mu}\), we arrive at the class of abelian gauges which are known to induce monopole degrees of freedom in \(C_{\mu}\). In order to avoid these topological defects, we let \({\bf n}\) vary in spacetime and impose a general condition on \(C_{\mu}\), \({\bf n}\) and \({\bf W}_{\mu}\),

\[\mathbf{\chi}({\bf n},C_{\mu},{\bf W}_{\mu})=0,\qquad{\rm with}\quad\mathbf{\chi}\cdot{\bf n}=0, \tag{3}\]

which fixes the redundant two degrees of freedom on the RHS of Eq. (2). Moreover, Eq.
(3) also determines how \\({\\bf n}\\), \\(C_{\\mu}\\) and \\({\\bf W}_{\\mu}\\) transform under gauge transformations of \\({\\bf A}_{\\mu}\\): by demanding that \\(\\delta\\mathbf{\\chi}({\\bf n},C_{\\mu}({\\bf A}),{\\bf W}_{\\mu}({\\bf A}))=0\\) (and \\(\\delta(\\mathbf{\\chi}\\cdot{\\bf n})=0\\)), the transformation \\(\\delta{\\bf n}\\) of \\({\\bf n}\\) is uniquely determined, from which \\(\\delta C_{\\mu}\\) and \\(\\delta{\\bf W}_{\\mu}\\) are also obtainable. The thus established one-to-one correspondence between \\({\\bf A}_{\\mu}\\) and its decomposition (2) allows us to rewrite the generating functional of YM theory in terms of a functional integral over the new fields [9, 10]: \\[Z=\\int{\\cal D}{\\bf n}{\\cal D}C{\\cal D}{\\bf W}\\,\\delta(\\mathbf{\\chi}) \\,\\Delta_{\\rm S}\\,\\Delta_{\\rm FP}\\,{\\rm e}^{-S_{\\rm YM}-S_{\\rm sf}}. \\tag{4}\\] Beyond the usual Faddeev-Popov determinant \\(\\Delta_{\\rm FP}\\), the YM action \\(S_{\\rm YM}\\) and the gauge fixing action \\(S_{\\rm gf}\\), we find one further determinant introduced by Shabanov, \\(\\Delta_{\\rm S}\\); this determinant accompanies the \\(\\delta\\) functional which enforces the constraint \\(\\mathbf{\\chi}=0\\), in complete analogy to the Faddeev-Popov procedure: \\[\\Delta_{\\rm S}:={\\rm det}\\,\\left(\\left.\\frac{\\delta\\mathbf{\\chi}}{ \\delta{\\bf n}}\\right|_{\\mathbf{\\chi}=0}\\right). \\tag{5}\\] All objects in the integrand of Eq. (4) are understood to be functions of the 14 integration variables \\({\\bf n}\\), \\(C_{\\mu}\\) and \\({\\bf W}_{\\mu}\\). By construction, the generating functional (4) is invariant under different choices of \\(\\chi\\) for the same reason that it is invariant under different choices of the gauge - this is controlled by the Faddeev-Popov procedure. Nevertheless, the choice of \\(\\chi\\) crucially belongs to the definition of the decomposition (2) and of the conjectured low-energy degrees of freedom; in other words, even if there is one particular \\(\\chi\\) that leads to Eq. (1) as the true low-energy effective action after integrating out \\(C_{\\mu}\\) and \\({\\bf W}_{\\mu}\\), other choices of \\(\\chi\\) will not lead to the same result, because the low-energy degrees of freedom then are differently distributed over \\({\\bf n}\\), \\(C_{\\mu}\\) and \\({\\bf W}_{\\mu}\\). In the present work, \\(\\chi\\) is chosen in such a way that \\({\\bf n}\\) transforms homogeneously under gauge transformations, i.e., \\({\\bf n}\\) is orthogonally rotated in color space [2]: \\[0 = \\mathbf{\\chi}:=\\partial_{\\mu}{\\bf W}_{\\mu}+C_{\\mu}{\\bf n }\\times{\\bf W}_{\\mu}+{\\bf n}({\\bf W}_{\\mu}\\cdot\\partial_{\\mu}{\\bf n}), \\tag{6}\\] \\[\\Rightarrow \\delta{\\bf n}={\\bf n}\\times\\mathbf{\\varphi},\\quad{\\rm under }\\ \\delta{\\bf A}_{\\mu}=D_{\\mu}({\\bf A})\\mathbf{\\varphi}=\\partial_{\\mu} \\mathbf{\\varphi}+{\\bf A}_{\\mu}\\times\\mathbf{\\varphi}.\\]Incidentally, the gauge transformation properties of \\(C_{\\mu}\\) and \\({\\bf W}_{\\mu}\\) also become very simple with the choice (6): \\({\\bf W}_{\\mu}\\) also transforms homogeneously, and \\(\\delta C_{\\mu}={\\bf n}\\cdot\\partial_{\\mu}\\mathbf{\\varphi}\\). Finally, the choice of the gauge-fixing condition must also be viewed as being part of the definition of the decomposition. Not only does the functional form of \\(\\Delta_{\\rm FP}\\) and \\(S_{\\rm gf}\\) depend on this choice, but the discrimination of high- and low-momentum modes is also determined by the gauge fixing. 
In fact, this gauge dependence of the mode momenta usually is the main obstacle against setting up a Wilsonian renormalization group study. But in the present context, it belongs to the conjecture that the particular gauge that we shall choose singles out those low-momentum modes which finally provide for a simple description of low-energy QCD; in a different gauge, we would encounter different low-momentum modes, but we also would not expect to find the same simple description. In this work, we choose the covariant gauge condition \(\partial_{\mu}{\bf A}_{\mu}=0\). This automatically ensures covariance of the resulting effective action and, moreover, allows for the residual symmetry of global gauge transformations, \(\mathbf{\varphi}={\rm const}\). Together with the choice (6), this residual symmetry coincides with the desired global color symmetry of the Skyrme-Faddeev model (1). This means that the demand for color and Lorentz symmetry of the action (1) is satisfied exactly by a covariant gauge and condition (6).

## 3 One-loop effective action without \({\bf n}\) fluctuations

Our aim is the construction of the one-loop Wilsonian effective action for the \({\bf n}\) field by integrating out the \(C\) and \({\bf W}\) fields over a momentum shell between the UV cutoff \(\Lambda\) and an infrared cutoff \(k<\Lambda\). In general, this will induce nonlinear and nonlocal self-interactions of the \({\bf n}\) field; since we are looking for an action of the type (1), we represent these interactions in a derivative expansion and neglect higher-derivative terms of order \({\cal O}(\partial^{2}{\bf n}\partial^{2}{\bf n})\) (later, we shall question this approach). Furthermore, we do not integrate out \({\bf n}\) field fluctuations in this section (see Sect. 4) and disregard any induced \(C\) or \({\bf W}\) interactions below the infrared cutoff \(k\). From a technical viewpoint, the one-loop approximation of the desired effective action \(\Gamma_{k}[{\bf n}]\) is obtained by a Gaussian integration of the quadratic \(C\) and \({\bf W}\) terms in Eq. (4), neglecting higher-order terms of the action:

\[{\rm e}^{-\hat{\Gamma}_{k}[{\bf n}]} = {\rm e}^{-S_{\rm cl}[{\bf n}]}\int_{k}{\cal D}C{\cal D}{\bf W}\,\Delta_{\rm S}[{\bf n}]\,\Delta_{\rm FP}[{\bf n}]\,\delta(\mathbf{\chi})\,{\rm e}^{-\frac{1}{g^{2}}\int\left\{C_{\mu}\frac{1}{2}M_{\mu\nu}^{C}C_{\nu}+{\bf W}_{\mu}\frac{1}{2}M_{\mu\nu}^{\bf W}{\bf W}_{\nu}+C_{\mu}{\bf Q}_{\mu\nu}^{C}\cdot{\bf W}_{\nu}+C_{\nu}K_{\nu}^{C}+{\bf W}_{\mu}\cdot{\bf K}_{\mu}^{\bf W}\right\}}, \tag{7}\]

where the hat on \(\hat{\Gamma}_{k}[{\bf n}]\) indicates that the \({\bf n}\) field fluctuations have not been taken into account. Furthermore, any \(C\) or \({\bf W}\) dependence of \(\Delta_{\rm S}\) and \(\Delta_{\rm FP}\) has been neglected to one-loop order; the various differential operators and currents, which all depend on \({\bf n}\) (and the gauge parameter \(\alpha\)), are defined in Appendix A. The classical action of \({\bf n}\) including gauge fixing terms is given by:

\[S_{\rm cl}[{\bf n}]:=\int d^{4}x\,\left(\frac{1}{4g^{2}}(\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}+\frac{1}{2\alpha g^{2}}(\partial^{2}{\bf n}\times{\bf n})^{2}\right). \tag{8}\]

We treat the \(\delta\) functional in Eq.
(7) in its Fourier representation,

\[\delta(\mathbf{\chi})\to\int{\cal D}\mathbf{\phi}\,{\rm e}^{-{\rm i}\int\mathbf{\phi}\cdot\partial_{\mu}{\bf W}_{\mu}+\mathbf{\phi}\cdot C_{\mu}{\bf n}\times{\bf W}_{\mu}+(\mathbf{\phi}\cdot{\bf n})(\partial_{\mu}{\bf n}\cdot{\bf W}_{\mu})}, \tag{9}\]

where the second term in the exponent, the triple vertex, can actually be neglected, because it leads only to nonlocal terms (cf. later) or terms of higher order in derivatives. Inserting Eq. (9) into Eq. (7), we end up with three functional integrals over \(C\), \({\bf W}\) and \(\mathbf{\phi}\), which can successively be performed, leading to three determinants,

\[{\rm e}^{-\hat{\Gamma}_{k}[{\bf n}]}\to{\rm e}^{-S_{\rm cl}[{\bf n}]}\Delta_{\rm S}[{\bf n}]\,\Delta_{\rm FP}[{\bf n}]\left(\det M^{C}\right)^{-1/2}\left(\det\overline{M}^{\bf W}\right)^{-1/2}\left(\det-\widetilde{Q}^{\mathbf{\phi}}_{\mu}(\overline{M}^{\bf W})^{-1}_{\mu\nu}Q^{\mathbf{\phi}}_{\nu}\right)^{-1/2}, \tag{10}\]

where we have omitted several nonlocal terms that arise from the completion of the square in the exponent during the Gaussian integration. In Appendix B, we argue that these nonlocal terms are unimportant in the present Wilsonian investigation. Again, details about the various operators in Eq. (10) are given in App. A. The determinants are functionals of \({\bf n}\) only and have to be evaluated over the space of test functions with momenta between \(k\) and \(\Lambda\). The determinants depend also on the gauge parameter \(\alpha\). Only for the Landau gauge \(\alpha=0\) is the gauge-fixing \(\delta\) functional implemented exactly; in fact, \(\alpha=0\) appears to be a fixed point of the renormalization group flow [11]. But this in turn ensures that the choice of \(\alpha=\alpha(k)\equiv\alpha_{k}\) at the cutoff scale \(k\to\Lambda\) is to some extent arbitrary, since \(\alpha_{k}\) flows to zero anyway as \(k\) is lowered. This allows us to conveniently choose \(\alpha_{k=\Lambda}=1\) at the cutoff scale and evaluate the determinants with this parameter choice. As mentioned above, we evaluate the determinants in a derivative expansion based on the assumption that the low-order derivatives of \({\bf n}\) represent the essential degrees of freedom in the low-energy domain. There are various techniques for the calculation at our disposal; it turns out that a direct momentum expansion of the operators is most efficient.4 We shall demonstrate this method by means of the third determinant of Eq. (10), the "\(C\) determinant"; the key observation is that derivatives acting on the space of test functions create momenta of the order of \(p\) with \(k<p<\Lambda\), whereas derivatives of the \({\bf n}\) field are assumed to obey \(|\partial{\bf n}|\ll k\) in agreement with the Faddeev-Niemi conjecture. This suggests an expansion of the form

Footnote 4: As cross-checks, we also employed a propertime representation for the operators which we decomposed with a heat-kernel expansion as well as with a multiple use of the Baker-Campbell-Hausdorff formula.
\\[\\ln\\!\\left(\\det M^{C}\\right)^{1/2} = -\\frac{1}{2}{\\rm Tr}\\,\\ln\\!\\left(-\\partial^{2}\\mbox{$1$}_{\\rm L} +\\partial{\\bf n}\\cdot\\partial{\\bf n}\\right)\\] \\[= -\\frac{1}{2}{\\rm Tr}\\,\\left[\\ln(-\\partial^{2}\\mbox{$1$}_{\\rm L}) +\\ln\\left(\\mbox{$1$}_{\\rm L}+\\frac{\\partial{\\bf n}\\cdot\\partial{\\bf n}}{- \\partial^{2}}\\right)\\right]\\] \\[= -\\frac{1}{2}{\\rm Tr}\\,\\ln(-\\partial^{2}\\mbox{$1$}_{\\rm L})-\\frac {1}{2}{\\rm Tr}\\,\\frac{\\partial{\\bf n}\\cdot\\partial{\\bf n}}{-\\partial^{2}}+ \\frac{1}{4}{\\rm Tr}\\,\\left(\\frac{\\partial{\\bf n}\\cdot\\partial{\\bf n}}{- \\partial^{2}}\\right)^{2}+{\\cal O}((\\partial{\\bf n})^{6}),\\]where we suppressed Lorentz (L) indices. Here, we neglected higher-derivative terms of \\({\\bf n}\\), e.g., \\(\\partial^{2}{\\bf n}\\), which is in the spirit of the Faddeev-Niemi conjecture; of course, this has to be checked later on. Employing the integral formulas given in App. C, we finally obtain for the \\(C\\) determinant \\[\\ln\\bigl{(}\\det M^{C}\\bigr{)}^{1/2} \\simeq -\\frac{1}{32\\pi^{2}}(\\Lambda^{2}-k^{2})\\int_{x}(\\partial_{\\mu}{ \\bf n})^{2} \\tag{12}\\] \\[-\\frac{1}{32\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}\\bigl{(}\\partial _{\\mu}{\\bf n}\\times\\partial_{\ u}{\\bf n}\\bigr{)}^{2}+\\frac{1}{32\\pi^{2}}\\ln \\frac{\\Lambda}{k}\\int_{x}(\\partial_{\\mu}{\\bf n})^{4},\\] where \\(\\int_{x}\\equiv\\int d^{4}x\\). The first term contributes to the desired mass term of Eq. (1), whereas the second and third renormalize the classical action (8). The remaining four determinants of Eq. (10) have to be evaluated in the same way. The calculation is straightforward though extensive. Here, we shall cite only the final results: \\[\\ln\\Delta_{\\rm FP} = -\\frac{(\\Lambda^{2}\\!-\\!k^{2})}{64\\pi^{2}}\\int_{x}(\\partial_{\\mu }{\\bf n})^{2}+\\frac{1}{48\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}\\bigl{(}\\partial_ {\\mu}{\\bf n}\\!\\times\\!\\partial_{\ u}{\\bf n}\\bigr{)}^{2}-\\frac{1}{32\\pi^{2}}\\ln \\frac{\\Lambda}{k}\\int_{x}\\bigl{(}\\partial_{\\mu}{\\bf n}\\!)^{4},\\] \\[\\ln(\\det\\overline{M}^{\\bf W})^{-1/2} = -\\frac{5(\\Lambda^{2}\\!-\\!k^{2})}{64\\pi^{2}}\\int_{x}(\\partial_{ \\mu}{\\bf n})^{2}\\!-\\frac{5}{24\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}\\bigl{(} \\partial_{\\mu}{\\bf n}\\!\\times\\!\\partial_{\ u}{\\bf n}\\bigr{)}^{2}\\!+\\frac{35}{ 128\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}\\bigl{(}\\partial_{\\mu}{\\bf n}\\!)^{4},\\] \\[\\ln(\\det-\\widetilde{Q}^{\\phi}\\overline{M}^{\\bf W-1}Q^{\\phi})^{-1/ 2} =\\frac{3(\\Lambda^{2}\\!-\\!k^{2})}{128\\pi^{2}}\\int_{x}(\\partial_{\\mu}{ \\bf n})^{2}\\!+\\frac{49}{192\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}\\bigl{(} \\partial_{\\mu}{\\bf n}\\!\\times\\!\\partial_{\ u}{\\bf n}\\bigr{)}^{2} \\tag{13}\\] \\[-\\frac{5}{16\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\int_{x}(\\partial_{\\mu}{ \\bf n})^{4}.\\] The determinant \\(\\Delta_{\\rm S}\\) does not contribute, because it is independent of \\({\\bf n}\\) in one-loop approximation. Inserting these results into Eq. 
(10) leads us to the desired Wilsonian effective action to one-loop order for the \({\bf n}\) field in a derivative expansion:

\[\hat{\Gamma}_{k}[{\bf n}] = \frac{13}{8}\frac{\Lambda^{2}}{16\pi^{2}}\big{(}1-{\rm e}^{2t}\big{)}\int_{x}(\partial_{\mu}{\bf n})^{2}+\frac{1}{4}\left(\frac{1}{g^{2}}+\frac{7}{3}\frac{1}{16\pi^{2}}\,t\right)\int_{x}(\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}-\frac{1}{2}\left(\frac{1}{\alpha g^{2}}+\frac{5}{4}\frac{1}{16\pi^{2}}\,t\right)\int_{x}(\partial_{\mu}{\bf n})^{4}, \tag{14}\]

where \(t=\ln k/\Lambda\in]-\infty,0]\) denotes the "renormalization group time". We would like to stress once more that \(\hat{\Gamma}_{k}[{\bf n}]\) does not contain the result of fluctuations of the \({\bf n}\) field itself; in other words, it represents (an approximation to) the "tree-level action" for the complete quantum theory of the \({\bf n}\) field. Indeed, the generation of a "kinetic" term \(\sim(\partial_{\mu}{\bf n})^{2}\) growing under the flow of increasing \(k\) as conjectured by Faddeev and Niemi is observed. Moreover, it has the correct sign (\(+\)), implying that an "effective field theory" interpretation seems possible. The second term, which is proportional to the classical action, reveals information about the renormalization of the Yang-Mills coupling:

\[\frac{1}{\hat{g}_{\rm R}^{2}}:=\frac{1}{g^{2}}+\frac{7}{3}\frac{1}{16\pi^{2}}\,t\quad\Rightarrow\quad\hat{\beta}_{g^{2}}:=\partial_{t}\hat{g}_{\rm R}^{2}=-\frac{7}{3}\frac{1}{16\pi^{2}}\,\hat{g}_{\rm R}^{4}. \tag{15}\]

The resulting \(\hat{\beta}\) function is a factor of \(44/7\) smaller than the \(\beta\) function of full Yang-Mills theory for SU(2). This is an expected result, since we did not integrate over all degrees of freedom of the gauge field; the \({\bf n}\) integration still remains. Nevertheless, the \(\hat{\beta}\) function implies asymptotic freedom, which indicates that the decomposition of the Yang-Mills field is not a pathologically absurd choice. It is interesting to observe that the \(C\) and \({\bf W}\) determinants contribute positively to \(\hat{\beta}_{g^{2}}\), whereas the Faddeev-Popov and the \(\mathbf{\phi}\) determinant contribute negatively; the latter, which arises from the \({\bf W}\) fixing, even dominates: \(-7/3=[6_{C}-4_{\rm FP}+40_{\bf W}-49_{\mathbf{\phi}}]/3\). The third term of Eq. (14) contains information about the renormalization of the gauge parameter \(\alpha\) under the flow:

\[\frac{1}{\hat{\alpha}_{\rm R}\hat{g}_{\rm R}^{2}}=\frac{1}{\alpha g^{2}}+\frac{5}{4}\frac{1}{16\pi^{2}}\,t\quad\Rightarrow\quad\partial_{t}\hat{\alpha}_{\rm R}=\frac{7}{3}\hat{\alpha}_{\rm R}\left(1-\frac{15}{28}\hat{\alpha}_{\rm R}\right)\frac{\hat{g}_{\rm R}^{2}}{16\pi^{2}}. \tag{16}\]

The RHS of this renormalization group equation is positive for \(\alpha<28/15\simeq 1.87\); this implies that \(\alpha\) runs to zero under the flow as long as \(\alpha_{\Lambda}<28/15\). Therefore, our starting point \(\alpha_{\Lambda}=1\) is a consistent choice that ensures a running into the desired Landau gauge \(\alpha\to 0\). Before we discuss the physical implications of our result Eq. (14), let us study the effective action including the \({\bf n}\) field fluctuations.
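Because Eqs. (15) and (16) form a closed autonomous system in \(t\), their content is quickly visualized numerically; the cutoff coupling \(\hat{g}^{2}_{\Lambda}=1\) below is an illustrative assumption, while \(\alpha_{\Lambda}=1\) is the choice made above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def flow(t, y):
    """One-loop flow of Eqs. (15) and (16); y = (g2, alpha), t = ln(k/Lambda) <= 0."""
    g2, alpha = y
    dg2 = -(7.0 / 3.0) * g2**2 / (16.0 * np.pi**2)
    dalpha = (7.0 / 3.0) * alpha * (1.0 - 15.0 * alpha / 28.0) * g2 / (16.0 * np.pi**2)
    return [dg2, dalpha]

# Start at the cutoff (t = 0) and integrate toward the infrared (t < 0)
sol = solve_ivp(flow, [0.0, -50.0], [1.0, 1.0], dense_output=True, rtol=1e-8)
for t in (0.0, -10.0, -30.0, -50.0):
    g2, alpha = sol.sol(t)
    print(f"t = {t:6.1f}: g_R^2 = {g2:7.3f}, alpha_R = {alpha:8.5f}")
```

The output shows \(\hat{\alpha}_{\rm R}\) draining toward zero in the infrared while \(\hat{g}^{2}_{\rm R}\) grows toward the Landau pole at \(t=-3\cdot 16\pi^{2}/(7\hat{g}^{2}_{\Lambda})\).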
In principle, this action should be obtainable from the present result by inserting \(\hat{\Gamma}_{k}[{\bf n}]\) into a functional integral over \({\bf n}\). However, we evaluated \(\hat{\Gamma}_{k}[{\bf n}]\) in a derivative expansion, neglecting high-momentum fluctuations of the \({\bf n}\) field. But when integrating over \({\bf n}\) fluctuations, especially these high-momentum modes are important for the renormalization of the couplings. Hence, their correct running cannot be calculated via such an indirect approach. The direct way is presented in the next section.

## 4 One-loop effective action including \({\bf n}\) fluctuations

In the following, we propose a different way to integrate out the "hard" modes with high momenta \(p\), \(k<p<\Lambda\). This time, we also include the hard fluctuations of the \({\bf n}\) field and decompose the complete Yang-Mills field into soft and hard modes,

\[{\bf A}_{\mu}={\bf A}_{\mu}^{\rm S}+{\bf A}_{\mu}^{\rm H},\quad{\bf A}_{\mu}^{\rm S,H}={\bf A}_{\mu}^{\rm S,H}(C_{\mu}^{\rm S,H},{\bf n}^{\rm S,H},{\bf W}_{\mu}^{\rm S,H}). \tag{17}\]

Since the hard modes \({\bf A}_{\mu}^{\rm H}\) shall be integrated out completely, the explicit use of the decomposition into \(C_{\mu}^{\rm H}\), \({\bf n}^{\rm H}\) and \({\bf W}_{\mu}^{\rm H}\) would be a very inconvenient choice of overabundant integration variables; therefore, the decomposition is only adopted for the soft modes \({\bf A}_{\mu}^{\rm S}\). In the spirit of the Faddeev-Niemi conjecture, we assume that these soft modes are dominated by the \({\bf n}\) field:

\[{\bf A}_{\mu}^{\rm S}=\partial_{\mu}{\bf n}^{\rm S}\times{\bf n}^{\rm S}. \tag{18}\]

Integrating out the hard modes \({\bf A}^{\rm H}\) results in two determinants in one-loop approximation,

\[\Gamma_{k}[{\bf A}^{\rm S}]=\frac{1}{2}\ln\det(\Delta^{\rm YM}[{\bf A}^{\rm S}])^{-1}-\ln\Delta_{\rm FP}[{\bf A}^{\rm S}], \tag{19}\]

corresponding to the hard gluon and ghost loops; again we dropped the nonlocal terms (cf. App. B). The ghost contribution in the form of the Faddeev-Popov determinant is, of course, identical to the one obtained in the first line of Eq. (13), since the gauge fixing is performed in the same way as before. The gluonic determinant involves the operator

\[(\Delta^{\rm YM}[{\bf A}^{\rm S}])^{-1}_{\mu\nu}=-\left[D^{2}\,\mathbb{1}_{\rm L}-2{\rm i}F-DD+\frac{1}{\alpha}\partial\partial\right]_{\mu\nu}\bigg{|}_{{\bf A}={\bf A}^{\rm S}}, \tag{20}\]

where \(D_{\mu}\) denotes the covariant derivative and \(F_{\mu\nu}\) the field strength tensor. The explicit representation of Eq. (20) in terms of the \({\bf n}\) field is again given in App. A, Eqs. (A.6) and (A.7). The determinants in Eq. (19) can be calculated in a derivative expansion in the same way as described in the preceding section. Since the computation of the term \(\sim(\partial{\bf n})^{2}\) is already very laborious, we do not calculate the marginal terms \(\sim(\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}\) etc. directly, but take over the known one-loop results for the running coupling and the gauge parameter from [11].
The final result for the Wilsonian one-loop effective action for the soft modes of the \({\bf n}\) field reads

\[\Gamma_{k}[{\bf n}] = \frac{\Lambda^{2}}{16\pi^{2}}\big{(}1-{\rm e}^{2t}\big{)}\int_{x}(\partial_{\mu}{\bf n})^{2}+\frac{1}{4}\left(\frac{1}{g^{2}}+\frac{44}{3}\frac{1}{16\pi^{2}}\,t\right)\int_{x}(\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}-\frac{1}{2}\left(\frac{1}{\alpha g^{2}}+\frac{14}{3}\frac{1}{16\pi^{2}}\,t\right)\int_{x}(\partial_{\mu}{\bf n})^{4}+\frac{1}{2}\left(\frac{1}{\alpha g^{2}}+\frac{14}{3}\frac{1}{16\pi^{2}}\,t\right)\int_{x}(\partial^{2}{\bf n}\cdot\partial^{2}{\bf n}), \tag{21}\]

where we dropped the superscript S. Furthermore, we included for later use a higher-derivative term \(\sim\partial^{2}{\bf n}\cdot\partial^{2}{\bf n}\), which is also marginal in the renormalization group sense and accompanied by the \(1/(\alpha g^{2})\) coefficient in the classical action. Again, the generation of the "kinetic" term \(\sim(\partial{\bf n})^{2}\) with a mass scale is observed; it is smaller by a factor of \(8/13\) than in the preceding section. This means that the hard \({\bf n}\) field fluctuations that have been taken into account in Eq. (21) reduce the new mass scale slightly; on the other hand, they increase the running of the Yang-Mills coupling by contributing the missing piece to the \(\beta\) function, which now obtains the correct SU(2) value, \(\beta_{g^{2}}=-\frac{44}{3}\frac{1}{16\pi^{2}}g_{\rm R}^{4}\). The running of the gauge parameter \(\alpha\) is also increased, but no qualitative changes compared to Eq. (14) can be observed.

## 5 Discussion and Conclusions

The main results of our paper are contained in Eqs. (14) and (21), where the Wilsonian one-loop effective actions \(\hat{\Gamma}_{k}\) and \(\Gamma_{k}\) for the \({\bf n}\) field have been given without and including hard \({\bf n}\) field fluctuations, respectively. We were able to demonstrate that a "kinetic" term with a new mass scale for the \({\bf n}\) field is indeed generated perturbatively, as was conjectured by Faddeev and Niemi. This term is relevant in the renormalization group sense and perturbatively exhibits a quadratic dependence on the UV cutoff \(\Lambda\). Furthermore, we studied the renormalization group flow of the marginal couplings of the \({\bf n}\) field self-interactions given by the Yang-Mills coupling and the gauge parameter. These terms are responsible for the stabilization of possible topological excitations of the \({\bf n}\) field, as suggested by the Skyrme-Faddeev model. In total, the difference between \(\hat{\Gamma}_{k}\) and \(\Gamma_{k}\) is only of quantitative nature: the inclusion of hard \({\bf n}\) field fluctuations increases the running of the marginal couplings and reduces the new mass scale; qualitative features such as stability of possible solitons remain untouched. In fact, the question of stability turns out to be delicate: truncating our results for \(\hat{\Gamma}_{k}\) or \(\Gamma_{k}\) in Eqs. (14) or (21) at the level of the original Faddeev-Niemi proposal Eq. (1) (the first lines of Eqs. (14) and (21), respectively), we find an action that allows for stable knotlike solitons, since the coefficients of both terms are positive (as long as we stay away from the Landau pole, which we consider as unphysical).
Taking additionally the \\((\\partial{\\bf n})^{4}\\) term of \\(\\hat{\\Gamma}_{k}\\) or \\(\\Gamma_{k}\\) into account, which is also marginal and does not contain second-order derivatives on \\({\\bf n}\\), stability is lost, since the coupling coefficient is negative in Eqs. (14) and (21); for stable solitons, a strictly positive coefficient would be required for this truncation, as was shown in [12]. Finally dropping the demand for first-order derivatives, we can include one further marginal term \\(\\sim\\partial^{2}{\\bf n}\\cdot\\partial^{2}{\\bf n}\\) as given in Eq. (21) for \\(\\Gamma_{k}\\). With the aid of the identity \\[\\int_{x}(\\partial^{2}{\\bf n}\\times{\\bf n})^{2}=\\int_{x}[\\partial^{2}{\\bf n} \\cdot\\partial^{2}{\\bf n}-(\\partial_{\\mu}{\\bf n})^{4}], \\tag{22}\\] we find that the second line of Eq. (21) represents a strictly positive contribution to the action which again stabilizes possible solitons.5 Footnote 5: We expect a similar behavior for the action \\(\\hat{\\Gamma}_{k}\\) in Eq. (14), although we have not calculated the coefficient of the \\(\\partial^{2}{\\bf n}\\cdot\\partial^{2}{\\bf n}\\) term explicitly. Of course, this game could be continued by including further destabilizing and stabilizing higher-order terms again and again, but such terms are irrelevant in a renormalization group sense; that means their corresponding couplings are accompanied by inverse powers of the UV cutoff \\(\\Lambda\\) and are thereby expected to vanish in the limit of large cutoff. To summarize, our perturbative renormalization group analysis suggests enlarging the Faddeev-Niemi proposal for the effective low-energy action of Yang-Mills theory by taking all marginal operators of a derivative expansion into account. The original proposal of Eq. (1) was inspired by a desired Hamiltonian interpretation of the action that demands the absence of third- or higher-order time derivatives. But, as demonstrated, the covariant renormalization group does not care about a Hamiltonian interpretation of the final result. In some sense, the desired \"simplicity\" of the final result is spoiled by the presence of higher-derivative terms; moreover, it remains questionable as to whether the importance of the \\(\\partial^{2}{\\bf n}\\cdot\\partial^{2}{\\bf n}\\) term is still consistent with the derivative expansion of the action. Unfortunately, this cannot be checked within the present approach. It should be stressed once again that the perturbative investigation performed here hardly suffices to confirm results about the infrared domain of Yang-Mills theories. On the contrary, it is only a valid approximation in the vicinity of the Gaussian UV fixed point of the theory. Nevertheless, our study might lend some intuition to possible nonperturbative scenarios: for example, let us assume that the Landau gauge \\(\\alpha=0\\) indeed is an infrared fixed point in covariant gauges. Then the stabilizing term \\(\\sim(\\partial^{2}{\\bf n}\\times{\\bf n})^{2}\\) is enhanced in the infrared, provided that the increase of the running coupling \\(g\\) obeys \\(\\alpha g^{2}\\to 0\\) for \\(k\\to 0\\); this would be realized, e.g., if \\(g\\) approached an infrared fixed point. Such a scenario thus supports the idea of topological knotlike solitons as important infrared degrees of freedom of Yang-Mills theories. Perhaps the main drawback of our study lies in the fact that the new mass scale is not renormalization-group invariant; for example, we can read off from Eq. 
(21) that

\[m_{k}^{2}=\frac{1}{16\pi^{2}}\,\Lambda^{2}(1-{\rm e}^{2t}),\quad t\equiv\ln\frac{k}{\Lambda}\leq 0. \tag{23}\]

The new mass scale \(m_{k}\) is necessarily proportional to \(\Lambda\), because there simply is no other scale in our system. But contrary to the gauge coupling or the gauge parameter, which can be made independent of \(\Lambda\) by adjusting the bare parameters, the \(\Lambda\) dependence of \(m_{k}\) persists, since there is no bare mass parameter to adjust. One may speculate that this problem is solved by "renormalization group improvement" of the kind

\[\Lambda^{2}\to\Lambda^{2}\,{\rm e}^{-\frac{3\cdot 16\pi^{2}}{22g^{2}(\Lambda)}}, \tag{24}\]

which upon insertion into Eq. (23) leads to a \(\Lambda\)-independent mass scale for \(k\to 0\). Obviously, our perturbative calculation can never produce the RHS of Eq. (24), but a nonperturbative study of the renormalization group flow should result in such a structure (in a different context, such a mechanism has been observed in [13]). Employing the measured values of the strong coupling constant at various renormalization points, we can determine the order of magnitude of the new mass scale: \(m\equiv m_{k\to 0}={\cal O}(1)\,{\rm MeV}\), e.g., \(m\simeq 5.74\,{\rm MeV}\) for \(\alpha_{\rm s}(M_{\rm Z})=0.12\) or \(m\simeq 0.68\,{\rm MeV}\) for \(\alpha_{\rm s}(10\,{\rm GeV})=0.18\) (the difference between these numbers arises from the fact that the initial values for the coupling are not related by a pure one-loop running). Of physical interest are the masses of the solitonic excitations in this effective theory. Unfortunately, there are no numerical results available for theories with higher-derivative order, so that we have to resort to results for an action identical to the first line of Eq. (21). For this model, the masses of the lowest lying states are approximately given by \(M\simeq{\cal O}(10^{3})\sqrt{q}\,m\), where \(q\) denotes the value of the coefficient in front of the \((\partial_{\mu}{\bf n}\times\partial_{\nu}{\bf n})^{2}\) term [12, 14]. For couplings of order 1, we end up with soliton masses of the order of \(M\sim{\cal O}(1)\,{\rm GeV}\); this is in accordance with lattice results for glue ball masses: e.g., \(M_{\rm GB}\simeq 1.5\,{\rm GeV}\) for the lowest lying state in SU(2) [15]. Of course, this rough and speculative estimate should not be viewed as a "serious prediction" of our work. With all these reservations in mind, the Faddeev-Niemi conjecture about possible low-energy degrees of freedom of Yang-Mills theories provides an interesting working hypothesis which deserves further exploration.

## Acknowledgment

The author wishes to thank W. Dittrich for helpful conversations and for carefully reading the manuscript. Furthermore, the author profited from discussions with T. Tok, K. Langfeld and A. Schafke. This work was supported in part by the Deutsche Forschungsgemeinschaft under DFG GI 328/1-1.

## Appendix A Differential operators, tensors, currents, etc.

This appendix represents a collection of differential operators and other tensorial quantities which are required in the main text. The Faddeev-Popov determinant \(\Delta_{\rm FP}\) in Eq.
(7) and (10) for covariant gauges involves the operator (in one-loop approximation)

\[-\partial_{\mu}D_{\mu}({\bf A})\big{|}_{C=0={\bf W}}=-\partial^{2}\mathbb{1}_{\rm c}+(\partial^{2}{\bf n}\otimes{\bf n}-{\bf n}\otimes\partial^{2}{\bf n})+(\partial_{\mu}{\bf n}\otimes{\bf n}-{\bf n}\otimes\partial_{\mu}{\bf n})\partial_{\mu},\] (A.1)

so that \(\Delta_{\rm FP}=\det\bigl{(}-\partial_{\mu}D_{\mu}({\bf A})\big{|}_{C=0={\bf W}}\bigr{)}\). The objects occurring in the exponent of Eq. (7) are defined as follows:

\[M^{C}_{\mu\nu} :=-\partial^{2}\delta_{\mu\nu}+\partial_{\mu}\partial_{\nu}-\frac{1}{\alpha}\partial_{\mu}\partial_{\nu}+\frac{1}{\alpha}\partial_{\mu}{\bf n}\cdot\partial_{\nu}{\bf n}\]
\[M^{\bf W}_{\mu\nu} :=-\partial^{2}\delta_{\mu\nu}\mathbb{1}_{\rm c}+\partial_{\mu}\partial_{\nu}\mathbb{1}_{\rm c}-\frac{1}{\alpha}\partial_{\mu}\partial_{\nu}\mathbb{1}_{\rm c}-\partial_{\mu}{\bf n}\otimes\partial_{\nu}{\bf n}+\partial_{\nu}{\bf n}\otimes\partial_{\mu}{\bf n}\]
\[{\bf Q}^{C}_{\mu\nu} :=\frac{1}{\alpha}\bigl{(}\partial_{\mu}{\bf n}\partial_{\nu}+\partial_{\nu}{\bf n}\partial_{\mu}+\partial_{\mu}\partial_{\nu}{\bf n}\bigr{)}\]
\[K^{C}_{\mu} :=\partial_{\nu}({\bf n}\cdot\partial_{\nu}{\bf n}\times\partial_{\mu}{\bf n})+\frac{1}{\alpha}\partial_{\mu}{\bf n}\cdot\partial^{2}{\bf n}\times{\bf n}\]
\[{\bf K}^{\bf W}_{\mu} :=\frac{1}{\alpha}\partial_{\mu}({\bf n}\times\partial^{2}{\bf n}).\] (A.2)

The determinants in Eq. (10) employ several composites of these operators. Since we first perform the \(C\) integration, the resulting determinant involves only \(M^{C}\), whereas the \({\bf W}\) determinant also receives contributions from the mixing term \({\bf Q}^{C}\),

\[\overline{M}^{\bf W}_{\mu\nu}=M^{\bf W}_{\mu\nu}+\widetilde{\bf Q}^{C}_{\mu\kappa}(M^{C})^{-1}_{\kappa\lambda}{\bf Q}^{C}_{\lambda\nu}.\] (A.3)

Here, \(\widetilde{\bf Q}\) is defined via partial integration,

\[\int({\bf Q}^{C}_{\mu\nu}{\bf W}_{\nu})f_{\mu}\stackrel{{\rm i.b.p}}{{=}}\int{\bf W}_{\mu}\widetilde{\bf Q}^{C}_{\mu\nu}f_{\nu},\] (A.4)

and \(f_{\nu}\) denotes an arbitrary test function. The last determinant in Eq. (10) arises from the \(\mathbf{\phi}\) integration and receives contributions from the relevant parts of the exponent of Eq. (9), which we denote by

\[Q_{\mu}^{\mathbf{\phi}}:={\rm i}\big{(}{-\partial_{\mu}\mathbb{1}_{\rm c}+\partial_{\mu}{\bf n}\otimes{\bf n}}\big{)},\] (A.5)

so that \(\delta(\mathbf{\chi})\to\int{\cal D}\mathbf{\phi}\exp(-\int{\bf W}_{\mu}\cdot Q_{\mu}^{\mathbf{\phi}}\mathbf{\phi})\) to one-loop order. Employing a notation similar to Eq. (A.4), the differential operator accompanying the term \(\sim\mathbf{\phi}\mathbf{\phi}\) in the exponent finally reads \(\widetilde{Q}_{\mu}^{\mathbf{\phi}}(\overline{M}^{\bf W})^{-1}_{\mu\nu}Q_{\nu}^{\mathbf{\phi}}\). Integrating the \(\mathbf{\phi}\) field along the imaginary axis leads to the last determinant in Eq. (10). In Sect. 4, we employ the inverse gluon propagator \((\Delta^{\rm YM}[{\bf A}^{\rm S}])^{-1}\) coupled to all orders to the soft \({\bf n}\) field fluctuations.
For an explicit representation, we need the covariant derivative,

\[D_{\mu}[{\bf n}]=\partial_{\mu}\mathbb{1}_{\rm c}+{\bf n}\otimes\partial_{\mu}{\bf n}-\partial_{\mu}{\bf n}\otimes{\bf n},\] (A.6)

where we have inserted the soft gauge potential Eq. (18) into the covariant derivative. The inverse gluon propagator Eq. (20) then reads

\[(\Delta^{\rm YM}[{\bf n}])^{-1}_{\mu\nu} = -\partial^{2}\mathbb{1}_{\rm c}\delta_{\mu\nu}-2({\bf n}\otimes\partial_{\lambda}{\bf n}-\partial_{\lambda}{\bf n}\otimes{\bf n})\partial_{\lambda}\delta_{\mu\nu}+({\bf n}\otimes\partial_{\nu}{\bf n}-\partial_{\nu}{\bf n}\otimes{\bf n})\partial_{\mu}+({\bf n}\otimes\partial_{\mu}{\bf n}-\partial_{\mu}{\bf n}\otimes{\bf n})\partial_{\nu}-({\bf n}\otimes\partial^{2}{\bf n}-\partial^{2}{\bf n}\otimes{\bf n})\delta_{\mu\nu}+(\partial_{\lambda}{\bf n})^{2}{\bf n}\otimes{\bf n}\,\delta_{\mu\nu}+\partial_{\lambda}{\bf n}\otimes\partial_{\lambda}{\bf n}\,\delta_{\mu\nu}-(2\partial_{\mu}{\bf n}\otimes\partial_{\nu}{\bf n}-\partial_{\nu}{\bf n}\otimes\partial_{\mu}{\bf n})+({\bf n}\otimes\partial_{\mu\nu}{\bf n}-\partial_{\mu\nu}{\bf n}\otimes{\bf n})-(\partial_{\mu}{\bf n}\cdot\partial_{\nu}{\bf n})\,{\bf n}\otimes{\bf n}.\] (A.7)

## Appendix B Nonlocal terms

During the Gaussian integration over the \(C\), \(\mathbf{\phi}\) and \({\bf W}\) fields in Sect. 3, several nonlocal terms arise from the completion of the square in the exponent. Here, we shall give reasons why they can be neglected. Let us exemplarily consider the simplest nonlocal contribution arising from the \(C\) integration:

\[K^{C}(M^{C})^{-1}K^{C}=({\bf n}\cdot\partial_{\lambda}{\bf n}\times\partial_{\lambda\mu}{\bf n})\left(\frac{1}{-\partial^{2}+\partial{\bf n}\cdot\partial{\bf n}}\right)_{\mu\nu}({\bf n}\cdot\partial_{\kappa}{\bf n}\times\partial_{\kappa\nu}{\bf n}).\] (B.8)

Within the calculation of the determinants, we expanded the inverse operator assuming that \(\partial{\bf n}\cdot\partial{\bf n}\ll-\partial^{2}\). This was justified, since the derivative operator acts on the test function space with momenta \(p\) between \(k\) and \(\Lambda\), which are large compared to the conjectured slow variation of the \({\bf n}\) field. In the present case, the situation is different, because the derivative term \(-\partial^{2}\) acts only on the \({\bf n}\) field and its derivatives to the right (there is no test function to act on). In other words, the nonlocal terms are only numbers, not operators. The derivatives can thus be approximated by the (inverse) scale of variation of the \({\bf n}\) field or its derivatives, which is much smaller than \(k\) or \(\Lambda\). This implies that the nonlocal terms do not depend on \(k\) or \(\Lambda\), so that they cannot contribute to the flow of the couplings. For example, a reasonable lowest-order approximation of the RHS of Eq. (B.8) is given by its local limit,

\[K^{C}(M^{C})^{-1}K^{C}=({\bf n}\cdot\partial_{\lambda}{\bf n}\times\partial_{\lambda\mu}{\bf n})\left(\frac{1}{\partial{\bf n}\cdot\partial{\bf n}}\right)_{\mu\nu}({\bf n}\cdot\partial_{\kappa}{\bf n}\times\partial_{\kappa\nu}{\bf n})+\ldots,\] (B.9)

where it is obvious that these terms do not contribute to the desired Wilsonian effective action. The same line of argument holds for all nonlocal terms appearing in Sects.
## Appendix C Momentum integrals Several standard integrals appear in the integration over the momentum shell \\([k,\\Lambda]\\) in Sect. 3. One basic formula is given by \\[\\int\\limits_{[k,\\Lambda]}\\frac{d^{4}p}{(2\\pi)^{4}}\\,\\frac{p_{\\lambda}p_{\\kappa}p_{\\mu}p_{\\nu}}{p^{8}}=\\frac{1}{3}\\frac{1}{64\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\,\\big{(}\\delta_{\\lambda\\kappa}\\delta_{\\mu\\nu}+\\delta_{\\lambda\\mu}\\delta_{\\kappa\\nu}+\\delta_{\\lambda\\nu}\\delta_{\\kappa\\mu}\\big{)}.\\] (C.10) From this formula, we can also deduce upon index contraction that \\[\\int\\limits_{[k,\\Lambda]}\\frac{d^{4}p}{(2\\pi)^{4}}\\,\\frac{p_{\\mu}p_{\\nu}}{p^{6}}=\\frac{1}{32\\pi^{2}}\\,\\ln\\frac{\\Lambda}{k}\\,\\delta_{\\mu\\nu},\\hskip 28.452756pt\\int\\limits_{[k,\\Lambda]}\\frac{d^{4}p}{(2\\pi)^{4}}\\,\\frac{1}{p^{4}}=\\frac{1}{8\\pi^{2}}\\,\\ln\\frac{\\Lambda}{k}.\\] (C.11) The last integral is, of course, standard and can be used to prove Eq. (C.10) in addition to symmetry arguments. The same philosophy applies to the second type of integrals: \\[\\int\\limits_{[k,\\Lambda]}\\frac{d^{4}p}{(2\\pi)^{4}}\\,\\frac{p_{\\mu}p_{\\nu}}{p^{4}}=\\frac{1}{64\\pi^{2}}\\,(\\Lambda^{2}-k^{2})\\,\\delta_{\\mu\\nu},\\hskip 28.452756pt\\int\\limits_{[k,\\Lambda]}\\frac{d^{4}p}{(2\\pi)^{4}}\\,\\frac{1}{p^{2}}=\\frac{1}{16\\pi^{2}}\\,(\\Lambda^{2}-k^{2}).\\] (C.12)
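As a quick consistency check (our verification of the formulas above, not additional material from the paper): contracting Eq. (C.10) over \\(\\lambda=\\kappa\\) gives \\[\\frac{1}{3}\\frac{1}{64\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\,(4+1+1)\\,\\delta_{\\mu\\nu}=\\frac{1}{32\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\,\\delta_{\\mu\\nu},\\] the first integral of Eq. (C.11); contracting once more over \\(\\mu=\\nu\\) gives \\(\\frac{4}{32\\pi^{2}}\\ln\\frac{\\Lambda}{k}=\\frac{1}{8\\pi^{2}}\\ln\\frac{\\Lambda}{k}\\), the second. Both steps are instances of the four-dimensional angular average \\(\\langle p_{\\mu}p_{\\nu}\\rangle=\\frac{1}{4}\\,p^{2}\\delta_{\\mu\\nu}\\).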
The Cho-Faddeev-Niemi-Shabanov decomposition of the SU(2) Yang-Mills field is employed for the calculation of the corresponding Wilsonian effective action to one-loop order with covariant gauge fixing. The generation of a mass scale is observed, and the flow of the marginal couplings is studied. Our results indicate that higher-derivative terms of the color-unit-vector \\(\\mathbf{n}\\) field are necessary for the description of topologically stable knotlike solitons which have been conjectured to be the large-distance degrees of freedom.
# The Role of Clouds in Brown Dwarf and Extrasolar Giant Planet Atmospheres

Mark S. Marley and Andrew S. Ackerman

## 1 Introduction

Even before the first discovery of brown dwarfs and extrasolar giant planets (EGPs) it had been apparent that a detailed appreciation of cloud physics would be required to understand the atmospheres of these objects (e.g. Lunine et al. 1989). Depending on the atmospheric effective temperature, Fe, Mg\\({}_{2}\\)SiO\\({}_{4}\\), MgSiO\\({}_{3}\\), H\\({}_{2}\\)O, and NH\\({}_{3}\\) among others may condense in substellar atmospheres. Since every atmosphere in the solar system is influenced by clouds, dust, or hazes, the need to follow the fate of condensates in brown dwarf and EGP atmospheres is self-evident. What has become clearer over the past five years is that details such as the vertical structure and particle sizes in clouds play a decisive role in controlling the thermal structure and emergent spectra of these atmospheres. Indeed the available data are already sufficient to help us choose among competing models. In this contribution we will briefly summarize some of the roles clouds play in a few solar system atmospheres to illustrate what might be expected of brown dwarf and extrasolar giant planet atmospheres. Then we will summarize a new cloud model developed to study these effects, present some model results, and compare them to data. Since brown dwarfs have similar compositions and effective temperatures to EGPs and a rich dataset already exists, we focus on the lessons learned from the L- and T-dwarfs. We then briefly review the importance of clouds to EGP atmospheres and future observations.

## 2 Clouds in the Solar System

Clouds dramatically alter the appearance, thermal structure, and even evolution of planets. Venus glistens white in the morning and evening skies because sunlight reflects off of its bright cloud tops. If there were no condensates in Venus' atmosphere the planet would take on a bluish hue from Rayleigh scattered sunlight. Mars' atmosphere is warmer than it would otherwise be thanks to absorption of incident solar radiation by atmospheric dust (Pollack et al. 1979). The effectiveness of the CO\\({}_{2}\\) greenhouse during Mars's putative warm and wet early history is tied to poorly understood details of its cloud physics and radiative transfer (Mischna et al. 2000). Indeed the future climate of Earth in a fossil-fuel-fired greenhouse may hinge on the role water clouds will play in altering Earth's albedo and scattering or absorbing thermal radiation. The appearance of the Jovian planets is controlled by the extensive cloud decks covering their disks. On Jupiter and Saturn thick NH\\({}_{3}\\) clouds, contaminated by an unknown additional absorber, reflect about 35% of incident radiation back to space. CH\\({}_{4}\\) and H\\({}_{2}\\)S clouds play a similar role at Uranus and Neptune. The vertical structure of the jovian cloud layers was deduced by variation of their reflected spectra inside and outside of molecular absorption bands. Figure 1 illustrates this process.

Figure 1: Near-consecutive HST images of Uranus taken through different filters. The filter employed for the left hand image probes a broad spectral range from 0.85 to 1 \\(\\mu\\)m while the right hand image is taken through a narrow filter sensitive to the 0.89 \\(\\mu\\)m CH\\({}_{4}\\) absorption band. The relative visibility of various cloud features between the two images is a measure of the cloud height as the incident photon penetration depth is modulated by methane absorption. Images courtesy H. Hammel and K. Rages.

In the left hand image incident sunlight penetrates relatively deeply into the atmosphere and is scattered principally by a cloud deck over the south pole and a bright cloud near the northern mid-latitude limb. The relative heights of these two features cannot be discerned from this single image. The right hand image, however, was taken in the strong 0.89-\\(\\mu\\)m methane absorption band. Here the south polar cloud is invisible since incident sunlight is absorbed by CH\\({}_{4}\\) gas above the cloud before it can scatter. We conclude that the bright northern cloud lies higher in the atmosphere since it is still visible in this image. The application of this technique to spectra and images of the giant planets has yielded virtually all the information we have about the vertical structure of these atmospheres (e.g. West, Strobel, & Tomasko 1986; Baines & Hammel 1994; Baines et al. 1995). A similar reasoning process can be applied to brown dwarf and EGP atmospheres. The large body of work on jovian clouds cannot be easily generalized, but two robust results are apparent. First, sedimentation of cloud droplets is important. Cloud particles condense from the atmosphere, coagulate, and fall. The fall velocity depends on the size of the drops and the upward velocity induced by convection or other motions in the atmosphere. They do not stay put. A diagnostic often retrieved from imaging or spectroscopic observations of clouds is the ratio of the cloud particle scale height to that of the gas. If condensates were distributed uniformly vertically in the atmosphere this ratio would be 1. Instead numerous investigations have found a ratio for Jupiter's ammonia clouds of about 0.3 (Carlson, Lacis, & Rossow 1994). The clouds are thus relatively thin in vertical extent. The importance of sedimentation is borne out even for unseen Fe clouds, for example, by Jupiter's atmospheric chemistry (Fegley & Lodders 1994). A second important result is that cloud particles are large, a result of coagulation processes within the atmosphere. Sizes are difficult to infer remotely and the sizes to which a given observation is sensitive depend upon the wavelength observed. Nevertheless it is clear that Jupiter's ammonia clouds include particles with radii exceeding 1 to 10 \\(\\mu\\)m, much larger than might be expected simply by direct condensation from vapor in the presence of abundant condensation nuclei (Carlson et al. 1994; Brooke et al. 1996). Similar results are found for ammonia clouds on Saturn (Tomasko et al. 1984) and methane clouds in Uranus and Neptune (Baines et al. 1995). These two lessons from the solar system's jovian atmospheres - clouds have finite vertical extents governed by sedimentation and large condensate sizes - guide us as we consider clouds in brown dwarf and extrasolar giant planet atmospheres.

## 3 Evidence of Clouds in Brown Dwarf Atmospheres

The first models of the prototypical T-dwarf Gl 229 B established that grains play a minor role, if any, in controlling the spectrum of the object. The early Gl 229 B models of Marley et al. (1996), Allard et al. (1996) and Tsuji et al. (1996) all found best fits to the observed spectrum by neglecting grain opacity. This provided strong evidence that any cloud layer was confined below the visible atmosphere.
All the models, however, shared the same shortcoming of predicting infrared water bands deeper than observed. Another difficulty with the early models is that they either predicted too much flux shortwards of 1 \\(\\mu\\)m (Marley et al. 1996) or used unrealistic molecular opacities (Allard et al. 1996) to lower the optical flux. Griffith, Yelle, & Marley (1999) and Tsuji, Ohnaka, & Aoki (1999) suggested variations of particulate opacity to lower the flux, but ultimately Burrows, Marley, & Sharp (2000) argued that broadened alkali metal bands were responsible for the diminution in flux, a prediction verified by Liebert et al. (2000). The first confirmation that dust was present in the atmospheres of at least some brown dwarfs came with the discovery of the warmer L-dwarfs. These objects, unlike the methane-dominated T-dwarfs, have red colors in \\(J-K\\) and spectra that have been best fit with dusty atmosphere models (Jones & Tsuji 1997), although a complete analysis does not yet exist. The difficulty arose in explaining how the dusty, red L-dwarfs evolved into the clear, blue T-dwarfs (Figure 2). Models in which dust does not settle into discrete cloud layers (Chabrier et al. 2000) predict that cooling brown dwarfs would become redder in \\(J-K\\) with falling effective temperature as more and more dust dominates the atmosphere. Since the atmosphere models employed in this work ignore the lessons learned from our jovian planets (they employ sub-micron particle sizes and do not allow the dust to settle) it is not surprising that they do not fit the data.

## 4 A New Cloud Model

A number of models have been developed to describe the cloud formation processes in giant planet and brown dwarf atmospheres. Ackerman & Marley (2001) describe these in some detail. In general these models suffer from a number of drawbacks which limit their utility for brown dwarf and EGP modeling. Some rely upon free parameters which are almost impossible to predict while others do not predict quantities relevant to radiative transfer in the atmosphere. For example, the atmospheric supersaturation cannot be specified without a detailed knowledge of the number of condensation nuclei available. Ackerman & Marley developed a new eddy sedimentation model for cloud formation in substellar atmospheres that attempts to predict cloud particle sizes and vertical extents. Ackerman & Marley argue that in terrestrial clouds the downward transport of large drops as rain removes substantial mass from clouds and reduces their optical depth. Yet properly modeling the condensation, coagulation, and transport of such drops requires a complex microphysical model and a concomitant abundance of free parameters. In an attempt to account for the expected effects of such microphysical processes without modeling them in detail, they introduce a new term into the equation governing the mass fraction \\(q_{t}\\) of an atmospheric condensate at a given altitude \\(z\\) in an atmosphere: \\[K\\frac{\\partial q_{t}}{\\partial z}+f_{\\rm rain}w_{*}q_{c}=0. \\tag{1}\\] Here the upward transport of the vapor and condensate is by eddy diffusion as parameterized by an eddy diffusion coefficient \\(K\\). In equilibrium this upward transport is balanced by the downward transport of condensate \\(q_{c}\\). The free parameter \\(f_{\\rm rain}\\) has been introduced as the ratio of the mass-weighted droplet sedimentation velocity to \\(w_{*}\\), the convective velocity scale.
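To make Eq. (1) concrete, the short sketch below integrates it upward from a cloud base with a toy saturation profile. Every number here (\\(K\\), \\(w_{*}\\), the scale height, the saturation curve) is an illustrative assumption, not a parameter from Ackerman & Marley (2001); the point is only the qualitative behavior that a larger \\(f_{\\rm rain}\\) confines the condensate to a thinner layer with a smaller column mass.

```python
import numpy as np

# Toy discretization of Eq. (1):  K dq_t/dz = -f_rain * w_star * q_c,
# with q_t = q_v + q_c and the vapor q_v capped at a saturation value q_sat(z).
# All numbers below are illustrative assumptions, not Ackerman & Marley's.

K = 1.0e9       # eddy diffusion coefficient [cm^2 s^-1] (assumed)
w_star = 1.0e3  # convective velocity scale [cm s^-1] (assumed)
H = 1.0e6       # gas pressure scale height [cm] (assumed)

def q_sat(z):
    """Toy saturation mixing ratio, decreasing with altitude."""
    return 1.0e-3 * np.exp(-2.0 * z / H)

def condensate_profile(f_rain, z_top=3.0e6, n=30000):
    z = np.linspace(0.0, z_top, n)
    dz = z[1] - z[0]
    q_t = np.empty(n)
    q_c = np.zeros(n)
    q_t[0] = 1.0e-3  # total (vapor + condensate) mixing ratio at cloud base (assumed)
    for i in range(1, n):
        q_c[i - 1] = max(q_t[i - 1] - q_sat(z[i - 1]), 0.0)  # excess condenses
        # Eq. (1): upward eddy transport balances downward sedimentation ("rain")
        q_t[i] = max(q_t[i - 1] - (f_rain * w_star / K) * q_c[i - 1] * dz, 0.0)
    q_c[-1] = max(q_t[-1] - q_sat(z[-1]), 0.0)
    return z, q_c

for f_rain in (1, 3, 7):
    z, q_c = condensate_profile(f_rain)
    column = float(np.sum(q_c) * (z[1] - z[0]))  # column condensate, an optical-depth proxy
    print(f"f_rain = {f_rain}: column condensate ~ {column:.2e} (arbitrary units)")
```

With these assumed numbers the condensate falls off above the base on a scale of roughly \\(K/(f_{\\rm rain}w_{*})\\approx H/f_{\\rm rain}\\), so \\(f_{\\rm rain}\\approx 3\\) gives a condensate-to-gas scale-height ratio of order 0.3, comparable to the value quoted above for Jupiter's ammonia clouds.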
In essence \\(f_{\\rm rain}\\) allows downward mass transport to be dominated by massive drops larger than the scale set by the local eddy updraft velocity: in other words, rain. Ackerman & Marley (2001) treat \\(f_{\\rm rain}\\) as an adjustable parameter and explore its consequences.

Figure 2: \\(J-K\\) color of brown dwarfs as a function of \\(T_{\\rm eff}\\). Open datapoints represent L- and T-dwarf colors measured by Stephens et al. (2001) with L-dwarf temperatures estimated from fits of \\(K-L^{\\prime}\\) to models of Marley et al. (2001). Since \\(K-L^{\\prime}\\) is relatively insensitive to the presence or absence of clouds for the L-dwarfs it provides a good \\(T_{\\rm eff}\\) scale (Marley et al. 2001). The early T-types (\\(0.5<J-K<1\\)) are arbitrarily all assigned to \\(T_{\\rm eff}=1100\\,{\\rm K}\\). Likewise model \\(T_{\\rm eff}\\)s are given estimated error bars of \\(\\pm 100\\,{\\rm K}\\). The filled circle represents the position of the prototypical T-dwarf Gl 229 B (Saumon et al. 2000; Leggett et al. 1999). Four model cases are shown from the work of Marley et al. (2001): evolution with no clouds, and with clouds following the prescription of Ackerman & Marley (2001) with \\(f_{\\rm rain}\\) (rainfall efficiency, see text) varying from 7 (heavy rainfall) to 3 (moderate rain). Also shown are colors (C00) from models by Chabrier et al. (2000) in which there is no downward transport of condensate. The Marley et al. model lines are for objects with gravity \\(g=1000\\,{\\rm m\\,sec^{-2}}\\), roughly appropriate for a \\(30\\,{\\rm M_{J}}\\) object. There is little dependence of \\(J-K\\) on gravity in this regime. The Chabrier et al. lines are for 30 and \\(60\\,{\\rm M_{J}}\\) objects.

## 5 Clouds and the L- to T-dwarf transition

Given the importance of clouds to the L-dwarf spectra and the absence of significant cloud opacity in the T-dwarfs, it is clear that the departure of clouds with falling \\(T_{\\rm eff}\\) is an important milestone in the transition from L- to T-dwarfs. Marley (2000) demonstrated that a simple cloud model in which the silicate cloud was always one scale-height thick could account for the change in \\(J-K\\) color from the red L-dwarfs to the blue T-dwarfs. Now using the more physically motivated cloud model of Ackerman & Marley we can better test this hypothesis. Figure 3 illustrates the brightness temperature spectra of six brown dwarf models with three different \\(T_{\\rm eff}\\). In the warmest and coolest cases (\\(T_{\\rm eff}=1800\\) and 900 K) models with and without clouds appear similar. In the warmer case silicate and iron clouds are just forming in the atmosphere and are relatively optically thin, so their influence is slight. In the cooler case, as in the right-hand image of Uranus in Figure 1, the main cloud deck forms below the visible atmosphere. In the intermediate case (\\(T_{\\rm eff}=1400\\,\\)K) an optically thick cloud forms in the visible atmosphere and substantially alters the emitted spectrum. The atmospheric structure predicted by the Ackerman & Marley (2001) model for this case is similar to that inferred by Basri et al. (2000) from Cs line shapes in L-dwarf atmospheres. Thus a cooling brown dwarf moves from relatively cloud-free conditions to cloudy to clear. The solid lines in Figure 2 show how the \\(J-K\\) color evolves with \\(T_{\\rm eff}\\). Objects first become red as dust begins to dominate the visible atmosphere, then blue as water and methane begin to absorb strongly in K band.
Models in which the dust does not settle (Chabrier et al. 2000) predict \\(J-K\\) colors much redder than observed. Instead the colors of the L-dwarfs are best fit by models which include some precipitation as parameterized by \\(f_{\\rm rain}=3\\) to 5. The data clearly require models for objects cooler than the latest L-dwarfs to rapidly change from \\(J-K\\sim 2\\) to 0 over a relatively small \\(T_{\\rm eff}\\) range. While models with \\(f_{\\rm rain}=3\\) to 5 do turn blue as the clouds sink below the visible atmosphere (Figure 2), the variation is not rapid enough to satisfy the observational constraints. Ackerman & Marley suggest that holes in the clouds may begin to dominate the disk-averaged spectra as the clouds are sinking out of sight. Jupiter's 5-\\(\\mu\\)m spectrum is indeed dominated by flux emerging through holes in its clouds. Bailer-Jones & Mundt (2000) find variability in L-dwarf atmospheres that may be related to such horizontal cloud patchiness. Despite the successes of the Ackerman & Marley model, clearly much more work needs to be done to understand clouds in the brown dwarfs. Perhaps three-dimensional models of convection coupled to radiative transport will be required.

Figure 3: Model brightness temperature spectra from Ackerman & Marley (2001). Spectra depict approximate depth in the atmosphere at which emission arises. Solid curves depict cloudy models and dotted curves cloud-free models with the same \\(T_{\\rm eff}\\) (all for \\(g=1000\\,{\\rm m\\,sec^{-2}}\\) & \\(f_{\\rm rain}=3\\)). Horizontal dashed and solid lines demark the level at which cloud opacity, integrated from above, reaches 0.1 and the base of the silicate cloud, respectively. In the early-L like model (a) and the T-dwarf like model (c) clouds play a relatively small role as they are either optically thin (a) or form below the level at which most emission arises (c). Only in the late-L case (b) do the optically-thick clouds substantially alter the emitted spectrum and limit the depth from which photons emerge. Cloud base varies with pressure and cloud thickness varies with strength of convection, accounting for the varying cloud base temperature and thickness.

## 6 Extrasolar Giant Planets

The issues of cloud physics considered above of course will also apply to the extrasolar giant planets (Marley 1998; Marley et al. 1999; Seager, Whitney, & Sasselov 2000; Sudarsky, Burrows, & Pinto 2000). These papers demonstrate that the reflected spectra of extrasolar giant planets depend sensitively on the cloud particle size and vertical distribution. As already demonstrated by the brown dwarfs in the foregoing section, the emergent thermal flux is similarly affected. Indeed Sudarsky et al. suggest that a classification scheme based on the presence or absence of specific cloud layers be used to categorize the extrasolar giant planets. Moderate spectral resolution transit observations of close-in EGPs, if the bandpasses are correctly chosen, will certainly provide first-order information on cloud heights and vertical profiles of these atmospheres (Seager & Sasselov 2000; Hubbard et al. 2001). Coronagraphic multi-wavelength imaging of extrasolar giant planets will provide similar information (see Figure 1).

## 7 Conclusion

It is ironic that although the physics governing the vast bulk of the mass of brown dwarfs and extrasolar planets is very well in hand, the old problem of weather prediction governs the radiative transfer and thus the only remotely sensed quantity.
The good news is that there will soon be much more weather to talk about, even if we aren't any farther along in doing anything about it. This work was supported by NASA grant NAG5-8919 and NSF grants AST-9624878 and AST-0086288. The authors benefited from conversations with Dave Stevenson, Sara Seager, Adam Burrows, and Bill Hubbard. Heidi Hammel and Kevin Zahnle offered particularly helpful comments on an earlier draft of this contribution.

## References

* [1] Ackerman, A. & Marley, M. 2001, ApJ in press
* [2] Allard, F., Hauschildt, P. H., Baraffe, I. & Chabrier, G. 1996, ApJ, 465, L123
* [3] Bailer-Jones, C. A. L. & Mundt, R. 2001, A&A in press
* [4] Baines, K. H. & Hammel, H. B. 1994, Icarus 109, 20
* [5] Baines, K., Hammel, H., Rages, K., Romani, P., and Samuelson, R. 1995, in Neptune (Univ. Ariz. Press), 489
* [6] Basri, G., Mohanty, S., Allard, F., Hauschildt, P. H., Delfosse, X., Martin, E. L., Forveille, T. & Goldman, B. 2000, ApJ 538, 363
* [7] Brooke, T. Y., Knacke, R. F., Encrenaz, T., Drossart, P., Crisp, D. & Feuchtgruber, H. 1998, Icarus 136, 1
* [8] Burrows, A., Marley, M. S. & Sharp, C. M. 2000, ApJ 531, 438
* [9] Carlson, B. E., Lacis, A. A. & Rossow, W. B. 1994, J. Geophys. Res. 99, 114623
* [10] Chabrier, G., Baraffe, I., Allard, F. & Hauschildt, P. 2000, ApJ 542, 464
* [11] Fegley, B. J. & Lodders, K. 1994, Icarus 110, 117
* [12] Griffith, C. A., Yelle, R. V. & Marley, M. S. 1998, Science 282, 2063
* [13] Hubbard, W., Fortney, J., Lunine, J., Burrows, A., Sudarsky, D., & Pinto, P. 2001, ApJ submitted
* [14] Leggett, S. K., Toomey, D. W., Geballe, T. R. & Brown, R. H. 1999, ApJ 517, L139
* [15] Liebert, J., Reid, I. N., Burrows, A., Burgasser, A. J., Kirkpatrick, J. D. & Gizis, J. E. 2000, ApJ 533, L155
* [16] Lunine, J. I., Hubbard, W. B., Burrows, A., Wang, Y. & Garlow, K. 1989, ApJ 338, 314
* [17] Marley, M. S., Saumon, D., Guillot, T., Freedman, R. S., Hubbard, W. B., Burrows, A. & Lunine, J. I. 1996, Science 272, 1919
* [18] Marley, M. S., Gelino, C., Stephens, D., Lunine, J. I. & Freedman, R. 1999, ApJ 513, 879
* [19] Marley, M. S. 1998, in Brown dwarfs and extrasolar planets, ASP Conf. Series #134, eds. R. Rebolo, E. Martin & M. Zapatero Osorio, 383
* [20] Marley, M. S. 2000, in From Giant Planets to Cool Stars, ASP Conf. Series #212, eds. C. Griffith & M. Marley, 152
* [21] Mischna, M. A., Kasting, J. F., Pavlov, A., & Freedman, R. 2000, Icarus 145, 246
* [22] Pollack, J. B., Colburn, D. S., Flasar, F. M., Kahn, R., Carlston, C. E. & Pidek, D. G. 1979, J. Geophys. Res. 84, 2929
* [23] Saumon, D., Geballe, T. R., Leggett, S. K., Marley, M. S., Freedman, R. S., Lodders, K., Fegley, B. & Sengupta, S. K. 2000, ApJ 541, 374
* [24] Seager, S. & Sasselov, D. D. 2000, ApJ 537, 916
* [25] Seager, S., Whitney, B. A. & Sasselov, D. D. 2000, ApJ 540, 504
* [26] Stephens, D., Marley, M., Noll, K., & Chanover, N. 2001, ApJ, submitted
* [27] Sudarsky, D., Burrows, A. & Pinto, P. 2000, ApJ 538, 885
* [28] Tomasko, M. G., West, R. A., Orton, G. S. & Teifel, V. G. 1984, in Saturn (Univ. Ariz. Press), 150
* [29] Jones, H. R. A. & Tsuji, T. 1997, ApJ 480, L39
* [30] Tsuji, T., Ohnaka, K., Aoki, W. & Nakajima, T. 1996, A&A 308, L29
* [31] Tsuji, T., Ohnaka, K. & Aoki, W. 1999, ApJ 520, L119
* [32] West, R. A., Strobel, D. F. & Tomasko, M. G. 1986, Icarus 65, 161
Clouds and hazes are important throughout our solar system and in the atmospheres of brown dwarfs and extrasolar giant planets. Among the brown dwarfs, clouds control the colors and spectra of the L-dwarfs; the disappearance of clouds helps herald the arrival of the T-dwarfs. The structure and composition of clouds will be among the first remote-sensing results from the direct detection of extrasolar giant planets.

NASA Ames Research Center; Mail Stop 245-3; Moffett Field, CA, and New Mexico State University; Department of Astronomy; Las Cruces, NM 88003

NASA Ames Research Center; Mail Stop 245-4; Moffett Field, CA 94035
**Fragment Isotope Distributions and the Isospin Dependent Equation of State**

W.P. Tan\\({}^{a}\\), B-A. Li\\({}^{b}\\), R. Donangelo\\({}^{c}\\), C.K. Gelbke\\({}^{a}\\), M.-J. van Goethem\\({}^{a}\\), X.D. Liu\\({}^{a}\\), W.G. Lynch\\({}^{a}\\), S. Souza\\({}^{c}\\), M.B. Tsang\\({}^{a}\\), G. Verde\\({}^{a}\\), A. Wagner\\({}^{a1}\\), H.S. Xu\\({}^{a2}\\)

\\({}^{a}\\)_National Superconducting Cyclotron Laboratory and Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824, USA_, \\({}^{b}\\)_Department of Chemistry and Physics, Arkansas State University, State University, AR 72467, USA_, \\({}^{c}\\)_Instituto de Fisica, Universidade Federal do Rio de Janeiro, Cidade Universitaria, CP 68528, 21945-970 Rio de Janeiro, Brazil_

PACS numbers: 25.70.-z, 25.75.Ld, 25.10.Lx

\\({}^{1}\\)Present address: Institut für Kern- und Hadronenphysik, Forschungszentrum Rossendorf, D-01314 Dresden, Germany. \\({}^{2}\\)On leave from the Institute of Modern Physics, Lanzhou, China.

The equation of state (EOS) of strongly interacting matter governs the dynamics of dense matter in supernovae [1] and neutron stars [2,3]. Under laboratory-controlled conditions, the EOS has been investigated by colliding nuclei and measuring compression sensitive observables. The nuclear monopole and isoscalar dipole resonances, for example, sample the curvature of the EOS near the saturation density \\(\\rho_{0}\\) [4]. Measurements of the collective flow of particles emitted from the dense and compressed matter formed at relativistic incident energies can sample the EOS at densities as high as \\(4\\rho_{0}\\) [5]. In both types of experiment, investigations have primarily focused upon terms in the EOS that describe symmetric matter (equal numbers of protons and neutrons), leaving the asymmetry term that reflects the difference between neutron and proton densities largely unexplored [6]. For very asymmetric matter, however, details of this asymmetry term are critically important. For example, the asymmetry term dominates the pressure within neutron stars at densities of \\(\\rho\\leq 2\\rho_{0}\\), determines certain aspects of neutron star structure, and modifies proto-neutron star cooling rates [2,3]. Various studies have shown that the mean energy per nucleon \\(e(\\rho,\\delta)\\) in nuclear matter at density \\(\\rho\\) and isospin asymmetry parameter \\(\\delta=(\\rho_{n}-\\rho_{p})/(\\rho_{n}+\\rho_{p})\\) can be approximated by a parabolic function \\[e(\\rho,\\delta)=e(\\rho,0)+S(\\rho)\\,\\delta^{2}\\tag{1}\\] where \\(e(\\rho,0)\\) provides the EOS of symmetric matter, and \\(S(\\rho)\\) is the symmetry energy [2,3,6]. Different functional forms for \\(S(\\rho)\\) have been proposed [7], all consistent with constraints on \\(S(\\rho_{0})\\) from nuclear mass measurements. Some theoretical studies have explored the influence of the density dependence of \\(S(\\rho)\\) on nuclear reaction dynamics [7-11]. Calculations of energetic nucleus-nucleus collisions [8-11] reveal that the relative emission of neutrons and protons during the early non-equilibrium stages has a robust sensitivity to the density dependence of \\(S(\\rho)\\). In general, pre-equilibrium neutron emission increases relative to pre-equilibrium proton emission for smaller values of the curvature \\(K_{\\rm sym}\\) defined as: \\[K_{\\rm sym}=9\\rho_{0}^{2}\\frac{\\partial^{2}S(\\rho)}{\\partial\\rho^{2}}\\Bigg{|}_{\\rho=\\rho_{0}}.\\tag{2}\\]
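For orientation (an illustrative parameterization we supply here, not one used in the paper), a power-law symmetry energy makes the meaning of \\(K_{\\rm sym}\\) concrete: \\[S(\\rho)=S_{0}\\left(\\frac{\\rho}{\\rho_{0}}\\right)^{\\gamma}\\quad\\Rightarrow\\quad K_{\\rm sym}=9\\rho_{0}^{2}\\,\\frac{S_{0}\\,\\gamma(\\gamma-1)}{\\rho_{0}^{2}}=9\\,S_{0}\\,\\gamma(\\gamma-1).\\] Taking \\(S_{0}\\approx 30\\) MeV, \\(\\gamma\\approx 1.2\\) gives \\(K_{\\rm sym}\\approx+65\\) MeV while \\(\\gamma=0.5\\) gives \\(K_{\\rm sym}\\approx-68\\) MeV, values close to the "asy-stiff" (+61 MeV) and "asy-soft" (-69 MeV) cases used in the calculations below.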
Enhanced pre-equilibrium neutron emission reduces the neutron-to-proton ratio in the dense region that remains behind [8,10]. Central collisions of complex nuclei of comparable mass provide the principal means to produce and study nuclear matter at densities either significantly above or below the saturation value. In near central Sn+Sn collisions at an incident energy of E/A=50 MeV, for example, matter is compressed to densities of about \\(1.5\\rho_{0}\\) before expanding and disassembling into 6-7 fragments with charges of \\(3\\leq Z\\leq 30\\) plus assorted light particles [12]. Detailed analyses imply that such multifragment disassemblies occur at an overall density of \\(\\rho=\\rho_{0}/6\\)-\\(\\rho_{0}/3\\) and over a time interval of about \\(\\tau=30\\)-100 fm/c [13-21]. Essentially all initial isotopic compositions are determined by the properties of the system during this narrow time frame when the density is significantly less than \\(\\rho_{0}\\). This implies that fragment isotopic distributions may have a significant sensitivity to the density dependence of S(\\(\\rho\\)). One can also enhance the sensitivity to the asymmetry term S(\\(\\rho\\))\\(\\cdot\\)\\(\\delta^{2}\\) by varying the N/Z of the initial system. Unfortunately, the observed isotopic distributions are also influenced by secondary decay, making it very important to identify observables that are insensitive to sequential decay. Statistical calculations have identified certain ratios of isotopic multiplicities as being robust with respect to the secondary decay [22,23]. For example, the ratio of the multiplicities \\(R_{21}(N_{i},Z_{i})=M_{2}(N_{i},Z_{i})/M_{1}(N_{i},Z_{i})\\) of an isotope with neutron number \\(N_{i}\\) and proton number \\(Z_{i}\\) from two reactions 1 and 2 is relatively insensitive to the distortions from sequential decay. For multifragmentation, compound nuclear evaporation, and selected strongly damped collisions, such ratios as functions of \\(N_{i}\\) and \\(Z_{i}\\) have been experimentally shown to satisfy a power law relationship: \\[R_{21}(N_{i},Z_{i})=M_{2}(N_{i},Z_{i})/M_{1}(N_{i},Z_{i})=C\\,\\hat{\\rho}_{n}^{\\,N_{i}}\\,\\hat{\\rho}_{p}^{\\,Z_{i}},\\tag{3}\\] where \\(C\\) is an overall normalization and the parameters \\(\\hat{\\rho}_{n}\\) and \\(\\hat{\\rho}_{p}\\) can be interpreted as ratios of the relative free neutron and proton densities of the two reactions. Such ratios can also be constructed from the yields within a single reaction [22, 23], but the reduction of secondary decay effects may be less effective in this case. The solid circles and squares in Fig. 1 show values for \\(\\hat{\\rho}_{\\rm p}\\) and \\(\\hat{\\rho}_{\\rm n}\\), respectively, obtained from fragments with \\(3\\leq Z_{i}\\leq 8\\) detected in central \\({}^{112}\\)Sn+\\({}^{112}\\)Sn, \\({}^{112}\\)Sn+\\({}^{124}\\)Sn and \\({}^{124}\\)Sn+\\({}^{124}\\)Sn collisions at E/A=50 MeV [22]. The \\({}^{112}\\)Sn+\\({}^{112}\\)Sn reaction was labeled as 1 in Eq. (3); the different data points correspond to the three choices for reaction 2 and are plotted in both left and right panels as a function of N\\({}_{\\rm tot}\\)/Z\\({}_{\\rm tot}\\) where N\\({}_{\\rm tot}\\) and Z\\({}_{\\rm tot}\\) are the total numbers of neutrons and protons involved in reaction 2. The solid and open points in Fig. 2 show the experimental values for the mirror nuclei ratios constructed from the multiplicities of \\({}^{7}\\)Li, \\({}^{7}\\)Be, \\({}^{11}\\)B and \\({}^{11}\\)C fragments [22]. The upper and lower panels are for \\({}^{124}\\)Sn+\\({}^{124}\\)Sn and \\({}^{112}\\)Sn+\\({}^{112}\\)Sn collisions, respectively.
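Since Eq. (3) is linear in \\(\\ln R_{21}\\), the parameters \\(C\\), \\(\\hat{\\rho}_{n}\\) and \\(\\hat{\\rho}_{p}\\) can be extracted by an ordinary least-squares fit of \\(\\ln R_{21}=\\ln C+N_{i}\\ln\\hat{\\rho}_{n}+Z_{i}\\ln\\hat{\\rho}_{p}\\). The sketch below illustrates this on synthetic yield ratios; the isotope list and numerical values are placeholders, not the measured Sn+Sn data discussed above.

```python
import numpy as np

# Extract C, rho_n_hat, rho_p_hat of Eq. (3) from yield ratios R21(N, Z)
# via the linear fit  ln R21 = ln C + N ln(rho_n_hat) + Z ln(rho_p_hat).
# The data below are synthetic placeholders, not the Sn+Sn measurements.

isotopes = [(3, 3), (4, 3), (5, 3), (4, 4), (5, 4), (6, 4), (6, 5), (7, 5)]  # (N, Z)
true_C, true_rn, true_rp = 1.1, 1.25, 0.90
rng = np.random.default_rng(0)

R21 = np.array([true_C * true_rn**N * true_rp**Z * rng.normal(1.0, 0.05)
                for N, Z in isotopes])  # 5% synthetic measurement scatter

A = np.array([[1.0, N, Z] for N, Z in isotopes])          # design matrix
coef, *_ = np.linalg.lstsq(A, np.log(R21), rcond=None)    # least-squares fit
lnC, ln_rn, ln_rp = coef
print(f"C ~ {np.exp(lnC):.3f}, rho_n_hat ~ {np.exp(ln_rn):.3f}, "
      f"rho_p_hat ~ {np.exp(ln_rp):.3f}")
```

A fit of this kind is how the \\(\\hat{\\rho}_{p}\\) and \\(\\hat{\\rho}_{n}\\) values plotted in Fig. 1 are obtained from measured isotope multiplicities.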
As discussed previously, the isospin asymmetries of the excited systems prior to multifragment breakup are sensitive to the density dependence of the asymmetry term of the EOS [8-10]. The "prefragment" is reduced in size relative to the total system by preequilibrium emission by the time it disintegrates into the final fragments. Both the Stochastic Mean Field (SMF) [24] and the Boltzmann-Uehling-Uhlenbeck (BUU) [25] formalisms, which describe the time evolution of the collision using a self-consistent mean field (with and without fluctuations, respectively), predict preequilibrium emission that is increasingly neutron-deficient and corresponding prefragments that are more neutron-rich for larger values of \\(K_{\\rm sym}\\) [8, 26]. These two formalisms are essentially identical during the early stages of the collision when the densities exceed \\(\\rho_{0}/2\\) and fluctuations in the mean field are negligible. The mechanism for the disintegration of the prefragment into the observed fragments with \\(3\\leq Z\\leq 30\\) is an issue that is not settled but is instead evolving considerably as new measurements and models become available. Dynamical multifragmentation models [14, 27] have been used with some success, as have statistical models either with fragment emission probabilities determined from the rates for evaporative surface emission [28] or from the yields assuming thermal equilibrium [29, 30]. Here, we examine the isotopic effects shown in Figs. 1 and 2 in the latter limit, which assumes that thermal equilibrium is achieved at breakup. Such calculations have provided surprisingly accurate predictions for the fragmentation of projectile- and target-like residues in peripheral and mid-impact parameter heavy ion collisions at incident energies E\\({}_{\\rm beam}\\)/A\\(>\\)200 MeV [31,32], central heavy ion collisions at E\\({}_{\\rm beam}\\)/A \\(\\leq\\) 50 MeV [16,33] and in light ion induced collisions at E\\({}_{\\rm beam}\\)\\(>\\) 4 GeV [34], after some accounting is made for preequilibrium light particle emission. Comparisons of experimental data to such approaches provide an assessment of the importance of non-equilibrium phenomena; accordingly, more difficulties in such approaches are encountered in central heavy ion collisions at E\\({}_{\\rm beam}\\)/A \\(>\\) 50 MeV, reflecting the decreased time available for equilibration [33,35]. Specifically, we solved the BUU equation to obtain predictions for the dynamical emission of light particles during the compression and expansion stages of the collision. Then, we calculate the multifragment disintegration of the denser portions of the system via the Statistical Multifragmentation Model (SMM) of refs. [36,37]. In the first step of the hybrid calculations described here, the mean field for symmetric nuclear matter in the BUU calculations was chosen to have a stiff EOS (K=386 MeV) [38]. Calculations were performed with two different expressions for the asymmetry term, "asy-stiff" (K\\({}_{\\rm sym}\\)=+61 MeV) and "asy-soft" (K\\({}_{\\rm sym}\\)=-69 MeV) [8,9]. Using these mean fields, BUU calculations were followed through the initial compression and subsequent expansion for an elapsed time of 100 fm/c, at which point the central density decreased to a value of about \\(\\rho_{0}/3\\). The regions with densities \\(\\rho>\\rho_{0}/8\\) were then isolated and their decay was calculated with the SMM.
The N/Z ratio and the nucleon number A of these fragmenting systems ("prefragments") are given in the two leftmost columns in Table I. To illustrate the sensitivity of prefragment size and asymmetry to the elapsed time and density cutoff, values for N/Z and A are also given in Table I for an elapsed time of 80 fm/c. Calculations have shown that the N/Z ratio is not sensitive to the density cutoff [8]. While A is sensitive to these parameters, the N/Z ratio is relatively insensitive; to within 3%, values of N/Z of 1.27 (1.16), 1.36 (1.19) and 1.44 (1.23) are obtained for the source asymmetry of asy-stiff (asy-soft) calculations for \\({}^{112}\\)Sn+\\({}^{112}\\)Sn, \\({}^{112}\\)Sn+\\({}^{124}\\)Sn and \\({}^{124}\\)Sn+\\({}^{124}\\)Sn collisions independent of matching condition. The excitation energy per nucleon of the prefragment depends strongly on the matching condition; however, this quantity is presently difficult to calculate accurately. A range of values for the excitation energy per nucleon of E\\({}^{*}\\)/A = 4-6 MeV was therefore assumed in the subsequent SMM calculations to estimate the range of possible values consistent with the present approach. Accurate calculations for isotopic yields from the multifragment decay of the excited prefragment within the SMM approach require a careful accounting of the structure and branching ratios of the excited fragments [23, 36]. Using an SMM code [36, 37] that carefully addresses such effects, the isotopic ratios in Figs. 1 and 2 were calculated for the prefragment source parameters in Table I. To indicate the sensitivity of these ratios to the secondary decay of heavier particle unstable nuclei, the open rectangles indicate the ratios obtained from the yields of primary fragments and the cross-hatched rectangles indicate the ratios obtained from the yields of the final fragments after secondary decay. The vertical height of each rectangle reflects the range of values for each quantity as the assumed excitation energy is varied over the range of E\\({}^{*}\\)/A = 4-6 MeV. The left and right panels in Fig. 1 provide values calculated for prefragments obtained with the asy-stiff and asy-soft EOS's, respectively. In both panels, it can be seen that the ratios calculated from the primary yields (open rectangles) and those calculated from the secondary yields (cross-hatched rectangles) are similar, indicating that values for R\\({}_{21}\\)(N,Z) are relatively insensitive to secondary decay. With the exception of the value of \\(\\left<\\hat{\\rho}_{{}_{\\rm p}}\\right>\\) for the \\({}^{124}\\)Sn+\\({}^{124}\\)Sn reaction, N\\({}_{\\rm tot}\\)/Z\\({}_{\\rm tot}\\)=1.48, the ratios calculated from the final yields with the asy-stiff EOS (left panel) overlap the data. In comparison, the calculations using the asy-soft EOS (right panel) show a significantly weaker dependence on N\\({}_{\\rm tot}\\)/Z\\({}_{\\rm tot}\\) than do the data. The left and right panels in Fig. 2 provide values for the mirror nuclei ratios calculated with the asy-stiff and asy-soft EOS's, respectively. For these ratios, the sensitivity to the density dependence of the symmetry energy and to the secondary decay corrections are more significant. Ratios of mirror nuclei calculated with the asy-stiff EOS exceed those calculated with the asy-soft EOS by about a factor of two and overlap with the experimental values for three of the four ratios measured.
In the present simplified approach, the sensitivity of the isotope and mirror-nuclei ratios to the asymmetry term arises from the different (N/Z) ratios of the prefragments that are predicted by BUU calculations. There is little sensitivity to the total mass of the prefragment, but additional sensitivity to its excitation energy per nucleon. Within the present model-dependent analysis, this uncertainty in excitation energy is the limiting factor that prevents a more quantitative constraint on S(\\(\\rho\\)). Light cluster emission during the early compression and expansion stages of the collision can influence the N/Z ratio and excitation energy of the prefragment. Incorporating the emission of light particles up to A=4 within transport model calculations will help address this issue [39, 40]. While the present hybrid model approach demonstrates a sensitivity of the isotopic fragment yields to the asymmetry term of the EOS, the detailed nature of this sensitivity is model dependent. For example, the hybrid model predicts that an asy-stiff EOS leads to fragments that are more neutron-rich than those produced when the EOS is asy-soft. On the other hand, recent calculations with the Expanding Evaporating Source (EES) model, which assumes the fragments originate from surface emission and not from the equilibrium decay of the residue, predict the opposite trend [41]. It is therefore highly desirable to explore the connection between the fragment isotopic distributions and the EOS within other statistical and dynamical fragment production models currently in use and under development. These long-term goals require significant future theoretical efforts. In summary, we have explored the connection between the isotopic composition of particles emitted during an energetic nucleus-nucleus collision and the density dependence of the asymmetry term of the nuclear equation of state. This initial exploration was performed within the limit of an equilibrated freezeout condition. These calculations suggest that such data are sensitive to the density dependence of the asymmetry term of the equation of state. This work was supported in part by the National Science Foundation under Grant Nos. PHY-95-28844 and PHY-0088934, by Arkansas Science and Technology Authority Grant No. 00-B-14, by CNPq, FAPERJ, and FUJB, and by the MCT/FINEP/CNPq (PRONEX) program under contract #41.96.0886.00.

## References

* [1] H.A. Bethe, Rev. Mod. Phys. 62, 801 (1990).
* [2] C.J. Pethick and D.G. Ravenhall, Ann. Rev. Nucl. Part. Sci. 45, 429 (1995).
* [3] J.M. Lattimer and M. Prakash, Ap. J. 550, 426 (2001); J.M. Lattimer and M. Prakash, Phys. Rep. 333, 121 (2000).
* [4] D. H. Youngblood, H. L. Clark, and Y.-W. Lui, Phys. Rev. Lett. 82, 691 (1999); H. L. Clark, Y.-W. Lui, and D. H. Youngblood, Phys. Rev. C 63, 031301(R) (2001).
* [5] P. Danielewicz, Nucl. Phys. A 685, 368C (2001).
* [6] I. Bombaci, in "Isospin Physics in Heavy-Ion Collisions at Intermediate Energies", Eds. Bao-An Li and W. Udo Schroeder, NOVA Science Publishers, Inc. (New York), (2001) in press and refs. therein.
* [7] M. Prakash, T.L. Ainsworth and J.M. Lattimer, Phys. Rev. Lett. 61, 2518 (1988).
* [8] Bao-An Li, Phys. Rev. Lett. 85, 4221 (2000).
* [9] Bao-An Li, C.M. Ko and Zhongzhou Ren, Phys. Rev. Lett. 78, 1644 (1997).
* [10] V. Baran, M. Colonna, M. Di Toro and A.B. Larionov, Nucl. Phys. A 632, 287 (1998).
* [11] L. Scalone, M. Colonna and M. Di Toro, Phys. Lett. B 461, 9 (1999).
* [12] The fragment multiplicities are scaled according to the total charge from Xe+Sn [13] and Xe+Au [14] collisions.
* [13] N. Marie, A. Chbihi, J.B. Natowitz, A. Le Fevre, S. Salou, J.P. Wieleczko, L. Gingras, M. Assenard, G. Auger, Ch.O. Bacri, F. Bocage, B. Borderie, R. Bougault, R. Brou, P. Buchet, J.L. Charvet, J. Cibor, J. Colin, D. Cussol, R. Dayras, A. Demeyer, D. Dore, D. Durand, P. Eudes, J.D. Frankland, E. Galichet, E. Genouin-Duhamel, E. Gerlic, M. Germain, D. Gourio, D. Guinet, K. Hagel, P. Lautesse, J.L. Laville, J.F. Lecolley, T. Lefort, R. Legrain, N. Le Neindre, O. Lopez, M. Louvel, Z. Majka, A.M. Maskay, L. Nalpas, A.D. Nguyen, M. Parlog, J. Peter, E. Plagnol, A. Rahmani, T. Reposeur, M.F. Rivet, E. Rosato, F. Saint-Laurent, J.C. Steckmeyer, M. Stern, G. Tabacaru, B. Tamain, O. Tirel, E. Vient, C. Volant, and R. Wada, Phys. Rev. C58, 256 (1998).
* [14] B. Borderie, G. Tabacaru, Ph. Chomaz, M. Colonna, A. Guarnera, M. Parlog, M. F. Rivet, G. Auger, Ch. O. Bacri, N. Bellaize, R. Bougault, B. Bouriquet, R. Brou, P. Buchet, A. Chbihi, J. Colin, A. Demeyer, E. Galichet, E. Gerlic, D. Guinet, S. Hudan, P. Lautesse, F. Lavaud, J. L. Laville, J. F. Lecolley, C. Leduc, R. Legrain, N. Le Neindre, O. Lopez, M. Louvel, A. M. Maskay, J. Normand, P. Pawłowski, E. Rosato, F. Saint-Laurent, J. C. Steckmeyer, B. Tamain, L. Tassan-Got, E. Vient, and J. P. Wieleczko, Phys. Rev. Lett. 86, 3252 (2001).
* [15] D.R. Bowman, G.F. Peaslee, R.T. de Souza, N. Carlin, C.K. Gelbke, W.G. Gong, Y.D. Kim, M.A. Lisa, W.G. Lynch, L. Phair, M.B. Tsang, C. Williams, N. Colonna, K. Hanold, M.A. McMahan, G.J. Wozniak, L.G. Morreto, and W.A. Friedman, Phys. Rev. Lett. 67, 1527 (1991).
* [16] M. D'Agostino, A.S. Botvina, P.M. Milazzo, M. Bruno, G.J. Kunde, D.R. Bowman, L. Celano, N. Colonna, J.D. Dinius, A. Ferrero, M.L. Fiandri, C.K. Gelbke, T. Glasmacher, F. Gramegna, D.O. Handzy, D. Horn, W.C. Hsi, M. Huang, I. Iori, M.A. Lisa, W.G. Lynch, L. Manduci, G.V. Margagliotti, P.F. Mastinu, I.N. Mishustin, C.P. Montoya, A. Moroni, G.F. Peaslee, F. Petruzzelli, L. Phair, R. Rui, C. Schwarz, M.B. Tsang, G. Vannini, and C. Williams, Phys. Lett. B371, 175 (1996).
* [17] R. Popescu, T. Glasmacher, J.D. Dinius, S.J. Gaff, C.K. Gelbke, D.O. Handzy, M.J. Huang, G.J. Kunde, W.G. Lynch, L. Martin, C.P. Montoya, M.B. Tsang, N. Colonna, L. Celano, G. Tagliente, G.V. Margagliotti, P.M. Milazzo, R. Rui, G. Vannini, M. Bruno, M. D'Agostino, M.L. Fiandri, F. Gramegna, A. Ferrero, I. Iori, A. Moroni, F. Petruzzelli, P.F. Mastinu, L. Phair, and K. Tso, Phys. Rev. C58, 270 (1998).
* [18] T. Glasmacher, L. Phair, D. R. Bowman, C. K. Gelbke, W. G. Gong, Y. D. Kim, M. A. Lisa, W. G. Lynch, G. F. Peaslee, R. T. de Souza, M. B. Tsang, and F. Zhu, Phys. Rev. C50, 952 (1994).
* [19] L. Beaulieu, T. Lefort, K. Kwiatkowski, R. T. de Souza, W.-c. Hsi, L. Pienkowski, B. Back, D. S. Bracken, H. Breuer, E. Cornell, F. Gimeno-Nogues, D. S. Ginger, S. Gushue, R. G. Korteling, R. Laforest, E. Martin, K. B. Morley, E. Ramakrishnan, L. P. Remsberg, D. Rowland, A. Ruangma, V. E. Viola, G. Wang, E. Winchester, and S. J. Yennello, Phys. Rev. Lett. 84, 5971 (2000).
* [20] G. Wang, K. Kwiatkowski, D.S. Bracken, E. Renshaw Foxford, W.-c. Hsi, K.B. Morley, V.E. Viola, N.R. Yoder, C. Volant, R. Legrain, E.C. Pollacco, R.G. Korteling, W.A. Friedman, A. Botvina, J. Brzychczyk, and H. Breuer, Phys. Rev. C60, 014603 (1999).
* [21] D.H.E. Gross, G. Klotz-Engmann, and H. Oeschler, Phys. Lett. B224, 29 (1989).
* [22] H. S. Xu, M. B. Tsang, T. X. Liu, X. D. Liu, W. G.
Lynch, W. P. Tan, A. Vander Molen, G. Verde, A. Wagner, H. F. Xi, C. K. Gelbke, L. Beaulieu, B. Davin, Y. Larochelle, T. Lefort, R. T. de Souza, R. Yanez, V. E. Viola, R. J. Charity, and L. G. Sobotka, Phys. Rev. Lett. 85, 716 (2000).
* [23] M.B. Tsang, C.K. Gelbke, X.D. Liu, W.G. Lynch, W.P. Tan, G. Verde, H.S. Xu, W. A. Friedman, R. Donangelo, S. R. Souza, C.B. Das, S. Das Gupta, D. Zhabinsky, to be published.
* [24] M. Colonna, M. Di Toro, G. Fabbri, and S. Maccarone, Phys. Rev. C 57, 1410 (1998) and refs. therein.
* [25] G. Bertsch and S. Das Gupta, Phys. Rep. 160, 189 (1988) and refs. therein.
* [26] A.B. Larionov, A.S. Botvina, M. Colonna, M. Di Toro, Nucl. Phys. A 658, 375 (1999).
* [27] R. Wada, K. Hagel, J. Cibor, M. Gonin, Th. Keutgen, M. Murray, J. B. Natowitz, A. Ono, J. C. Steckmeyer, A. Kerambrum, J. C. Angelique, A. Auger, G. Bizard, R. Brou, C. Cabot, E. Crema, D. Cussol, D. Durand, Y. El Masri, P. Eudes, Z. Y. He, S. C. Jeong, C. Lebrun, J. P. Patry, A. Peghaire, J. Peter, R. Regimbart, E. Rosato, F. Saint-Laurent, B. Tamain, and E. Vient, Phys. Rev. C 62, 034601 (2000) and refs. therein.
* [28] W.A. Friedman, Phys. Rev. C42, 667 (1990).
* [29] J.P. Bondorf, A.S. Botvina, A.S. Iljinov, I.N. Mishustin, and K. Sneppen, Phys. Rep. 257, 133 (1995) and refs. therein.
* [30] D.H.E. Gross, Phys. Rep. 279, 119 (1997) and refs. therein.
* [31] B.-A. Li, A.R. De Angelis, and D.H.E. Gross, Phys. Lett. B303, 225 (1993).
* [32] A.S. Botvina, I.N. Mishustin, M. Begemann-Blaich, J. Hubele, G. Imme, I. Iori, P. Kreutz, G.J. Kunde, W.D. Kunze, V. Lindenstruth, U. Lynen, A. Moroni, W.F.J. Muller, C.A. Ogilvie, J. Pochodzalla, G. Raciti, Th. Rubehn, H. Sann, A. Schuttauf, W. Seidel, W. Trautmann, and A. Worner, Nucl. Phys. A584, 737 (1995).
* [33] C. Williams, W. G. Lynch, C. Schwarz, M. B. Tsang, W. C. Hsi, M. J. Huang, D. R. Bowman, J. Dinius, C. K. Gelbke, D. O. Handzy, G. J. Kunde, M. A. Lisa, G. F. Peaslee, L. Phair, A. Botvina, M-C. Lemaire, S. R. Souza, G. Van Buren, R. J. Charity, and L. G. Sobotka, U. Lynen, J. Pochodzalla, H. Sann, W. Trautmann, D. Fox and R. T. de Souza, and N. Carlin, Phys. Rev. C55, R2132 (1997).
* [34] K. Kwiatkowski, A.S. Botvina, D.S. Bracken, E. Renshaw Foxford, W.A. Friedman, R.G. Korteling, K.B. Morley, E.C. Pollacco, V.E. Viola, and C. Volant, Phys. Lett. B423, 21 (1998).
* [35] W. Reisdorf, D. Best, A. Gobbi, N. Herrmann, K.D. Hildenbrand, B. Hong, S.C. Jeong, Y. Leifels, C. Pinkenburg, J.L. Ritman, D. Schull, U. Sodan, K. Teh, G.S. Wang, J.P. Wessels, T. Wienold, J.P. Alard, V. Amouroux, Z. Basrak, N. Bastid, I. Belyaev, L. Berger, J. Biegansky, M. Bini, S. Boussange, A. Buta, R. Caplar, N. Cindro, J.P. Coffin, P. Crochet, R. Dona, P. Dupieux, M. Dzelalija, J. Ero, M. Eskef, P. Fintz, Z. Fodor, L. Fraysse, A. Genoux-Lubain, G. Goebels, G. Guillaume, Y. Grigorian, E. Hafele, S. Holbling, A. Houari, M. Ibnouzahir, M. Joriot, F. Jundt, J. Kecskemeti, M. Kirejczyk, P. Koncz, Y. Korchagin, M. Korolija, R. Kotte, C. Kuhn, D. Lambrecht, A. Lebedev, A. Lebedev, I. Legrand, C. Maazouzi, V. Manko, T. Matulewicz, P.R. Maurenzig, H. Merlitz, G. Mgebrishvili, J. Mosner, S. Mohren, D. Moisa, G. Montarou, I. Montbel, P. Morel, W. Neubert, A. Olmi, G. Pasquali, D. Pelte, M. Petrovici, G. Poggi, P. Pras, F. Rami, V. Ramillien, C. Roy, A. Sadchikov, Z. Seres, B. Sikora, V. Simion, K. Siwek-Wilczynska, V. Smolyankin, N. Taccetti, R. Tezkratt, L. Tizniti, M. Trzaska, M.A. Vasiliev, P. Wagner, K. Wisniewski, D. Wohlfarth, and A. Zhilin, Nucl. Phys. A612, 493 (1997).
* [36] S. R.
Souza, W. P. Tan, R. Donangelo, C. K. Gelbke, W. G. Lynch, and M. B. Tsang, Phys. Rev. C 62, 064607 (2000).
* [37] This model is based upon the approach of J.P. Bondorf, R. Donangelo, I.N. Mishustin, C.J. Pethick, H. Schulz, et al., Nucl. Phys. A 443, 321 (1985) but takes more careful account of structure effects.
* [38] The N/Z of the residue is independent of the choice for the incompressibility of the symmetric matter part of the EOS and independent of the choice of in-medium nucleon-nucleon cross section [9]. The mass of the residue is influenced by the latter two choices but is immaterial to the main issues of the present work.
* [39] L.G. Sobotka, J.F. Dempsey, and R.J. Charity, Phys. Rev. C55, 2109 (1997).
* [40] M.B. Tsang, P. Danielewicz, D.R. Bowman, N. Carlin, C.K. Gelbke, W.G. Gong, Y.D. Kim, W.G. Lynch, L. Phair, R.T. de Souza, and F. Zhu, Phys. Lett. B297, 243 (1992).
* [41] M.B. Tsang, W.A. Friedman, C.K. Gelbke, W.G. Lynch, G. Verde, H. Xu, Phys. Rev. Lett. (2001) in press.

\\begin{table} \\begin{tabular}{|l|l|l|l|l|l|l|l|l|} \\hline reaction & \\multicolumn{4}{l|}{t=100 fm/c, \\(\\rho_{\\rm c}\\)=\\(\\rho_{0}\\)/8} & \\multicolumn{4}{l|}{t=80 fm/c, \\(\\rho_{\\rm c}\\)=\\(\\rho_{0}\\)/8} \\\\ \\hline & \\multicolumn{2}{l|}{asy-soft} & \\multicolumn{2}{l|}{asy-stiff} & \\multicolumn{2}{l|}{asy-soft} & \\multicolumn{2}{l|}{asy-stiff} \\\\ \\hline & N/Z & A & N/Z & A & N/Z & A & N/Z & A \\\\ \\hline \\({}^{112}\\)Sn+\\({}^{112}\\)Sn & 1.16 & 153 & 1.27 & 152 & 1.17 & 165 & 1.27 & 165 \\\\ \\hline \\({}^{112}\\)Sn+\\({}^{124}\\)Sn & 1.19 & 161 & 1.36 & 162 & 1.22 & 174 & 1.36 & 175 \\\\ \\hline \\({}^{124}\\)Sn+\\({}^{124}\\)Sn & 1.23 & 172 & 1.44 & 173 & 1.27 & 183 & 1.45 & 185 \\\\ \\hline \\end{tabular} \\end{table} Table 1: The first two columns provide the N/Z ratio and number of nucleons in the prefragments produced in the calculations for an elapsed time of 100 fm/c and density cutoff of \\(\\rho_{0}\\)/8. The next two columns provide corresponding information for the same cutoff density but a shorter elapsed time of 80 fm/c. All calculations were performed at an impact parameter of 1 fm.

**Figure Captions:** Figure 1: Both panels: The solid circles and solid squares show values for \\(\\hat{\\rho}_{p}\\) and \\(\\hat{\\rho}_{n}\\), respectively, measured in central \\({}^{112}\\)Sn+\\({}^{112}\\)Sn, \\({}^{112}\\)Sn+\\({}^{124}\\)Sn and \\({}^{124}\\)Sn+\\({}^{124}\\)Sn collisions at E/A=50 MeV. Left panel: the open and cross-hatched rectangles show values for \\(R_{21}\\) calculated from the primary and final fragment yields, respectively, predicted by the hybrid calculations using the asy-stiff EOS. Right panel: the open and cross-hatched rectangles show values for \\(R_{21}\\) calculated from the primary and final fragment yields, respectively, predicted by the hybrid calculations using the asy-soft EOS.
Calculations predict a connection between the isotopic composition of particles emitted during an energetic nucleus-nucleus collision and the density dependence of the asymmetry term of the nuclear equation of state (EOS). This connection is investigated for central \\({}^{112}\\)Sn+\\({}^{112}\\)Sn and \\({}^{124}\\)Sn+\\({}^{124}\\)Sn collisions at E/A=50 MeV in the limit of an equilibrated freezeout condition. Comparisons between measured isotopic yield ratios and theoretical predictions in the equilibrium limit are used to assess the sensitivity to the density dependence of the asymmetry term of the EOS. This analysis suggests that such comparisons may provide an opportunity to constrain the asymmetry term of the EOS.
## 1 Introduction Bound states, as opposed to fundamental particles, are commonly thought of as derived quantities, in the sense that the properties of positronium or atoms can be computed from the known electromagnetic interactions of their constituents. The conceptual separation between bound states and fundamental particles is, however, not always so obvious. As an example, it has been proposed that the Higgs scalar can be viewed as a top-antitop bound state [1], with a compositeness scale much above the characteristic scale of electroweak symmetry breaking. The mass of the bound state (and therefore the scale of electroweak symmetry breaking) depends in this model on a free parameter characterizing the strength of a four-fermion interaction. For a bound-state mass or momentum near the compositeness scale \\(\\Lambda\\), all the usual properties of bound states are visible. If the Higgs boson mass is substantially smaller than \\(\\Lambda\\), however, the bound state behaves like a fundamental particle for all practical aspects relating to momentum scales sufficiently below the compositeness scale. Depending on the momentum scale, the particle can therefore appear either as a typical bound state or a fundamental particle. The scale dependence of the physical picture can be cast into the language of the renormalization group (RG) by considering a scale-dependent effective action. It should be possible to understand the issues related to bound states or composite fields in this context. In this paper we demonstrate how the effective behavior as bound state or "fundamental particle", depending on a parameter of the model, can be understood within the exact renormalization group equation for the effective average action [2]. In strong interactions, bound states or composite fields play an essential role in the dynamics at low momenta. In particular, scalar quark-antiquark bound states are responsible for chiral symmetry breaking with the associated dynamics of the pions. Furthermore, it has been proposed that the condensation of a color octet composite field may lead to "spontaneous breaking of color" [3] with a successful phenomenology of the spectrum and interactions of the light pseudoscalars, vector mesons and baryons. In order to verify or falsify such a proposal and connect the parameters of an effective low-energy description to the fundamental parameters of QCD, one needs a reliable connection between short and long distances within the RG approach. In such a formalism it is convenient to represent fundamental particles and bound states by fields on an equal footing. For quark-antiquark bound states this can be achieved by partial bosonization. A first picture of the flow of bound states in the exact RG approach has been developed in [4]. A shortcoming of these initial proposals is the fact that the bosonization is typically performed at a fixed scale. In a RG picture it would seem more appropriate that the relation between the fields for composite and fundamental particles becomes scale dependent. Furthermore, the simple observation that typical bound-state behavior should not lead to the same relevant (or marginal) parameters as in the case of fundamental particles has not been made very apparent so far. In this paper we propose a modified exact renormalization group equation which copes with these issues. The field variables themselves depend on the renormalization scale \\(k\\). For this purpose we use \\(k\\)-dependent nonlinear field transformations [5], [6].
As a consequence, partial bosonization can be performed continuously for all \\(k\\). This yields a description where explicit four-quark interactions which have the same structure as those produced by the exchange of a bound state are absent for every scale \\(k\\). These interactions are then completely accounted for by the exchange of composite fields. We will demonstrate this approach in a simple model, whereas the more formal aspects can be found in appendices. As a result, we conclude that "fundamental behavior" is related to a flow governed by an infrared unstable fixed point with the appropriate relevant parameters. For the typical "bound-state behavior" such a fixed point does not govern the flow. The parameters characterizing the bound-state mass and interactions are rather determined by an infrared attractive (partial) fixed point and become therefore predictable as a function of the relevant or marginal parameters characterizing masses and interactions of other "fundamental fields". As a consequence, the notions of bound state or fundamental particle become scale dependent, with a possible crossover from one behavior to another. As a simplified model sharing many features of electroweak or strong interactions we consider the gauged NJL model [7] (with one flavor \\(N_{F}=1\\)), with action \\[S = \\int d^{4}x\\big{[}\\bar{\\psi}i\\gamma^{\\mu}(\\partial_{\\mu}+{\\rm i}eA_{\\mu})\\psi+2\\lambda_{\\rm NJL}(\\bar{\\psi}_{R}\\psi_{L})(\\bar{\\psi}_{L}\\psi_{R}) \\tag{1}\\] \\[\\qquad\\qquad+{{1\\over 4}}F_{\\mu\\nu}F_{\\mu\\nu}+{{1\\over 2\\alpha}}(\\partial_{\\mu}A_{\\mu})^{2}\\big{]}.\\] We consider here a small gauge coupling \\(e\\). This model has two simple limits: For small \\(\\lambda_{\\rm NJL}\\) we recover massless quantum electrodynamics (QED), whereas for large enough \\(\\lambda_{\\rm NJL}\\) one expects spontaneous chiral symmetry breaking. The region of validity of perturbative electrodynamics can be established by comparing \\(\\lambda_{\\rm NJL}\\) to the effective four-fermion interaction generated by box diagrams in the limit of vanishing external momenta: \\[\\Delta{\\cal L}_{B} = \\frac{1}{4}\\Delta\\lambda(\\bar{\\psi}\\gamma_{\\mu}\\gamma_{5}\\psi)(\\bar{\\psi}\\gamma_{\\mu}\\gamma_{5}\\psi) \\tag{2}\\] \\[= \\Delta\\lambda\\left[2(\\bar{\\psi}_{L}\\psi_{R})(\\bar{\\psi}_{R}\\psi_{L})+\\frac{1}{4}(\\bar{\\psi}\\gamma_{\\mu}\\psi)(\\bar{\\psi}\\gamma_{\\mu}\\psi)\\right].\\] Since the box diagrams are infrared divergent in the chiral limit of vanishing electron mass, we have introduced a scale by implementing an infrared cutoff \\(\\sim k\\) in the propagators such that\\({}^{1}\\) \\[\\Delta\\lambda=6e^{4}\\int\\frac{d^{4}q}{(2\\pi)^{4}}[q^{2}(1+r_{B}(q))]^{-2}[q^{2}(1+r_{F}(q))^{2}]^{-1}=\\frac{9}{16\\pi^{2}}\\frac{e^{4}}{k^{2}}. \\tag{3}\\] (The second equality holds for the particular cutoff functions \\(r_{B},r_{F}\\) described in appendix E.) Footnote 1: We note that \\(\\Delta\\lambda\\) does not depend on the gauge-fixing parameter \\(\\alpha\\). As long as perturbation theory remains valid (small \\(e\\)), and \\(\\lambda_{\\rm NJL}\\lesssim\\Delta\\lambda\\), we do not expect the four-fermion interaction \\(\\sim\\lambda_{\\rm NJL}\\) to disturb substantially the physics of massless QED. (In this case \\(\\lambda_{\\rm NJL}\\) is an irrelevant parameter in the renormalization group (RG) language.) The spontaneous breaking of the chiral symmetry for \\(\\lambda_{\\rm NJL}>\\lambda_{\\rm c}\\) has been studied by a variety of methods [7, 8, 9, 10].
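To get a feeling for the numbers in Eq. (3) (our illustrative evaluation, not part of the original text), one can express the result through the fine-structure constant \\(\\alpha_{\\rm em}=e^{2}/4\\pi\\): \\[\\Delta\\lambda=\\frac{9}{16\\pi^{2}}\\frac{e^{4}}{k^{2}}=\\frac{9\\,\\alpha_{\\rm em}^{2}}{k^{2}}\\approx\\frac{4.8\\times 10^{-4}}{k^{2}}\\quad\\text{for }\\alpha_{\\rm em}\\approx 1/137,\\] so for a QED-like gauge coupling the dimensionless combination \\(\\lambda_{\\rm NJL}k^{2}\\) must stay below roughly \\(5\\times 10^{-4}\\) for the four-fermion interaction to remain negligible in the sense described above.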
For strong four-fermion interactions the dominant physics can be described by a Yukawa interaction with an effective composite scalar field. The phase transition at \\(\\lambda_{\\rm NJL}=\\lambda_{\\rm c}\\) is of second order. In the vicinity of this transition the composite scalar has all the properties usually attributed to a fundamental field. In particular, its mass is governed by a relevant parameter. In this paper we present a unified description of all these different features in terms of flow equations for the effective average action.

## 2 Flow equation for the gauged NJL model

Our starting point is the exact renormalization group equation for the scale-dependent effective action \\(\\Gamma_{k}\\) in the form [2] \\[\\partial_{t}\\Gamma_{k}=\\frac{1}{2}{\\rm STr}\\{\\partial_{t}R_{k}(\\Gamma_{k}^{(2 )}+R_{k})^{-1}\\}. \\tag{4}\\] The solution \\(\\Gamma_{k}\\) to this equation interpolates between its boundary condition in the ultraviolet \\(\\Gamma_{\\Lambda}\\), usually given by the classical action, and the effective action \\(\\Gamma_{k=0}\\), representing the generating functional of the 1PI Green's functions. This flow is controlled by the positive function \\(R_{k}(q^{2})\\), arbitrary to some extent, which regulates the infrared fluctuations at the scale \\(k\\) and falls off quickly for \\(q^{2}>k^{2}\\). Indeed, the insertion \\(\\partial_{t}R_{k}\\) suppresses the contribution of modes with momenta \\(q^{2}\\gg k^{2}\\). The operator \\(\\partial_{t}\\) represents a logarithmic derivative, \\(\\partial_{t}=k\\frac{d}{dk}\\). The heart of the flow equation is the fluctuation matrix \\(\\Gamma_{k}^{(2)}\\), which comprises the second functional derivatives of \\(\\Gamma_{k}\\) with respect to all fields; together with \\(R_{k}\\) it corresponds to the exact inverse propagator at a given scale \\(k\\). The (super-)trace runs over all internal indices including momenta and provides appropriate minus signs for the fermionic sector. For our study, we use the following simple truncation for the gauged NJL model including the scalars arising from bosonization [10] (Hubbard-Stratonovich transformation): \\[\\Gamma_{k} = \\int d^{4}x\\bigg{\\{}\\bar{\\psi}{\\rm i}\\partial\\!\\!\\!/\\psi+2\\bar{ \\lambda}_{\\sigma,k}\\,\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\bar{\\psi}_{\\rm L}\\psi_{\\rm R} \\tag{5}\\] \\[\\qquad\\qquad+Z_{\\phi,k}\\partial_{\\mu}\\phi^{*}\\partial_{\\mu}\\phi+ \\bar{m}_{k}^{2}\\,\\phi^{*}\\phi+\\bar{h}_{k}(\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\phi- \\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\phi^{*})\\] \\[\\qquad\\qquad+{{{ 1}\\over{ 4}}}F_{\\mu\\nu}F_{\\mu\\nu}+{{{ 1}\\over{ 2\\alpha}}}(\\partial_{\\mu}A_{\\mu})^{2}-e\\bar{\\psi}A\\!\\!\\!/\\psi\\bigg{\\}}.\\] This truncation is sufficient for our purposes. For quantitative estimates some of the simplifications could be improved in future work. This concerns, in particular: setting the fermion and gauge-field wave function renormalization constants to 1, reducing an a priori arbitrary scalar potential to a pure mass term, skipping all vector, axialvector, etc. channels of the four-fermion interaction as well as all higher-order operators, neglecting the running of the gauge coupling \\(e\\), and dropping all higher-order derivative terms. The gauge sector in particular is treated only insufficiently, although this is acceptable for small \\(e\\); for simplicity, we use Feynman gauge, \\(\\alpha=1\\).
The running of the scalar wave function renormalization \\(Z_{\\phi,k}\\) will also not be studied explicitly; since \\(Z_{\\phi,k}\\) is zero for the bosonization of a point-like four-fermion interaction, we shall assume that it remains small in the region of interest. Nevertheless, the essential points of how fermionic interactions may be translated into the scalar sector can be studied in this simple truncation. Of course, the truncation is otherwise not supposed to reveal all properties of the system even qualitatively; in particular, the interesting aspects of the gauged NJL model at strong coupling [11, 12, 13] cannot be covered unless the scalar potential is generalized. The truncation (5) is related to the bosonized gauged NJL model if we impose the relation \\[\\lambda_{\\rm NJL}:=\\frac{1}{2}\\,\\frac{\\bar{h}_{\\Lambda}^{2}}{\\bar{m}_{\\Lambda }^{2}} \\tag{6}\\] as a boundary condition at the bosonization scale \\(\\Lambda\\), together with \\(\\bar{\\lambda}_{\\sigma,\\Lambda}=0\\), \\(Z_{\\phi,\\Lambda}=0\\); it is this bosonization scale \\(\\Lambda\\) that we consider as the ultraviolet starting point of the flow. In fact, the action (1) can be recovered by solving the field equation of \\(\\phi\\) as a functional of \\(\\psi,\\bar{\\psi}\\) and reinserting the solution into Eq. (5). Using the truncation (5), the flow equation (4) can be boiled down to a set of coupled first-order differential equations for the couplings \\(\\bar{m}_{k}^{2}\\), \\(\\bar{h}_{k}\\) and \\(\\bar{\\lambda}_{\\sigma,k}\\). For this, we rewrite Eq. (4) in the form \\[\\partial_{t}\\Gamma_{k}=\\frac{1}{2}\\,{\\rm STr}\\,\\tilde{\\partial}_{t}\\,\\ln(\\Gamma_{k }^{(2)}+R_{k}), \\tag{7}\\] where the symbol \\(\\tilde{\\partial}_{t}\\) specifies a formal derivative that acts only on the \\(k\\) dependence of the cutoff function \\(R_{k}\\). Let us specify the elements of Eq. (4) more precisely: \\[\\left(\\Gamma_{k}^{(2)}\\right)_{ab}:=\\frac{\\overrightarrow{\\delta}}{\\delta \\Phi_{a}^{T}}\\,\\Gamma_{k}\\,\\frac{\\overleftarrow{\\delta}}{\\delta\\Phi_{b}}, \\quad\\Phi=\\left(\\begin{array}{c}A\\\\ \\phi\\\\ \\phi^{*}\\\\ \\psi\\\\ \\bar{\\psi}^{T}\\end{array}\\right),\\quad\\Phi^{T}=(A^{T},\\phi,\\phi^{*},\\psi^{T}, \\bar{\\psi}). \\tag{8}\\] Here \\(A\\equiv A_{\\mu}\\) is understood as a column vector, and \\(A^{T}\\) denotes its Lorentz-transposed row vector. For spinors the superscript \\(T\\) characterizes transposed quantities in Dirac space. The complex scalars \\(\\phi\\) and \\(\\phi^{*}\\) as well as the fermions \\(\\bar{\\psi}\\) and \\(\\psi\\) are considered as independent, but transposed quantities are not: e.g., \\(\\Phi\\) and \\(\\Phi^{T}\\) carry the same information. Performing the functional differentiation, the fluctuation matrix can be decomposed as \\[\\Gamma_{k}^{(2)}+R_{k}={\\cal P}+{\\cal F}, \\tag{9}\\] where \\({\\cal F}\\) contains all the field dependence and \\({\\cal P}\\) the propagators including the cutoff functions. Their explicit representations are given in appendix B. Inserting Eq. (9) into Eq.
(7), we can perform an expansion in the number of fields, \\[\\partial_{t}\\Gamma_{k} = \\frac{1}{2}\\,{\\rm STr}\\,\\tilde{\\partial}_{t}\\,\\ln({\\cal P}+{\\cal F})\\] \\[= \\frac{1}{2}\\,{\\rm STr}\\,\\tilde{\\partial}_{t}\\,\\left(\\frac{1}{{ \\cal P}}{\\cal F}\\right)-\\frac{1}{4}\\,{\\rm STr}\\,\\tilde{\\partial}_{t}\\,\\left( \\frac{1}{{\\cal P}}{\\cal F}\\right)^{2}+\\frac{1}{6}\\,{\\rm STr}\\,\\tilde{\\partial} _{t}\\,\\left(\\frac{1}{{\\cal P}}{\\cal F}\\right)^{3}-\\frac{1}{8}\\,{\\rm STr}\\, \\tilde{\\partial}_{t}\\,\\left(\\frac{1}{{\\cal P}}{\\cal F}\\right)^{4}+\\ldots\\,. \\tag{10}\\] The dots denote field-independent terms and terms beyond our truncation. For our purposes it suffices to take the fields constant in space. The corresponding powers of \\(\\frac{1}{{\\cal P}}{\\cal F}\\) can be computed by simple matrix multiplication and the (super-)traces can be taken straightforwardly. This results in the following flow equations for the desired couplings: \\[\\partial_{t}\\bar{m}_{k}^{2}\\equiv\\beta_{m} = 8k^{2}v_{4}l_{1}^{(F)\\,4}(0)\\,\\bar{h}_{k}^{2},\\] \\[\\partial_{t}\\bar{h}_{k}\\equiv\\beta_{h} = -16k^{2}v_{4}l_{1}^{(F)\\,4}(0)\\,\\bar{\\lambda}_{\\sigma,k}\\bar{h}_{k }-16v_{4}l_{1,1}^{(FB)\\,4}(0,0)\\,e^{2}\\bar{h}_{k},\\] \\[\\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\equiv\\beta_{\\lambda_{ \\sigma}} = -24k^{-2}v_{4}l_{1,2}^{(FB)\\,4}(0,0)\\,e^{4}-32v_{4}l_{1,1}^{(FB)\\,4 }(0,0)\\,e^{2}\\bar{\\lambda}_{\\sigma,k} \\tag{11}\\] \\[+8v_{4}\\,\\frac{1}{Z_{\\phi,k}}l_{1,1}^{(FB)\\,4}(0,\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}})\\,\\bar{h}_{k}^{2}\\bar{\\lambda}_{\\sigma,k}-8k^{2}v_{4}l_{1}^{ (F)\\,4}(0)\\,\\bar{\\lambda}_{\\sigma,k}^{2}\\] \\[+\\frac{2v_{4}}{Z_{\\phi,k}^{2}k^{2}}l_{1,2}^{(FB)4}\\left(0,\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}}\\right)\\bar{h}_{k}^{4},\\] where \\(v_{4}=1/(32\\pi^{2})\\). The threshold functions \\(l\\) fall off for large arguments and describe the decoupling of particles with mass larger than \\(k\\). They are defined in [10]; explicit examples are given in appendix E. At this point, it is important to stress that all vector (V) and axialvector (A) four-fermion couplings on the right-hand side of the flow equation have been brought into the form (V) and \\((V+A)\\), and then the \\((V+A)\\) terms have been Fierz transformed into the chirally invariant scalar four-fermion coupling \\((S-P)\\) used in our truncation (cf. Eq. (2)). The pure vector coupling is omitted for the time being and will be discussed in Sect. 6. It should also be mentioned that no tensor four-fermion coupling is generated on the right-hand side of the flow equation. Incidentally, the mass equation coincides with the results of [10]; we find agreement of the third equation with the results of [12], where the same model was investigated in a nonbosonized version.2 We note that the last term in \\(\\beta_{\\lambda_{\\sigma}}\\), which is \\(\\sim\\bar{h}_{k}^{\\,4}\\), is suppressed by the threshold function as long as \\(\\bar{m}^{2}/(Z_{\\phi}k^{2})\\) remains large. For simplicity of the discussion we will first omit it and comment on its quantitative impact later on. The inclusion of this term does not change the qualitative behavior. Footnote 2: Remaining numerical differences arise from the wave function renormalization, which was included in [12] but is neglected here for simplicity.
## 3 Fermion-boson translation by hand

As mentioned above, the boundary conditions for the flow equation are such that the four-fermion interaction vanishes at the bosonization scale, \\(\\bar{\\lambda}_{\\sigma,\\Lambda}=0\\). But lowering \\(k\\) a bit introduces the four-fermion interaction again according to Eq. (11): \\[\\left.\\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\right|_{k=\\Lambda}=-24\\Lambda^{-2} v_{4}l_{1,2}^{(FB)\\,4}(0,0)\\,e^{4}\\neq 0. \\tag{12}\\] In Eq. (5), we may again solve the field equations for \\(\\phi\\) as a functional of \\(\\bar{\\psi},\\psi\\) and find in Fourier space \\[\\phi(q)=\\frac{\\bar{h}_{k}(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)}{\\bar{m}_{k}^{\\, 2}+Z_{\\phi,k}q^{2}},\\quad\\phi^{*}(q)=-\\frac{\\bar{h}_{k}(\\bar{\\psi}_{\\rm R}\\psi _{\\rm L})(-q)}{\\bar{m}_{k}^{\\,2}+Z_{\\phi,k}q^{2}}. \\tag{13}\\] Inserting this result into \\(\\Gamma_{k}\\) yields the "total" four-fermion interaction (\\(\\int_{q}\\equiv\\int\\frac{d^{4}q}{(2\\pi)^{4}}\\)) \\[\\int_{q}\\left(2\\bar{\\lambda}_{\\sigma,k}+\\frac{\\bar{h}_{k}^{\\,2}}{\\bar{m}_{k}^ {\\,2}+Z_{\\phi,k}q^{2}}\\right)(\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)(\\bar{\\psi}_{ \\rm L}\\psi_{\\rm R})(q). \\tag{14}\\] The local component (for \\(q^{2}=0\\)) contains a direct contribution \\(\\sim 2\\bar{\\lambda}_{\\sigma,k}\\) (one-particle irreducible in the bosonized version) and a scalar exchange contribution (one-particle reducible in the bosonized version). From the point of view of the original fermionic theory, there is no distinction between the two contributions (both are 1PI in the purely fermionic language). This shows a redundancy in our parametrization, since we may change \\(\\bar{\\lambda}_{\\sigma,k}\\), \\(\\bar{h}_{k}\\) and \\(\\bar{m}_{k}^{2}\\) while keeping the effective coupling \\[2\\lambda_{\\sigma}^{\\rm eff}(q)=2\\bar{\\lambda}_{\\sigma,k}+\\frac{\\bar{h}_{k}^{2}} {\\bar{m}_{k}^{2}+Z_{\\phi,k}q^{2}} \\tag{15}\\] fixed. Indeed, a choice \\(\\bar{\\lambda}_{\\sigma,k}^{\\prime}\\), \\(\\bar{h}_{k}^{\\prime}\\), \\(\\bar{m}_{k}^{2\\prime}\\) leads to the same \\(\\lambda_{\\sigma}^{\\rm eff}(0)\\) if it obeys \\[\\bar{\\lambda}_{\\sigma,k}^{\\prime}=\\bar{\\lambda}_{\\sigma,k}+\\frac{\\bar{h}_{k}^ {2}}{2\\bar{m}_{k}^{2}}-\\frac{\\bar{h}_{k}^{2\\prime}}{2\\bar{m}_{k}^{2\\prime}}. \\tag{16}\\] In particular, we will choose a parametrization where \\(\\bar{\\lambda}_{\\sigma,k}^{\\prime}\\) vanishes for all \\(k\\): any increase \\(d\\bar{\\lambda}_{\\sigma,k}\\) according to Eq. (11) is compensated for by an increase of \\(\\bar{h}_{k}^{2}/(2\\bar{m}_{k}^{2})\\) of the same size. In this parametrization the four-fermion coupling remains zero, whereas the flow of \\(\\bar{h}_{k}^{2}/\\bar{m}_{k}^{2}\\) receives an additional contribution \\[\\partial_{t}\\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}}\\right)=\\partial_{t} \\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}}\\right)\\bigg{|}_{\\bar{\\lambda}_{ \\sigma,k}}+2\\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\bigg{|}_{\\bar{h}_{k}^{2},\\bar{ m}_{k}^{2}}. \\tag{17}\\]
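As a quick symbolic cross-check of the scalar-exchange structure in Eq. (14), one can insert the solution (13) back into the truncation (5). In the following minimal sympy sketch the Grassmann-even bilinears \\((\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\) and \\((\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\) are represented by commuting placeholder symbols `b` and `bbar` (our notation, legitimate since these bilinears commute):

```python
import sympy as sp

h, m2, Z, q2, lam = sp.symbols('hbar mbar2 Zphi q2 lam_sigma', positive=True)
b, bb = sp.symbols('b bbar')  # stand-ins for (psibar_L psi_R)(q), (psibar_R psi_L)(-q)

D = m2 + Z * q2
phi = h * b / D               # Eq. (13)
phi_ast = -h * bb / D

# kinetic+mass term, Yukawa term and the explicit four-fermion term of Eq. (5)
total = D * phi_ast * phi + h * (bb * phi - b * phi_ast) + 2 * lam * bb * b

expected = (2 * lam + h**2 / D) * bb * b   # integrand of Eq. (14)
print(sp.simplify(total - expected))       # 0
```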
More explicitly, we can write \\[\\partial_{t}\\left(\\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}^{2}}\\right)=\\frac{1}{ \\bar{h}_{k}^{2}}\\partial_{t}\\bar{m}_{k}^{2}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}} -2\\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}^{3}}\\partial_{t}\\bar{h}_{k}\\big{|}_{\\bar{ \\lambda}_{\\sigma,k}}-2\\frac{\\bar{m}_{k}^{4}}{\\bar{h}_{k}^{4}}\\partial_{t}\\bar {\\lambda}_{\\sigma,k}, \\tag{18}\\] with \\(\\partial_{t}\\bar{m}_{k}^{2}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}\\), \\(\\partial_{t}\\bar{h}_{k}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}\\), \\(\\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\) given by Eq. (11) with the replacement \\(\\bar{\\lambda}_{\\sigma,k}\\to 0\\) on the right-hand sides. One obtains \\[\\partial_{t}\\left(\\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}^{2}}\\right)=v_{4}\\left[8 \\,l_{1}^{(F)\\,4}(0)\\,k^{2}+32\\,l_{1,1}^{(FB)\\,4}(0,0)\\,e^{2}\\frac{\\bar{m}_{k}^ {2}}{\\bar{h}_{k}^{2}}+48\\,l_{1,2}^{(FB)\\,4}(0,0)\\,\\frac{e^{4}}{k^{2}}\\left( \\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}^{2}}\\right)^{2}\\right]. \\tag{19}\\] The characteristics of this flow can be understood best in terms of the dimensionless quantity \\[\\tilde{\\epsilon}_{k}=\\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}^{2}k^{2}}. \\tag{20}\\] It obeys the flow equation \\[\\partial_{t}\\tilde{\\epsilon}_{k}=\\beta_{\\tilde{\\epsilon}} = -2\\tilde{\\epsilon}_{k}+v_{4}\\left[8l_{1}^{(F)\\,4}(0)+32l_{1,1}^{( FB)\\,4}(0,0)e^{2}\\tilde{\\epsilon}_{k}+48l_{1,2}^{(FB)\\,4}(0,0)e^{4}\\tilde{ \\epsilon}_{k}^{2}\\right] \\tag{21}\\] \\[= -2\\tilde{\\epsilon}_{k}+\\frac{1}{8\\pi^{2}}+\\frac{1}{\\pi^{2}}e^{2} \\,\\tilde{\\epsilon}_{k}+\\frac{9}{4\\pi^{2}}e^{4}\\,\\tilde{\\epsilon}_{k}^{2},\\] where in the last line we have inserted the values of the threshold functions for the optimized cutoffs [14] discussed in appendix E. Neglecting the running of the gauge coupling \\(e\\), we note in Fig. 1 the appearance of two fixed points. For gauge couplings of order 1 or smaller and to leading order in \\(e\\), these two fixed points are given by \\[\\tilde{\\epsilon}_{1}^{*}\\simeq\\frac{1}{16\\pi^{2}}+{\\cal O}(e^{2}/(16\\pi^{2})^{2 }),\\quad\\tilde{\\epsilon}_{2}^{*}\\simeq\\frac{8\\pi^{2}}{9e^{4}}+{\\cal O}(1/e^{2}). \\tag{22}\\] The smaller fixed point \\(\\tilde{\\epsilon}_{1}^{*}\\) is infrared unstable, whereas the larger fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\) is infrared stable. Therefore, starting with an initial value \\(0<\\tilde{\\epsilon}_{\\Lambda}<\\tilde{\\epsilon}_{1}^{*}\\), the flow of the scalar mass-to-Yukawa-coupling ratio will be dominated by the first two terms of the flow equation (21), \\(\\sim-2\\tilde{\\epsilon}_{k}+1/(8\\pi^{2})\\). This is nothing but the flow of a theory involving a "fundamental" scalar with a Yukawa coupling to the fermion sector. Moreover, we will end up in a phase with (dynamical) chiral symmetry breaking, since \\(\\tilde{\\epsilon}\\) is driven to negative values. (Higher-order terms in the scalar potential need to be included once \\(\\tilde{\\epsilon}\\) becomes zero or negative.) This all agrees with the common knowledge that the low-energy degrees of freedom of the strongly coupled NJL model are (composite) scalars which nevertheless behave as fundamental particles. On the other hand, if we start with an initial value \\(\\tilde{\\epsilon}_{\\Lambda}\\) larger than the first (infrared unstable) fixed point, the flow will necessarily be attracted towards the second fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\); there, the flow will stop.
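The roots of the quadratic beta function and the estimates (22) are easy to check numerically; the following minimal Python sketch (for \\(e=1\\), our variable names) also runs a crude Euler flow towards the infrared, exhibiting the two regimes separated by \\(\\tilde{\\epsilon}_{1}^{*}\\):

```python
import numpy as np

e = 1.0
# beta(eps) = -2 eps + 1/(8 pi^2) + e^2 eps/pi^2 + 9 e^4 eps^2/(4 pi^2), Eq. (21)
coeffs = [9 * e**4 / (4 * np.pi**2), e**2 / np.pi**2 - 2.0, 1.0 / (8 * np.pi**2)]
eps1, eps2 = sorted(np.roots(coeffs))
print(eps1, 1.0 / (16 * np.pi**2))      # 0.00668 vs 0.00633, cf. Eq. (22)
print(eps2, 8 * np.pi**2 / (9 * e**4))  # 8.32    vs 8.77,    cf. Eq. (22)

# Euler steps towards the infrared (dt < 0): starting below eps1 the ratio
# runs negative (chiral symmetry breaking); above eps1 it approaches eps2.
for eps in (0.5 * eps1, 2.0 * eps1):
    t, dt = 0.0, -1e-3
    while t > -10.0 and -1.0 < eps < 50.0:
        eps += np.polyval(coeffs, eps) * dt
        t += dt
    print(eps)   # -> runs negative / -> approaches eps2
```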
Figure 1: Fixed-point structure of the \\(\\tilde{\\epsilon}_{k}\\) flow equation after fermion-boson translation by hand. The graph is plotted for the threshold functions discussed in appendix E with \\(e=1\\). Note that \\(\\tilde{\\epsilon}_{1}^{*}\\) is small but different from zero (cf. Eq. (22)). Arrows indicate the flow towards the infrared, \\(k\\to 0\\).

This flow does not at all remind us of the flow of a fundamental scalar. Moreover, there will be no dynamical symmetry breaking, since the mass remains positive. The effective four-fermion interaction corresponding to the second fixed point reads \\[\\lambda_{\\sigma}^{*}=\\frac{1}{2k^{2}\\tilde{\\epsilon}_{2}^{*}}\\approx\\frac{9}{1 6\\pi^{2}}\\frac{e^{4}}{k^{2}}. \\tag{23}\\] It coincides with the perturbative value (3) of massless QED. We conclude that the second fixed point characterizes massless QED. The scalar field shows a typical bound-state behavior with mass and couplings expressed by \\(e\\) and \\(k\\). (The question as to whether the bound state behaves like a propagating particle (i.e., "positronium") depends on the existence of an appropriate pole in the scalar propagator. At least for massive QED one would expect such a pole, with a renormalized mass corresponding to the "rest mass" of scalar positronium.) From a different viewpoint, the fixed point \\(\\tilde{\\epsilon}_{1}^{*}\\) corresponds directly to the critical coupling of the NJL model, which distinguishes between the symmetric and the broken phase. As long as the flow is governed by the vicinity of this fixed point, the scalar behaves like a fundamental particle, with a mass given by the relevant parameter characterizing the flow away from this fixed point. Our interpretation is that the "range of relevance" of these two fixed points tells us whether the scalar appears as a "fundamental" or a "composite" particle, corresponding to the state of the system being governed by \\(\\tilde{\\epsilon}_{1}^{*}\\) or \\(\\tilde{\\epsilon}_{2}^{*}\\), respectively.

The incorporation of the flow of the momentum-independent part of \\(\\bar{\\lambda}_{\\sigma,k}\\) into the flow of \\(\\bar{h}_{k}\\) and \\(\\bar{m}_{k}^{2}\\) affects only the ratio \\(\\bar{m}_{k}^{2}/\\bar{h}_{k}^{2}\\). At this point, it is not determined which part of the correction appears in the separate flow equations for \\(\\bar{m}_{k}^{2}\\) and \\(\\bar{h}_{k}\\), respectively. This degeneracy can be lifted if we include information about the flow of \\(\\bar{\\lambda}_{\\sigma,k}\\) for two different values of the external momenta. Let us define \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) as \\(\\bar{\\lambda}_{\\sigma,k}(p_{1},p_{2},p_{3},p_{4})\\) with \\(p_{1}=p_{3}=(1/2)(\\sqrt{s},\\sqrt{s},0,0)\\), \\(p_{2}=p_{4}=(1/2)(\\sqrt{s},-\\sqrt{s},0,0)\\), where \\(s=(p_{1}+p_{2})^{2}=(p_{3}+p_{4})^{2}\\) is the square of the exchanged momentum in the \\(s\\) channel [5]. The coupling \\(\\bar{\\lambda}_{\\sigma,k}\\) appearing on the right-hand side of Eq. (17) corresponds in this notation to \\(\\bar{\\lambda}_{\\sigma,k}(s=0)\\).
We can now achieve the simultaneous vanishing of \\(\\bar{\\lambda}_{\\sigma,k}(s=0)\\) and \\(\\bar{\\lambda}_{\\sigma,k}(s=k^{2})\\) if we redefine \\(\\bar{m}_{k}^{2}\\) and \\(\\bar{h}_{k}\\) such that they obey in addition \\[\\partial_{t}\\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}+Z_{\\phi,k}k^{2}} \\right)=\\partial_{t}\\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}+Z_{\\phi,k}k ^{2}}\\right)_{|\\bar{\\lambda}_{\\sigma,k}}+2\\partial_{t}\\bar{\\lambda}_{\\sigma,k }(s=k^{2}). \\tag{24}\\] Incorporating this effect should improve a truncation in which the 1PI four-fermion coupling is subsequently neglected, since we now realize a matching at two different momenta. The combination of Eqs. (17) and (24) specifies the evolution of \\(\\bar{m}_{k}^{2}\\) and \\(\\bar{h}_{k}\\), \\[\\partial_{t}\\bar{m}_{k}^{2} = \\beta_{m}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}+\\frac{2\\bar{m}_{k}^{2 }(\\bar{m}_{k}^{2}+Z_{\\phi,k}k^{2})}{\\bar{h}_{k}^{2}}\\left(\\frac{\\bar{m}_{k}^{2 }+Z_{\\phi,k}k^{2}}{Z_{\\phi,k}k^{2}}\\,\\partial_{t}\\Delta\\bar{\\lambda}_{\\sigma,k }+\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(s=0)\\right), \\tag{25}\\] \\[\\partial_{t}\\bar{h}_{k} = \\beta_{h}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}+\\frac{2\\bar{m}_{k}^{ 2}+Z_{\\phi,k}k^{2}}{\\bar{h}_{k}}\\,\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(s=0)+ \\frac{(\\bar{m}_{k}^{2}+Z_{\\phi,k}k^{2})^{2}}{Z_{\\phi,k}k^{2}\\,\\bar{h}_{k}}\\, \\partial_{t}\\Delta\\bar{\\lambda}_{\\sigma,k}, \\tag{26}\\] where we have used \\[\\Delta\\bar{\\lambda}_{\\sigma,k}=\\bar{\\lambda}_{\\sigma,k}(s=k^{2})-\\bar{\\lambda }_{\\sigma,k}(s=0). \\tag{27}\\] Let us finally comment on the influence of the last term \\(\\sim\\bar{h}_{k}^{4}\\) of Eq. (11), omitted up to now, on the flow equation (21) for \\(\\tilde{\\epsilon}_{k}\\): the contribution of this term to Eq. (21) is \\(\\sim(\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}})^{2}\\,l_{1,2}^{(FB)\\,4}(0,\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}})\\). For large \\(\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}}\\), this term approaches a constant, so that a slight vertical shift of the parabola of Fig. 1 is induced. We observe that this shift leaves the position of the second fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\) unaffected to lowest order in \\(e\\). This justifies the omission of the \\(\\sim\\bar{h}_{k}^{4}\\) term in the preceding discussion. The influence of the \\(\\bar{h}_{k}^{4}\\) term on the first fixed point is discussed at the end of the next section.
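The matching relations (25) and (26) follow from solving the two linear conditions (17) and (24) for \\(\\partial_{t}\\bar{m}_{k}^{2}\\) and \\(\\partial_{t}\\bar{h}_{k}\\). A minimal sympy sketch confirming this (our notation; `A` and `B` stand for \\(\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(s=0)\\) and \\(\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(s=k^{2})\\), and \\(Z_{\\phi,k}\\) is treated as constant):

```python
import sympy as sp

h, m2, Z, k = sp.symbols('hbar mbar2 Zphi k', positive=True)
bm, bh, A, B = sp.symbols('beta_m beta_h A B')
X, Y = sp.symbols('X Y')   # X = dt mbar^2, Y = dt hbar

E = m2 + Z * k**2
# Eq. (17): the flow of h^2/m^2 exceeds its fixed-field value by 2*A
eq1 = sp.Eq(2*h*Y/m2 - h**2*X/m2**2,
            2*h*bh/m2 - h**2*bm/m2**2 + 2*A)
# Eq. (24): the flow of h^2/(m^2 + Z k^2) exceeds it by 2*B; dt(Z k^2) = 2 Z k^2
eq2 = sp.Eq(2*h*Y/E - h**2*(X + 2*Z*k**2)/E**2,
            2*h*bh/E - h**2*(bm + 2*Z*k**2)/E**2 + 2*B)

sol = sp.solve([eq1, eq2], [X, Y], dict=True)[0]

dLam = B - A   # dt Delta lambda_sigma, cf. Eq. (27)
X_claim = bm + 2*m2*E/h**2 * (E/(Z*k**2)*dLam + A)          # Eq. (25)
Y_claim = bh + (2*m2 + Z*k**2)/h*A + E**2/(Z*k**2*h)*dLam   # Eq. (26)
print(sp.simplify(sol[X] - X_claim), sp.simplify(sol[Y] - Y_claim))  # 0 0
```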
## 4 Flow with continuous fermion-boson translation

The ideas of the preceding section shall now be made more rigorous by deriving the results directly from an appropriate exact flow equation. As a natural approach to this aim, we could search for a \\(k\\)-dependent field transformation of the scalars, \\(\\phi\\to\\hat{\\phi}_{k}[\\phi]\\). In terms of the new variables, the flow equation (7) should then provide for the vanishing of the four-fermion coupling in the transformed effective action. Indeed, we sketch this approach briefly in appendix D. Instead, we propose here a somewhat different approach, relying on a variant of the usual flow equation where the cutoff is adapted to \\(k\\)-dependent fields. The advantage is a very simple structure of the resulting flow equations, in coincidence with those of the preceding section. The idea is to employ a flow equation for a scale-dependent effective action \\(\\Gamma_{k}[\\phi_{k}]\\), where the field variable \\(\\phi_{k}\\) is allowed to vary during the flow; we derive this flow equation in appendix C.

To be precise within the present context, upon an infinitesimal renormalization group step from a scale \\(k\\) to \\(k-dk\\), the scalar field variables also undergo an infinitesimal transformation of the type (in momentum space) \\[\\phi_{k-dk}(q) = \\phi_{k}(q)+\\delta\\alpha_{k}(q)\\,(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R} )(q) \\tag{28}\\] \\[\\equiv \\phi_{k}(q)+\\delta\\alpha_{k}(q)\\int_{p}\\,\\bar{\\psi}_{\\rm L}(p) \\psi_{\\rm R}(p+q).\\] Including the corresponding transformation of the complex conjugate variable, the flow of the scalar fields is given by \\[\\partial_{t}\\phi_{k}(q) = -(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k}(q),\\] \\[\\partial_{t}\\phi_{k}^{*}(q) = (\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\,\\partial_{t}\\alpha_{k}(q). \\tag{29}\\] The transformation parameter \\(\\alpha_{k}(q)\\) is an a priori arbitrary function, expressing a redundancy in the parametrization of the effective action. As shown in Eq. (C.8), the effective action \\(\\Gamma_{k}[\\phi_{k},\\phi_{k}^{*}]\\) obeys the modified flow equation \\[\\partial_{t}\\Gamma_{k}[\\phi_{k},\\phi_{k}^{*}]=\\partial_{t}\\Gamma_{k}[\\phi_{k}, \\phi_{k}^{*}]\\big{|}_{\\phi_{k},\\phi_{k}^{*}}+\\int_{q}\\!\\left(\\frac{\\delta \\Gamma_{k}}{\\delta\\phi_{k}(q)}\\,\\partial_{t}\\phi_{k}(q)+\\frac{\\delta\\Gamma_{k }}{\\delta\\phi_{k}^{*}(q)}\\,\\partial_{t}\\phi_{k}^{*}(q)\\right), \\tag{30}\\] where the notation omits the remaining fermion and gauge fields for simplicity. The first term on the right-hand side is nothing but the flow equation for fixed variables, evaluated at \\(\\phi_{k}\\), \\(\\phi_{k}^{*}\\) instead of \\(\\phi,\\phi^{*}=\\phi_{\\Lambda},\\phi_{\\Lambda}^{*}\\). The second term reflects the flow of the variable. Projecting Eq. (30) onto our truncation (5), we arrive at modified flows for the couplings: \\[\\partial_{t}\\bar{m}_{k}^{2} = \\partial_{t}\\bar{m}_{k}^{2}\\Big{|}_{\\phi_{k},\\phi_{k}^{*}},\\] \\[\\partial_{t}\\bar{h}_{k} = \\partial_{t}\\bar{h}_{k}\\big{|}_{\\phi_{k},\\phi_{k}^{*}}+(\\bar{m}_{ k}^{2}+Z_{\\phi,k}q^{2})\\,\\partial_{t}\\alpha_{k}(q), \\tag{31}\\] \\[\\partial_{t}\\bar{\\lambda}_{\\sigma,k} = \\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\big{|}_{\\phi_{k},\\phi_{k}^{ *}}-\\bar{h}_{k}\\,\\partial_{t}\\alpha_{k}(q).\\] Again, the first terms on the right-hand sides are nothing but the right-hand sides of Eq. (11), i.e., the corresponding beta functions \\(\\beta_{m,h,\\lambda_{\\sigma}}\\). The further terms represent the modifications owing to the flow of the field variables, as obtained from the last two terms in Eq. (30) by inserting Eq. (29). Obviously, we could have generalized the method easily to the case of momentum-dependent couplings (see below). In the following, however, it suffices to study the point-like limit, which we associate with \\(q=0\\). We exploit the freedom in the choice of variables in Eq. (29) by fixing \\(\\alpha_{k}=\\alpha_{k}(q=0)\\) in such a way that the four-fermion coupling is not renormalized, \\(\\partial_{t}\\bar{\\lambda}_{\\sigma,k}=0\\). This implies the flow equation for \\(\\alpha_{k}\\), \\[\\partial_{t}\\alpha_{k}=\\beta_{\\lambda_{\\sigma}}/\\bar{h}_{k}. \\tag{32}\\] Together with the boundary condition \\(\\bar{\\lambda}_{\\sigma,\\Lambda}=0\\), this guarantees a vanishing four-fermion coupling at all scales, \\(\\bar{\\lambda}_{\\sigma,k}=0\\). The (nonlinear) fields corresponding to this choice are obtained for every \\(k\\) by integrating the flow (32) for \\(\\alpha_{k}\\), with \\(\\alpha_{\\Lambda}=0\\).
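The algebra behind this continuous translation is elementary and can be checked symbolically; in the minimal sympy sketch below (notation ours), `bm`, `bh` and `bl` denote the fixed-field beta functions of Eq. (11) in the point-like limit. The choice (32) keeps the four-fermion coupling at zero, while the flow of \\(\\bar{h}_{k}^{2}/\\bar{m}_{k}^{2}\\) picks up exactly the additional \\(2\\beta_{\\lambda_{\\sigma}}\\) found by hand in Eq. (17):

```python
import sympy as sp

h, m2 = sp.symbols('hbar mbar2', positive=True)
bm, bh, bl = sp.symbols('beta_m beta_h beta_lambda')

dt_alpha = bl / h                 # Eq. (32)
dt_lam = bl - h * dt_alpha        # Eq. (31) at q = 0: vanishes identically
dt_m2 = bm                        # Eq. (31): the mass flow is unchanged
dt_h = bh + m2 * dt_alpha         # Eq. (31) at q = 0

dt_ratio = sp.simplify(2*h*dt_h/m2 - h**2*dt_m2/m2**2)  # chain rule for h^2/m^2
fixed_field = 2*h*bh/m2 - h**2*bm/m2**2
print(dt_lam, sp.simplify(dt_ratio - fixed_field - 2*bl))   # 0 0
```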
Of course, imposing the condition (32) also influences the flow of the Yukawa coupling according to Eq. (31), \\[\\partial_{t}\\bar{h}_{k}=\\beta_{h}+\\frac{\\bar{m}_{k}^{2}}{\\bar{h}_{k}}\\,\\beta_ {\\lambda_{\\sigma}}. \\tag{33}\\] In consequence, the flow equation for the quantity of interest, \\(\\bar{h}_{k}^{2}/\\bar{m}_{k}^{2}\\), then reads \\[\\partial_{t}\\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}}\\right)=\\partial_{t} \\left(\\frac{\\bar{h}_{k}^{2}}{\\bar{m}_{k}^{2}}\\right)\\bigg{|}_{\\phi_{k},\\phi_{k }^{*}}+2\\beta_{\\lambda_{\\sigma}}. \\tag{34}\\] This coincides precisely with Eq. (17), where we have translated the fermionic interaction into the scalar sector by hand. The flow equation of the dimensionless combination \\(\\tilde{\\epsilon}_{k}=\\frac{\\bar{m}_{k}^{2}}{k^{2}\\bar{h}_{k}^{2}}\\) is therefore identical to the one derived in Eq. (21), so that the fixed-point structure described above is also recovered in this more rigorous approach. The underlying picture of this approach can be described as a permanent translation of the four-fermion interactions, arising during each renormalization group step, into the scalar interactions. Thereby, bosonization takes place at any scale and not only at a fixed initial one. One should note that the field transformation is not fixed uniquely by the vanishing of \\(\\bar{\\lambda}_{\\sigma,k}\\). For instance, an additional contribution in Eq. (28) \\(\\sim\\delta\\beta_{k}(q)\\phi_{k}(q)\\) can absorb the momentum dependence of the Yukawa coupling by modifying the scalar propagator. Similarly to the discussion in Sect. 3, this can be used in order to achieve simultaneously the vanishing of \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) for all \\(s\\) and \\(k\\) and a momentum-independent \\(\\bar{h}_{k}\\). First, the variable change \\[\\partial_{t}\\phi_{k}(q) = -(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k}(q)+ \\phi_{k}(q)\\,\\partial_{t}\\beta_{k}(q),\\] \\[\\partial_{t}\\phi_{k}^{*}(q) = (\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\,\\partial_{t}\\alpha_{k}(q)+ \\phi_{k}^{*}(q)\\,\\partial_{t}\\beta_{k}(q) \\tag{35}\\] indeed ensures the vanishing of \\(\\bar{\\lambda}_{\\sigma,k}(s=q^{2})\\) if \\(\\partial_{t}\\alpha_{k}(q)=\\bar{h}_{k}^{-1}\\partial_{t}\\bar{\\lambda}_{\\sigma,k }(q^{2})\\). This choice results in \\[\\partial_{t}\\bar{h}_{k}(q) = \\partial_{t}\\bar{h}_{k}(q)\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}+ \\frac{Z_{\\phi,k}q^{2}+\\bar{m}_{k}^{2}}{\\bar{h}_{k}}\\,\\partial_{t}\\bar{\\lambda }_{\\sigma,k}(q^{2})+\\bar{h}_{k}\\,\\partial_{t}\\beta_{k}(q),\\] \\[\\partial_{t}Z_{\\phi,k}(q)q^{2}+\\partial_{t}\\bar{m}_{k}^{2} = \\partial_{t}\\bar{m}_{k}^{2}\\big{|}_{\\bar{\\lambda}_{\\sigma,k}}+2 \\partial_{t}\\beta_{k}(q)\\,(Z_{\\phi,k}q^{2}+\\bar{m}_{k}^{2}), \\tag{36}\\] where \\(\\bar{h}_{k}(q)\\) and \\(Z_{\\phi,k}(q)\\) now depend on \\(q^{2}\\). Second, the momentum dependence of \\(\\bar{h}_{k}(q)\\) can be absorbed by the choice \\[\\partial_{t}\\beta_{k}(q)=-\\frac{Z_{\\phi,k}q^{2}+\\bar{m}_{k}^{2}}{\\bar{h}_{k}^ {2}}\\,\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(q^{2})+\\frac{1}{Z_{\\phi,k}k^{2}\\bar {h}_{k}^{2}}\\left[(Z_{\\phi,k}k^{2}+\\bar{m}_{k}^{2})^{2}\\partial_{t}\\bar{\\lambda }_{\\sigma,k}(k^{2})-\\bar{m}_{k}^{4}\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(0) \\right].
\\tag{37}\\] The particular form of the \\(q\\)-independent part of \\(\\partial_{t}\\beta_{k}\\) was selected in order to obtain \\(\\partial_{t}Z_{\\phi,k}(q^{2}=k^{2})=0\\), such that our approximation of constant \\(Z_{\\phi,k}\\) is self-consistent. Inserting Eq. (37) into the evolution equation (36) for \\(\\bar{h}_{k}\\) and \\(\\bar{m}_{k}^{2}\\), we recover Eqs. (25) and (26). We also note that the evolution of \\(\\tilde{\\epsilon}=\\bar{m}_{k}^{2}/(\\bar{h}_{k}^{2}(0)k^{2})\\) is independent of the choice of \\(\\beta_{k}(q)\\). It is interesting to observe that reinserting the classical equations of motion at a given scale in order to eliminate auxiliary variables is equivalent to the variant of the flow equation with flowing variables proposed here. In contrast, the standard form of the flow equation in combination with a variable transformation, to be discussed in appendix D, leads to a more complex structure, which is in general more difficult to solve.

## 5 Between massless QED and spontaneous chiral symmetry breaking

In this section, we briefly present some quantitative results for the flow in the gauged NJL model. Despite our rough approximation, they represent the characteristic physics. We concentrate on the flow of the dimensionless renormalized couplings \\[\\epsilon_{k}=\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}},\\quad h_{k}=\\bar{h}_{k} \\,Z_{\\phi,k}^{-1/2},\\quad\\tilde{\\epsilon}_{k}=\\frac{\\epsilon_{k}}{h_{k}^{2}},\\quad\\tilde{\\alpha}_{k}=\\alpha_{k}\\,Z_{\\phi,k}^{1/2}\\,k^{2} \\tag{38}\\] in the symmetric phase. Inserting the specific threshold functions of appendix E, we find the system of differential flow equations \\[\\partial_{t}\\epsilon_{k} = -2\\epsilon_{k}+\\frac{h_{k}^{2}}{8\\pi^{2}}-\\frac{\\epsilon_{k}( \\epsilon_{k}+1)}{h_{k}^{2}}\\left(\\frac{9e^{4}}{4\\pi^{2}}-\\frac{h_{k}^{4}}{16 \\pi^{2}}\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}\\right)\\big{(}1+(1+ \\epsilon_{k})Q_{\\sigma}\\big{)},\\] \\[\\partial_{t}h_{k} = -\\frac{e^{2}}{2\\pi^{2}}\\,h_{k}-\\frac{2\\epsilon_{k}+1+(1+\\epsilon _{k})^{2}Q_{\\sigma}}{h_{k}}\\left(\\frac{9e^{4}}{8\\pi^{2}}-\\frac{h_{k}^{4}}{32\\pi^ {2}}\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}\\right). \\tag{39}\\] The resulting flow for \\(\\tilde{\\epsilon}_{k}\\) is independent of \\(Q_{\\sigma}\\equiv\\partial_{t}\\Delta\\bar{\\lambda}_{\\sigma,k}/\\partial_{t}\\bar{ \\lambda}_{\\sigma,k}(0)\\): \\[\\partial_{t}\\tilde{\\epsilon}_{k}=\\beta_{\\tilde{\\epsilon}}=-2\\tilde{\\epsilon}_{ k}+\\frac{1}{8\\pi^{2}}+\\frac{e^{2}}{\\pi^{2}}\\tilde{\\epsilon}_{k}+\\frac{9e^{4}}{4 \\pi^{2}}\\tilde{\\epsilon}_{k}^{2}-\\frac{1}{16\\pi^{2}}\\frac{\\epsilon_{k}^{2}(3+ \\epsilon_{k})}{(1+\\epsilon_{k})^{3}}. \\tag{40}\\] Here the last term reflects the last contribution to \\(\\beta_{\\lambda_{\\sigma}}\\) in Eq. (11), which has been neglected in the preceding section (cf. Eq. (21)). We see that its influence is small for \\(\\epsilon_{k}\\ll 1\\), whereas for \\(\\epsilon_{k}\\gg 1\\) it reduces the constant term by a factor of \\(1/2\\). Note that, for a given \\(Q_{\\sigma}\\), Eqs. (39) form a closed set of equations. The same is true for the flows of \\(\\tilde{\\epsilon}_{k}\\) and \\(\\epsilon_{k}\\) if we express \\(h_{k}\\) in terms of \\(\\tilde{\\epsilon}_{k}\\) and \\(\\epsilon_{k}\\) in the first line of Eq. (39). In order to obtain \\(Q_{\\sigma}\\), the flow of \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) has to be known; however, far less information is already sufficient for a qualitative analysis.
First, it is natural to expect that \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) is maximal for \\(s=0\\), since \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) will be suppressed for large \\(s\\) owing to the external momenta. This implies \\(\\Delta\\bar{\\lambda}_{\\sigma,k}/\\bar{\\lambda}_{\\sigma,k}(s=0)<0\\). With the simplifying assumption that \\(\\Delta\\bar{\\lambda}_{\\sigma,k}/\\bar{\\lambda}_{\\sigma,k}(0)\\simeq\\mbox{const.}\\), we also conclude that \\[Q_{\\sigma}<0. \\tag{41}\\] For a qualitative solution of the flow equations, we assume \\(|Q_{\\sigma}|\\) to be of order 1 or smaller. We next need initial values \\(\\epsilon_{\\Lambda}\\), \\(\\tilde{\\epsilon}_{\\Lambda}\\) for solving the system of differential equations. We note that the initial value \\(\\epsilon_{\\Lambda}\\) diverges for the pure NJL model, since \\(Z_{\\phi,\\Lambda}=0\\). For large \\(\\epsilon_{k}\\), one has \\[\\partial_{t}\\epsilon_{k}=\\left[-2+\\frac{1}{8\\pi^{2}\\tilde{\\epsilon}_{k}}+(|Q_ {\\sigma}|\\,\\epsilon_{k}-1)\\left(\\frac{9e^{4}\\tilde{\\epsilon}_{k}}{4\\pi^{2}}- \\frac{1}{16\\pi^{2}\\tilde{\\epsilon}_{k}}\\right)\\right]\\epsilon_{k}, \\tag{42}\\] and we find that \\(\\epsilon_{k}\\) decreases rapidly for \\[\\tilde{\\epsilon}_{\\Lambda}>\\frac{1}{6e^{2}}. \\tag{43}\\] (In a more complete treatment, it decreases rapidly for arbitrary \\(\\tilde{\\epsilon}_{k}\\) owing to the generation of a nonvanishing \\(Z_{\\phi,k}\\) by the fluctuations. In the present truncation, the qualitative behavior of the flow will depend on the details of \\(Q_{\\sigma}\\) if \\(\\tilde{\\epsilon}_{\\Lambda}\\) does not satisfy this bound.) We confine our discussion to initial values satisfying Eq. (43), which can always be accomplished without fine-tuning. In Fig. 2 we present a numerical solution for large \\(\\epsilon_{\\Lambda}\\) (small nonzero \\(Z_{\\phi,k}\\)), \\(Q_{\\sigma}=-0.1\\), and \\(\\tilde{\\epsilon}_{\\Lambda}\\) slightly above the bound given by Eq. (43) for \\(e=1\\); these initial conditions correspond to the symmetric phase. We observe that both \\(h_{k}\\) and \\(\\epsilon_{k}\\) approach constant values in the infrared. This corresponds to the "bound-state fixed point" for \\(\\tilde{\\epsilon}_{k}\\): \\(\\tilde{\\epsilon}_{2}^{*}\\simeq 8\\pi^{2}/(9e^{4})\\). A constant \\(\\epsilon_{k}\\) implies that the renormalized mass term \\(m_{k}^{2}=\\epsilon_{k}k^{2}\\) decreases \\(\\sim k^{2}\\) in the symmetric phase. The precise value of the Yukawa coupling at the fixed point depends on \\(e\\) and \\(|Q_{\\sigma}|\\): \\[(h^{*})^{2}=16\\pi^{2}\\epsilon^{*}-\\frac{8\\epsilon^{*}(\\epsilon^{*}+1)(1-|Q_{ \\sigma}|(\\epsilon^{*}+1))}{(2\\epsilon^{*}+1-|Q_{\\sigma}|(\\epsilon^{*}+1)^{2}) }e^{2}. \\tag{44}\\] If \\(\\epsilon^{*}\\gg 1\\) still holds, the fixed-point values can be given more explicitly: \\[\\epsilon^{*}\\simeq\\frac{2}{|Q_{\\sigma}|},\\quad h^{*}\\simeq\\frac{3e^{2}}{2\\pi \\sqrt{|Q_{\\sigma}|}}. \\tag{45}\\] Note that \\(\\epsilon^{*}\\gg 1\\) is equivalent to \\(|Q_{\\sigma}|\\ll 1\\); numerically, we find that Eqs. (45) describe the fixed-point values reasonably well already for \\(|Q_{\\sigma}|\\lesssim 0.1\\). We observe that the fixed-point values are independent of the initial values \\(\\epsilon_{\\Lambda}\\) and \\(\\tilde{\\epsilon}_{\\Lambda}\\), so that the system has "lost its memory".
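The approach to this fixed point can be reproduced qualitatively by integrating the truncated system (39) numerically. The following Python sketch (our variable names; the values \\(e=1\\), \\(Q_{\\sigma}=-0.1\\) and the initial conditions are assumptions chosen as in the discussion above) flows towards the infrared and compares the plateau values with the estimates (45):

```python
import numpy as np
from scipy.integrate import solve_ivp

e, Q = 1.0, -0.1   # gauge coupling and Q_sigma (assumed constant)

def rhs(t, y):     # right-hand sides of Eq. (39)
    eps, h = y
    g = (3 + eps) / (1 + eps)**3
    p1 = 9*e**4/(4*np.pi**2) - h**4/(16*np.pi**2) * g
    p2 = 9*e**4/(8*np.pi**2) - h**4/(32*np.pi**2) * g
    deps = -2*eps + h**2/(8*np.pi**2) - eps*(eps+1)/h**2 * p1 * (1 + (1+eps)*Q)
    dh = -e**2/(2*np.pi**2)*h - (2*eps + 1 + (1+eps)**2*Q)/h * p2
    return [deps, dh]

eps0 = 1.0e4                # large eps_Lambda, i.e. small Z_phi,Lambda
h0 = np.sqrt(eps0 / 0.2)    # eps~_Lambda = 0.2 > 1/(6 e^2), cf. Eq. (43)
sol = solve_ivp(rhs, [0.0, -25.0], [eps0, h0], method='LSODA', rtol=1e-8)

eps_ir, h_ir = sol.y[0, -1], sol.y[1, -1]
print(eps_ir, 2.0/abs(Q))                      # plateau vs eps* ~ 20,  Eq. (45)
print(h_ir, 3*e**2/(2*np.pi*np.sqrt(abs(Q))))  # plateau vs h*  ~ 1.51, Eq. (45)
```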
Finally, the parameter \\(\\tilde{\\alpha}_{k}\\) governing the field redefinition obeys the flow equation \\[\\partial_{t}\\tilde{\\alpha}_{k}=2\\tilde{\\alpha}_{k}-\\frac{9e^{4}}{8\\pi^{2}\\,h _{k}}+\\frac{h_{k}^{3}}{32\\pi^{2}}\\,\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}. \\tag{46}\\] A numerical solution is plotted in Fig. 2, right panel. Also \\(\\tilde{\\alpha}_{k}\\) approaches a constant for small \\(k\\). Therefore, the transformation parameter \\(\\alpha_{k}\\sim\\tilde{\\alpha}_{k}/k^{2}\\) increases for small \\(k\\). The physical picture of the fixed point (45) is quite simple. We may first translate back to an effective four-fermion interaction by solving the scalar field equations: \\[\\bar{\\lambda}_{\\sigma,k}(q^{2})=\\frac{1}{2}\\frac{(h^{*})^{2}}{(q^{2}+\\epsilon ^{*}\\,k^{2})}=\\frac{9e^{4}}{8\\pi^{2}}\\frac{1}{(|Q_{\\sigma}|q^{2}+2k^{2})}. \\tag{47}\\] In the limit \\(k\\to 0\\), this mimics the exchange of a massless positronium-like state with effective coupling \\(h^{*}=3e^{2}/(2\\pi\\sqrt{|Q_{\\sigma}|})\\). Indeed, if we switch on the electron mass \\(m_{\\rm e}\\), we expect that the running of the positronium mass term stops at \\(k^{2}\\simeq m_{\\rm e}^{2}\\). In consequence, the positronium state will acquire a mass \\(\\sim m_{\\rm e}\\), which is, in principle, calculable by an improved truncation within our framework. On the other hand, starting with small enough \\(\\tilde{\\epsilon}_{\\Lambda}\\), one will observe chiral symmetry breaking, as we have already argued in Sect. 3. In this case, quantitative accuracy would require including at least the flow of the scalar wave function renormalization. Near the boundary between the two phases, the infrared physics is described by a renormalizable theory for QED with a neutral scalar coupled to the fermion.

## 6 Modified gauge fields

The possibility of \\(k\\)-dependent field redefinitions is not restricted to composite fields. We demonstrate this here by a transformation of the gauge field, which becomes a \\(k\\)-dependent nonlinear combination according to \\[\\partial_{t}A_{\\mu}(q)=-\\partial_{t}\\gamma_{k}(\\bar{\\psi}\\gamma_{\\mu}\\psi)(q)- \\partial_{t}\\delta_{k}(\\partial_{\\nu}F_{\\mu\\nu})(q)-\\partial_{t}\\zeta_{k}( \\partial_{\\mu}\\partial_{\\nu}A_{\\nu})(q). \\tag{48}\\] This transformation can absorb the vector channel in the four-fermion interaction. Indeed, we may enlarge our truncation (5) by a term \\[\\Gamma^{(V)}_{k}=\\int d^{4}x\\bar{\\lambda}_{v,k}(\\bar{\\psi}\\gamma_{\\mu}\\psi)( \\bar{\\psi}\\gamma_{\\mu}\\psi) \\tag{49}\\] (or a corresponding generalization with a momentum-dependent coupling \\(\\bar{\\lambda}_{v,k}\\)). The flow equation for \\(\\bar{\\lambda}_{v}\\) reads \\[\\partial_{t}\\bar{\\lambda}_{v,k}=-6k^{-2}v_{4}l^{(FB)4}_{1,2}(0,0)e^{4}+\\frac{ 1}{2}k^{-2}v_{4}\\frac{1}{Z_{\\phi,k}^{2}}l^{(FB)4}_{1,2}(0,\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}})\\bar{h}_{k}^{4}+e\\partial_{t}\\gamma_{k}+{\\cal O}(\\bar{\\lambda} _{\\sigma,k},\\bar{\\lambda}_{v,k}). \\tag{50}\\] In the following, we again omit the term \\(\\sim\\bar{h}_{k}^{4}\\), whose contributions are subdominant once the scalars have decoupled from the flow. Choosing \\(\\gamma_{k}\\) according to \\[\\partial_{t}\\gamma_{k}=6k^{-2}v_{4}l^{(FB)4}_{1,2}(0,0)e^{3}, \\tag{51}\\] we can obtain a vanishing \\(\\bar{\\lambda}_{v}\\) for all \\(k\\).
This procedure introduces additional terms \\(\\sim\\bar{\\sigma}_{k}(\\partial_{\\nu}F_{\\mu\\nu})\\bar{\\psi}\\gamma_{\\mu}\\psi\\) with \\(\\bar{\\sigma}_{k}\\) obeying \\[\\partial_{t}\\bar{\\sigma}_{k}=-\\partial_{t}\\gamma_{k}+e\\partial_{t}\\delta_{k}+\\ldots, \\tag{52}\\] where the dots correspond to contributions from \\(\\partial_{t}\\Gamma\\) at fixed fields. Adjusting \\(\\delta_{k}\\) permits us to enforce \\(\\bar{\\sigma}_{k}=0\\). As a result, only the gauge field propagator gets modified by higher derivative terms. We note that the modified gauge field has the same gauge transformation properties as the original field only for \\(\\zeta_{k}=0\\). In fact, the gauge fixing becomes dependent on the fermions by a term \\(\\bar{\\sigma}_{k}^{(gf)}\\bar{\\psi}\\gamma_{\\mu}\\psi\\partial_{\\mu}\\partial_{\\nu} A_{\\nu}\\) according to \\[\\partial_{t}\\bar{\\sigma}_{k}^{(gf)}=\\frac{1}{\\alpha}\\partial_{t}\\gamma_{k}+e \\partial_{t}\\zeta_{k}+\\ldots \\tag{53}\\] Again, we can enforce a vanishing \\(\\bar{\\sigma}_{k}^{(gf)}\\) for all \\(k\\) by an appropriate choice of \\(\\zeta_{k}\\). The contribution to the evolution of the gauge field propagator resulting from the field redefinition (48) is \\[\\partial_{t}\\Gamma^{(A2)}=-\\partial_{t}\\delta_{k}(\\partial_{\\nu}F_{\\mu\\nu})( \\partial_{\\rho}F_{\\mu\\rho})+\\frac{1}{\\alpha}\\partial_{t}\\zeta_{k}(\\partial_{ \\mu}\\partial_{\\nu}A_{\\nu})(\\partial_{\\mu}\\partial_{\\rho}A_{\\rho})+\\dots \\tag{54}\\] With \\(\\partial_{t}\\gamma_{k}\\sim e^{3},\\ \\partial_{t}\\delta_{k}\\sim e^{2},\\ \\partial_{t}\\zeta_{k}\\sim e^{2}/\\alpha\\) we see that the field redefinitions lead to a modification of the kinetic term (or a momentum-dependent wave function renormalization of the gauge field) already in leading order \\(\\sim e^{2}\\). Depending on the precise definition of the renormalized gauge coupling, this can modify the \\(\\beta\\)-function for the "composite gauge field" as compared to the original one. This modification is the counterpart of the elimination of the effective vertices \\(\\sim\\bar{\\sigma}_{k},\\bar{\\sigma}_{k}^{(gf)}\\). (We note that no corrections arise if \\(e\\) is defined by the effective electromagnetic vertex at very small momentum.)

## 7 Conclusions

It is an inherent feature of quantum field theory that a system with certain fundamental degrees of freedom at a "microscopic" scale can exhibit completely different degrees of freedom at a "macroscopic" scale, which appear to be equally "fundamental" in an operational sense. A prominent example is provided by the pions in a low-momentum effective theory of strong interactions. These different faces of one and the same system are related by the action of the renormalization group. In the present work, we realize this formal concept with the aid of a renormalization group flow equation for the effective average action whose field variables are allowed to change continuously under the flow from one scale to another. In particular, this generally nonlinear transformation of variables is suitable for studying the renormalization flow of bound states. We illustrate these ideas by considering the gauged NJL model at weak gauge coupling. Our flow equations can clearly identify the phase transition to spontaneous chiral symmetry breaking. In our picture, the interaction between the fermions, representing the fundamental degrees of freedom at high momentum scales, gives rise to a pairing into scalar degrees of freedom.
The bound states formed in this way may still appear effectively as composite objects at lower scales, or rather as fundamental degrees of freedom, depending on the strength of the initial interaction. As the criterion that distinguishes between these two cases, we classify the renormalization flow of the scalar bound states: "fundamental behavior" is governed by a typical infrared unstable fixed point, with the relevant parameter corresponding to the mass of the scalar. Contrary to this, "bound-state behavior" is related to an infrared attractive (partial) fixed point that is governed by the relevant and marginal parameters of the "fundamental" fermion and photon (massless QED in our case). The flow may show a crossover from one to the other characteristic behavior. This physical picture is obtained from the continuous transformation of the field variables under the flow that translates the fermion interactions into the parameters of the scalar sector. In the case of spontaneous chiral symmetry breaking, the scalars always appear as "fundamental" on scales characteristic for the phase transition and the order parameter. From a different perspective, we propose a technique for performing a bosonization of the self-interactions of fundamental fermion fields permanently at all scales during the renormalization flow. Provided that appropriate low-energy degrees of freedom of a quantum system are known, our modified flow equation for the average effective action is capable of describing the crossover from one set of variables to another during the flow in a well-controlled manner. Thereby, the notions of fundamental particle and bound state become scale dependent. For the translation from fermion bilinears to scalars, the gauge field acts rather as a spectator, permanently catalyzing the generation of four-fermion interactions under the flow. In the vector channel, however, the gauge field can also participate in the field transformation. Hereby, a four-fermion interaction \\(\\sim(\\bar{\\psi}\\gamma^{\\mu}\\psi)^{2}\\) is absorbed at the expense of a modified photon kinetic term, which can lead to a change in the beta function \\(\\beta_{e}\\) of an appropriately defined effective gauge coupling. We expect this type of transformation to be particularly useful in the strong-gauge-coupling sector of the gauged NJL model. Here it is known that the four-fermion interaction can acquire an anomalous scaling dimension of 4 (instead of 6) [11], so that it mixes with the gauge interaction (in a renormalization-group sense) anyway. It should be worthwhile to employ this transformation in a search for ultraviolet stable fixed points in the \\(\\beta_{e}\\) function, to be expected for a large number of fermion species \\(N_{\\rm F}\\) [9]. In view of the motivating cases of top quark condensation in the Higgs sector and color octet condensation in low-energy QCD, we now have an important tool at our disposal which allows for a nonperturbative study of the transition from the underlying theory to the condensing degrees of freedom. Particularly in the case of "spontaneous breaking of color", a quantitatively reliable calculation of the potential for the quark-antiquark degrees of freedom seems possible. Analogously to the gauged NJL model, the effective quark self-interactions, being induced by the exchange of gluons and instantons, have to be translated into the scalar bound-state sector.
The renormalization flow of the latter and the symmetry properties of their corresponding potential shall finally adjudicate on "spontaneous breaking of color".

## Appendix A Dirac algebra and Fierz transformations

We work in a chiral basis, \\(\\psi=\\left(\\begin{array}{c}\\psi_{\\rm L}\\\\ \\psi_{\\rm R}\\end{array}\\right)\\), \\(\\bar{\\psi}=(\\bar{\\psi}_{\\rm R},\\bar{\\psi}_{\\rm L})\\), where \\(\\psi\\) and \\(\\bar{\\psi}\\) are anticommuting Grassmann variables and should be considered as independent; \\(\\psi_{L}=\\frac{1}{2}(1+\\gamma_{5})\\psi\\). The Dirac algebra for 4-dimensional Euclidean spacetime is given by \\[\\{\\gamma_{\\mu},\\gamma_{\\nu}\\} = 2\\delta_{\\mu\\nu},\\quad\\gamma_{\\mu}=(\\gamma_{\\mu})^{\\dagger},\\] \\[\\gamma_{\\mu}\\gamma_{\\nu} = \\delta_{\\mu\\nu}-{\\rm i}\\sigma_{\\mu\\nu},\\quad\\sigma_{\\mu\\nu}={ \\frac{{\\rm i}}{2}}[\\gamma_{\\mu},\\gamma_{\\nu}],\\] \\[\\gamma_{5} = \\gamma_{1}\\gamma_{2}\\gamma_{3}\\gamma_{0}.\\] (A.1) Defining \\(O_{\\rm S}=1,\\,O_{\\rm V}=\\gamma_{\\mu},\\,O_{\\rm T}=\\frac{1}{\\sqrt{2}}\\sigma_{ \\mu\\nu},\\,O_{\\rm A}={\\rm i}\\gamma_{\\mu}\\gamma_{5},\\,O_{\\rm P}=\\gamma_{5}\\), we obtain the Fierz identities in the form \\[(\\bar{\\psi}_{a}O_{X}\\psi_{b})(\\bar{\\psi}_{c}O_{X}\\psi_{d})=\\sum_{Y}C_{XY}(\\bar {\\psi}_{a}O_{Y}\\psi_{d})(\\bar{\\psi}_{c}O_{Y}\\psi_{b}),\\] (A.2) where \\(X,Y=\\)S,V,T,A,P and \\[C_{XY}=\\left(\\begin{array}{ccccc}-\\frac{1}{4}&-\\frac{1}{4}&-\\frac{1}{4}&-\\frac{1}{4}&-\\frac{1}{4}\\\\ -1&\\frac{1}{2}&0&-\\frac{1}{2}&1\\\\ -\\frac{3}{2}&0&\\frac{1}{2}&0&-\\frac{3}{2}\\\\ -1&-\\frac{1}{2}&0&\\frac{1}{2}&1\\\\ -\\frac{1}{4}&\\frac{1}{4}&-\\frac{1}{4}&\\frac{1}{4}&-\\frac{1}{4}\\end{array} \\right).\\] (A.3) The structure \\((\\bar{\\psi}O_{\\rm V}\\psi)^{2}-(\\bar{\\psi}O_{\\rm A}\\psi)^{2}\\) is invariant under Fierz transformations, and \\((\\bar{\\psi}O_{\\rm V}\\psi)^{2}+(\\bar{\\psi}O_{\\rm A}\\psi)^{2}\\) can be completely transformed into (pseudo-)scalar channels: \\[(\\bar{\\psi}O_{\\rm V}\\psi)^{2}+(\\bar{\\psi}O_{\\rm A}\\psi)^{2}=-2[(\\bar{\\psi}O_{ \\rm S}\\psi)^{2}-(\\bar{\\psi}O_{\\rm P}\\psi)^{2}].\\] (A.4) Further useful identities are \\[(\\bar{\\psi}O_{\\rm T}\\gamma_{5}\\psi)^{2} = (\\bar{\\psi}O_{\\rm T}\\psi)^{2},\\] \\[(\\bar{\\psi}\\gamma_{\\alpha}\\gamma_{\\beta}\\gamma_{\\delta}\\psi)( \\bar{\\psi}\\gamma_{\\alpha}\\gamma_{\\beta}\\gamma_{\\delta}\\psi) = 10(\\bar{\\psi}\\gamma_{\\mu}\\psi)^{2}+6(\\bar{\\psi}\\gamma_{\\mu} \\gamma_{5}\\psi)^{2},\\] \\[(\\bar{\\psi}\\gamma_{\\alpha}\\gamma_{\\beta}\\gamma_{\\delta}\\psi)( \\bar{\\psi}\\gamma_{\\delta}\\gamma_{\\beta}\\gamma_{\\alpha}\\psi) = 10(\\bar{\\psi}\\gamma_{\\mu}\\psi)^{2}-6(\\bar{\\psi}\\gamma_{\\mu} \\gamma_{5}\\psi)^{2}.\\] (A.5)
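The consistency of \\(C_{XY}\\), including the completed first row quoted in (A.3), can be verified numerically: applying the rearrangement (A.2) twice must return the original structures, i.e. \\(C^{2}=1\\); the combination \\((V)^{2}-(A)^{2}\\) must be left invariant; and \\((V)^{2}+(A)^{2}\\) must map onto \\(-2[(S)^{2}-(P)^{2}]\\) as in Eq. (A.4). A minimal numpy sketch:

```python
import numpy as np

# Fierz matrix C_XY of Eq. (A.3) in the basis (S, V, T, A, P)
C = np.array([[-1/4, -1/4, -1/4, -1/4, -1/4],
              [-1,    1/2,  0,   -1/2,  1  ],
              [-3/2,  0,    1/2,  0,   -3/2],
              [-1,   -1/2,  0,    1/2,  1  ],
              [-1/4,  1/4, -1/4,  1/4, -1/4]])

# applying the rearrangement twice gives back the original structures
print(np.allclose(C @ C, np.eye(5)))   # True

# coefficient vectors transform with C^T
vmin = np.array([0, 1, 0, -1, 0])      # (V)^2 - (A)^2
vpl  = np.array([0, 1, 0,  1, 0])      # (V)^2 + (A)^2
print(C.T @ vmin)   # [ 0.  1.  0. -1.  0.]  -> Fierz invariant
print(C.T @ vpl)    # [-2.  0.  0.  0.  2.]  -> -2[(S)^2 - (P)^2], Eq. (A.4)
```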
## Appendix B Details of the fluctuation matrix

In Eq. (9), we decompose the fluctuation matrix into \\({\\cal P}\\) and \\({\\cal F}\\), the latter containing the field dependence. The inverse propagator is diagonal in momentum space, \\[{\\cal P}\\!=\\!\\!\\left(\\begin{array}{ccccc}q^{2}(1+r_{B})&&&&\\\\ &0&Z_{\\phi,k}q^{2}(1+\\!r_{B})+\\bar{m}_{k}^{2}&&\\\\ &Z_{\\phi,k}q^{2}(1+\\!r_{B})+\\bar{m}_{k}^{2}&0&&\\\\ &&&0&-\\not{q}^{T}(1+r_{\\rm F})\\\\ &&&-\\not{q}(1+r_{\\rm F})&0\\end{array}\\right)\\!.\\] (B.1) It involves the dimensionless cutoff functions \\(r_{B}\\) and \\(r_{\\rm F}\\), which are related to the components of \\(R_{k}\\) by \\[R_{k}^{A}=q^{2}r_{B},\\quad R_{k}^{\\phi}=Z_{\\phi,k}q^{2}r_{B},\\quad R _{k}^{\\psi}=-\\not{q}\\,r_{\\rm F}.\\] (B.2) Of course, these cutoff functions are supposed to satisfy the usual requirements of cutting off the infrared and suppressing the ultraviolet sufficiently strongly. The conventions for the Fourier transformation employed here can be characterized by \\[\\psi(x)=\\int\\frac{d^{4}q}{(2\\pi)^{4}}\\,{\\rm e}^{{\\rm i}qx}\\,\\psi( q),\\quad\\bar{\\psi}(x)=\\int\\frac{d^{4}q}{(2\\pi)^{4}}\\,{\\rm e}^{-{\\rm i}qx}\\, \\bar{\\psi}(q)\\] (B.3) for the fermions. As a consequence, the Fourier modes of the fields \\(\\Phi\\) and \\(\\Phi^{T}\\) in Eq. (8) are then given by \\(\\Phi(q)=(A(q),\\phi(q),\\phi^{*}(-q),\\psi(q),\\bar{\\psi}^{T}(-q))\\) (column vector) and \\(\\Phi^{T}(-q)=(A^{T}(-q),\\phi(-q),\\phi^{*}(q),\\psi^{T}(-q),\\bar{\\psi}(q))\\) (row vector). Owing to the sign difference in the arguments of \\(\\psi\\) and \\(\\bar{\\psi}\\), the inverse propagator \\({\\cal P}\\) in Eq. (B.1) is symmetric under transposition \\(T\\). Concerning the field-dependent part, the matrix \\({\\cal F}\\) is also diagonal in momentum space for constant "background" fields and antisymmetric under transposition in all fermion-related components: \\[{\\cal F} = \\left(\\begin{array}{ccccc}0&0&0&-e\\bar{\\psi}\\gamma_{\\mu}&e\\psi^ {T}\\gamma_{\\mu}^{T}\\\\ 0&0&0&\\bar{h}_{k}\\bar{\\psi}_{\\rm R}&-\\bar{h}_{k}\\psi_{\\rm L}^{T}\\\\ 0&0&0&-\\bar{h}_{k}\\bar{\\psi}_{\\rm L}&\\bar{h}_{k}\\psi_{\\rm R}^{T}\\\\ e\\gamma_{\\mu}^{T}\\bar{\\psi}^{T}&-\\bar{h}_{k}\\bar{\\psi}_{\\rm R}^{T}&\\bar{h}_{k }\\bar{\\psi}_{\\rm L}^{T}&\\bar{H}&-F^{T}\\\\ -e\\gamma_{\\mu}\\psi&\\bar{h}_{k}\\psi_{\\rm L}&-\\bar{h}_{k}\\psi_{\\rm R}&F&H\\end{array} \\right),\\] (B.4) where \\[H = -\\bar{\\lambda}_{\\sigma,k}\\big{[}\\psi\\psi^{T}-\\gamma_{5}\\psi\\psi^ {T}\\gamma_{5}\\big{]},\\quad H^{T}=-H,\\] \\[\\bar{H} = -\\bar{\\lambda}_{\\sigma,k}\\big{[}\\bar{\\psi}^{T}\\bar{\\psi}-\\gamma_ {5}\\bar{\\psi}^{T}\\bar{\\psi}\\gamma_{5}\\big{]},\\quad\\bar{H}^{T}=-\\bar{H},\\] (B.5) \\[F = \\bar{h}_{k}(P_{\\rm L}\\phi-P_{\\rm R}\\phi^{*})+\\bar{\\lambda}_{ \\sigma,k}\\big{[}(\\bar{\\psi}\\psi)-\\gamma_{5}(\\bar{\\psi}\\gamma_{5}\\psi)+\\psi\\bar {\\psi}-\\gamma_{5}\\psi\\bar{\\psi}\\gamma_{5}\\big{]},\\] and \\(\\gamma_{\\mu}^{T}\\) is understood as transposition in Lorentz and/or Dirac space. The projectors \\(P_{\\rm L}\\) and \\(P_{\\rm R}\\) are defined as \\(P_{\\rm L,R}=(1/2)(1\\pm\\gamma_{5})\\). In Eq. (B.5), we have dropped the \\(A_{\\mu}\\) dependence of the quantity \\(F\\), which is not needed for our computation.
## Appendix C Exact flow equation for flowing field variables

In the standard formulation of the flow equation [2], the field variables of the \\(k\\)-dependent effective action \\(\\Gamma_{k}[\\phi]\\) correspond to the so-called classical field defined via \\[\\phi=\\frac{\\delta W_{k}[j]}{\\delta j}\\equiv\\phi_{\\Lambda},\\] (C.1) where all explicit \\(k\\) dependence is contained in the cutoff dependence of \\(W_{k}\\), the generating functional for connected Green's functions. The last identity in Eq. (C.1) symbolizes that no explicit \\(k\\) dependence occurs for this classical field, and thereby the field \\(\\phi\\) at any scale is identical to the one at the ultraviolet cutoff \\(\\Lambda\\). (The functional dependence of \\(\\phi\\) on \\(j\\) contains, of course, an implicit \\(k\\) dependence.) In the present work, we would like to study the flow of the effective action, now depending on a field variable that is allowed to vary during the flow. For an infinitesimal change of \\(k\\), \\(\\phi_{k}\\) also varies infinitesimally: \\[\\phi_{k-dk}(q)=\\phi_{k}(q)+\\delta\\alpha_{k}(q)\\,F[\\phi_{k},\\dots](q),\\quad \\partial_{k}\\phi_{k}=-\\partial_{k}\\alpha_{k}\\,F[\\phi_{k},\\dots],\\] (C.2) where \\(\\delta\\alpha_{k}\\) is infinitesimal and \\(F\\) denotes some functional of possibly all fields of the system. The desired effective action \\(\\Gamma_{k}[\\phi_{k}]\\) is derived from a modified functional \\(W_{k}\\): \\[\\mathrm{e}^{W_{k}[j,\\dots]}=\\int\\mathcal{D}\\chi\\,\\mathcal{D}(\\dots)\\,\\mathrm{e }^{-S[\\chi]-\\Delta S_{k}[\\chi_{k}]+\\int j\\chi_{k}+\\dots}.\\] (C.3) The dots again indicate the contributions of further fields, suppressed in the following, and we assume the quantum field \\(\\chi\\) to be a real scalar for simplicity. In contrast to the common formulation [2], the source \\(j\\) multiplies a \\(k\\)-dependent nonlinear field combination \\(\\chi_{k}\\) which obeys \\[\\partial_{k}\\chi_{k}=-\\partial_{k}\\alpha_{k}\\,G[\\chi_{k},\\dots].\\] (C.4) We also modify the infrared cutoff, \\[\\Delta S_{k}[\\chi_{k}]=\\frac{1}{2}\\int\\chi_{k}R_{k}\\chi_{k},\\] (C.5) which ensures that the momentum modes \\(\\sim k\\) of the actual field \\(\\chi_{k}\\) contribute to the flow at the scale \\(k\\), regardless of its different form at other scales. Furthermore, the cutoff form of Eq. (C.5) shall lead us to a simple form of the flow equation. The \\(k\\)-dependent classical field is given by \\[\\phi_{k}:=\\langle\\chi_{k}\\rangle=\\frac{\\delta W_{k}}{\\delta j},\\] (C.6) and, as a consequence, the higher derivatives of \\(W_{k}[j]\\) are now related to correlation functions of \\(\\chi_{k}\\) and no longer of \\(\\chi_{\\Lambda}\\).3 Footnote 3: Eq. (C.6) implies the relation \\(F[\\phi_{k},\\dots]=\\langle G[\\chi_{k},\\dots]\\rangle\\). However, the definition of \\(\\chi_{k}\\) is often not needed explicitly. For our purposes it suffices to define \\(F[\\phi_{k},\\dots]\\). The desired effective action is finally defined in the usual way via a Legendre transformation including a subtraction of the cutoff:
\\[\\Gamma_{k}[\\phi_{k}]=-W_{k}\\big{[}j[\\phi_{k}]\\big{]}+\\int j[\\phi_{k}]\\,\\phi_{k }-\\Delta S_{k}[\\phi_{k}].\\] (C.7) Its flow equation is obtained by taking a derivative with respect to the RG scale \\(k\\), \\[\\partial_{k}\\Gamma_{k}[\\phi_{k}] = \\partial_{k}\\Gamma_{k}[\\phi_{k}]\\big{|}_{\\phi_{k}}+\\int\\frac{ \\delta\\Gamma_{k}[\\phi_{k}]}{\\delta\\phi_{k}}\\,\\partial_{k}\\phi_{k}\\] (C.8) \\[= \\frac{1}{2}{\\rm Tr}\\,\\frac{\\partial_{k}R_{k}}{\\Gamma_{k}^{(2)}[ \\phi_{k}]+R_{k}}-\\int\\frac{\\delta\\Gamma_{k}[\\phi_{k}]}{\\delta\\phi_{k}}\\,F[\\phi_ {k},\\dots]\\,\\partial_{k}\\alpha_{k}.\\] The first term of this flow equation is evaluated for fixed \\(\\phi_{k}\\) and hence leads to the form of the standard flow equation with \\(\\phi_{\\Lambda}\\) replaced by \\(\\phi_{k}\\); the second term describes the contribution arising from the variation of the field variable under the flow. Some comments should be made:

1) The variation (C.2) of the field during the flow is a priori arbitrary; therefore, Eq. (C.8) (together with some boundary conditions) determines \\(\\Gamma_{k}[\\phi_{k}]\\) completely only if \\(\\alpha_{k}\\) is fixed.

2) This redundancy can be used to arrive at a simple form for \\(\\Gamma_{k}[\\phi_{k}]\\) adapted to the problem under consideration. For example, one may determine \\(\\alpha_{k}\\) (and \\(F[\\phi_{k},\\dots]\\)) in such a way that some unwanted coupling vanishes.

3) This program can be generalized straightforwardly to a whole set of transformations \\(\\alpha_{k}^{i}\\) for different fields \\(i\\). Furthermore, the whole functional dependence may be \\(k\\) dependent by replacing \\(\\partial_{k}\\phi_{k}^{i}=-\\partial_{k}\\alpha_{k}^{i}F^{i}\\to-\\hat{\\cal F}_{k}^ {i}\\).

4) The generating functional of 1PI Green's functions of \\(\\phi_{\\Lambda}\\), \\(\\Gamma_{k=0}[\\phi_{\\Lambda}]\\), can be obtained from \\(\\Gamma_{k=0}[\\phi_{k=0}]\\) by choosing \\(\\alpha_{k=0}=0\\). In practice, however, it is often more convenient to use "macroscopic degrees of freedom" \\(\\phi_{k=0}\\) different from the "microscopic" ones \\(\\phi_{\\Lambda}\\). Their respective relation then needs the computation of the flow of \\(\\alpha_{k}\\).

5) The present definition of the average action \\(\\Gamma_{k}[\\phi_{k}]\\) is different from the effective action \\(\\Gamma_{k}[\\hat{\\phi}_{k}]\\) that is obtained by a field transformation of the flow equation with fixed fields, as described in appendix D. More precisely, consider the flow of the effective action \\(\\Gamma_{k}[\\phi_{\\Lambda}]\\) for fixed \\(\\phi_{\\Lambda}\\) and perform a finite \\(k\\)-dependent field transformation \\(\\hat{\\phi}_{k}=\\hat{\\phi}_{k}[\\phi,\\alpha_{k}]\\); then, even if the transformation were chosen in such a way that \\(\\hat{\\phi}_{k}\\) were identical with \\(\\phi_{k}\\) of the present method, these effective actions would not coincide. The cutoff term acts differently in the two cases. In the case of a field transformation, the cutoff involves \\(\\phi_{\\Lambda}\\), which is subsequently expressed in terms of the new variables, whereas, in the present case, the cutoff is readjusted at each scale and involves \\(\\chi_{k}\\). Although this does not affect physical results for exact solutions of the flow, it might lead to differences in approximate solutions, even if the approximation is implemented in the same way in either case.
## Appendix D Fermion-boson translation by field transformations with fixed cutoff

Here, we shall present a third approach to fermion-boson translation relying on the standard formulation of the flow equation in addition to a finite field transformation. We intend to identify a field transformation of the type \\[\\hat{\\phi} = \\phi+\\hat{\\alpha}_{k}\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\quad\\Longleftrightarrow\\quad\\phi=\\hat{\\phi}-\\hat{\\alpha}_{k}\\bar{\\psi}_{\\rm L}\\psi_{\\rm R},\\] \\[\\hat{\\phi}^{*} = \\phi^{*}-\\hat{\\alpha}_{k}\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\quad\\Longleftrightarrow\\quad\\phi^{*}=\\hat{\\phi}^{*}+\\hat{\\alpha}_{k}\\bar{\\psi}_{\\rm R}\\psi_{\\rm L},\\] (D.1) so that an appropriate choice of a finite \\(\\hat{\\alpha}_{k}\\) can transform the four-fermion coupling to zero. For simplicity, we work in the limit of a point-like interaction and dispense with an additional transformation of the type \\(\\sim\\beta_{k}\\,\\phi_{k}\\). Within these restrictions, we shall not find the physical infrared behavior described in Sects. 4 and 5. The present study is intended only for a quantitative comparison of the different approaches, which can be done by restricting the field redefinitions in Sect. 4 to Eq. (28) with \\(q\\)-independent \\(\\alpha_{k}\\).

In contrast to the modified flow equation of Sect. 4 and appendix C, the source term and the infrared cutoff considered here involve the original fields. This approach therefore corresponds simply to a variable transformation in a given differential equation (the exact flow equation). The transformed effective action for the hatted fields is obtained by simple insertion, \\(\\Gamma_{k}[\\hat{\\phi},\\psi,A]:=\\Gamma_{k}[\\phi[\\hat{\\phi}],\\psi,A]\\). Except for additional derivative terms arising from the scalar kinetic term, the two actions are formally equivalent, where the new \"hatted\" couplings read in terms of the original ones \\[\\hat{m}_{k}^{2} = \\bar{m}_{k}^{2},\\] \\[\\hat{h}_{k} = \\bar{h}_{k}+\\bar{m}_{k}^{2}\\hat{\\alpha}_{k},\\] (D.2) \\[\\hat{\\lambda}_{\\sigma,k} = \\bar{\\lambda}_{\\sigma,k}-\\bar{h}_{k}\\hat{\\alpha}_{k}-\\frac{1}{2}\\bar{m}_{k}^{2}\\hat{\\alpha}_{k}^{2}.\\] Again, the transformation function \\(\\hat{\\alpha}_{k}\\) is finally fixed by demanding that the beta function \\(\\hat{\\beta}_{\\lambda_{\\sigma}}\\) for the hatted four-fermion coupling \\(\\hat{\\lambda}_{\\sigma,k}\\) vanishes, \\[\\hat{\\beta}_{\\lambda_{\\sigma}}(\\hat{m}_{k}^{2},\\hat{h}_{k},\\hat{\\lambda}_{\\sigma,k},\\hat{\\alpha}_{k},\\partial_{t}\\hat{\\alpha}_{k})=0,\\] (D.3) with the boundary conditions \\(\\bar{\\lambda}_{\\sigma,k=\\Lambda}=0\\) and \\(\\hat{\\alpha}_{k=\\Lambda}=0\\), which express complete bosonization at \\(\\Lambda\\) (this also implies \\(\\hat{\\lambda}_{\\sigma,k=\\Lambda}=0\\)). The new beta functions can now be determined from the standard flow equation, being subject to the field transformation.
Following appendix A of [5], the basic equation is \\[\\partial_{t}\\Gamma_{k}[\\hat{\\Phi}] = \\partial_{t}\\Gamma_{k}\\big{|}_{\\Phi}-\\partial_{t}\\hat{\\phi}^{*}\\big{|}_{\\Phi}\\frac{\\delta}{\\delta\\hat{\\phi}^{*}}\\Gamma_{k}[\\hat{\\Phi}]-\\partial_{t}\\hat{\\phi}\\big{|}_{\\Phi}\\frac{\\delta}{\\delta\\hat{\\phi}}\\Gamma_{k}[\\hat{\\Phi}]\\] (D.4) \\[= \\frac{1}{2}\\,{\\rm STr}\\,\\tilde{\\partial}_{t}\\,\\ln\\!\\left(\\Gamma_{k}^{(2)}+R_{k}\\right)-\\hat{\\alpha}_{k}\\left[\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\frac{\\delta\\Gamma_{k}}{\\delta\\hat{\\phi}}-\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\frac{\\delta\\Gamma_{k}}{\\delta\\hat{\\phi}^{*}}\\right].\\] Although there seems to be a formal resemblance to Eq. (30), there is an important difference: Eq. (D.4) is equivalent to the standard flow equation, whereas Eq. (30) is not; the latter is derived with a different cutoff term! Without resorting to the calculation of Sect. 2, we can evaluate this equation completely from the transformed truncation \\(\\Gamma_{k}[\\hat{\\phi},\\psi,A]\\) and the field transformations (D.1) according to \\[\\left(\\Gamma_{k}^{(2)}\\right)_{ab}^{T} \\equiv \\frac{\\overrightarrow{\\delta}}{\\delta\\Phi_{a}^{T}}\\Gamma_{k}\\frac{\\overleftarrow{\\delta}}{\\delta\\Phi_{b}}\\] \\[= \\left(\\frac{\\overrightarrow{\\delta}}{\\delta\\Phi_{a}^{T}}\\hat{\\Phi}_{i}^{T}\\right)\\frac{\\overrightarrow{\\delta}}{\\delta\\hat{\\Phi}_{i}^{T}}\\,\\Gamma_{k}\\,\\frac{\\overleftarrow{\\delta}}{\\delta\\hat{\\Phi}_{j}}\\left(\\hat{\\Phi}_{j}\\frac{\\overleftarrow{\\delta}}{\\delta\\Phi_{b}}\\right)+(-1)^{(\\hat{\\Phi},\\Phi^{T})}\\left(\\Gamma_{k}\\,\\frac{\\overleftarrow{\\delta}}{\\delta\\hat{\\Phi}_{i}}\\right)\\left(\\frac{\\overrightarrow{\\delta}}{\\delta\\Phi_{a}^{T}}\\,\\hat{\\Phi}_{i}\\,\\frac{\\overleftarrow{\\delta}}{\\delta\\Phi_{b}}\\right),\\] (D.5) where \\((\\Phi_{l},\\Phi_{m})=1\\) iff fermionic components in \\(\\Phi_{l}\\) as well as \\(\\Phi_{m}\\) are considered, and \\((\\Phi_{l},\\Phi_{m})=0\\) otherwise; the indices \\(a,b,i,j\\) label the different field components of \\(\\Phi,\\hat{\\Phi}\\). From Eq. (D.4), or equivalently Eq. (D.2), we deduce that the desired hatted beta functions are related to the original ones by \\[\\begin{array}{rcl}\\partial_{t}\\hat{m}_{k}^{2}\\equiv&\\hat{\\beta}_{m}&=&\\beta_{m},\\\\ \\partial_{t}\\hat{h}_{k}\\equiv&\\hat{\\beta}_{h}&=&\\beta_{h}+\\hat{\\alpha}_{k}\\beta_{m}+\\hat{m}_{k}^{2}\\partial_{t}\\hat{\\alpha}_{k},\\\\ \\partial_{t}\\hat{\\lambda}_{\\sigma,k}\\equiv\\hat{\\beta}_{\\lambda_{\\sigma}}&=&\\beta_{\\lambda_{\\sigma}}-\\hat{\\alpha}_{k}\\beta_{h}-\\frac{1}{2}\\hat{\\alpha}_{k}^{2}\\beta_{m}-\\hat{h}_{k}\\partial_{t}\\hat{\\alpha}_{k},\\end{array}\\] (D.6) where the right-hand sides of Eq. (D.6) have to be expressed in terms of the hatted couplings by means of the relations (D.2). Now we determine \\(\\hat{\\alpha}_{k}\\) by demanding that \\(\\hat{\\beta}_{\\lambda_{\\sigma}}\\) vanishes for vanishing \\(\\hat{\\lambda}_{\\sigma,k}\\), so that no four-fermion coupling arises during the flow.
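This closing step can be made concrete: setting the third relation of Eq. (D.6) to zero at vanishing \\(\\hat{\\lambda}_{\\sigma,k}\\) is a purely algebraic condition for \\(\\partial_{t}\\hat{\\alpha}_{k}\\). A minimal symbolic sketch (Python/SymPy; the variable names are ours, not the paper's notation):

```python
import sympy as sp

# Couplings and beta functions of the original formulation (illustrative names);
# h stands for the hatted Yukawa coupling \hat{h}_k of Eq. (D.6)
alpha, dalpha = sp.symbols('alpha_k dalpha_t')   # \hat{alpha}_k and its t-derivative
h, beta_m, beta_h, beta_lam = sp.symbols('h_k beta_m beta_h beta_lambda')

# Third relation of Eq. (D.6): hatted beta function of the four-fermion coupling
beta_lam_hat = beta_lam - alpha*beta_h - sp.Rational(1, 2)*alpha**2*beta_m - h*dalpha

# Demanding that it vanishes fixes the flow of the transformation function
flow_of_alpha = sp.solve(sp.Eq(beta_lam_hat, 0), dalpha)[0]
print(flow_of_alpha)   # -> (beta_lambda - alpha_k*beta_h - alpha_k**2*beta_m/2)/h_k
```

The result, \\(\\partial_{t}\\hat{\\alpha}_{k}=(\\beta_{\\lambda_{\\sigma}}-\\hat{\\alpha}_{k}\\beta_{h}-\\frac{1}{2}\\hat{\\alpha}_{k}^{2}\\beta_{m})/\\hat{h}_{k}\\), is what feeds the explicit flow equations given below.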
Introducing dimensionless quantities for the hatted couplings, \\(\\tilde{\\alpha}_{k}=k^{2}Z_{\\phi,k}^{1/2}\\hat{\\alpha}_{k}\\), \\(\\epsilon_{k}=k^{-2}Z_{\\phi,k}^{-1}\\hat{m}_{k}^{2}\\), \\(h_{k}=Z_{\\phi,k}^{-1/2}\\hat{h}_{k}\\), we end up with the flow equations \\[\\begin{array}{rcl}\\partial_{t}\\epsilon_{k}&=&-2\\epsilon_{k}+\\frac{1}{8\\pi^{2}}(h_{k}-\\epsilon_{k}\\tilde{\\alpha}_{k})^{2},\\\\ \\partial_{t}(h_{k}-\\epsilon_{k}\\tilde{\\alpha}_{k})&=&\\left[-\\frac{e^{2}}{2\\pi^{2}}-\\frac{1}{4\\pi^{2}}\\tilde{\\alpha}_{k}(h_{k}-\\frac{1}{2}\\epsilon_{k}\\tilde{\\alpha}_{k})\\right](h_{k}-\\epsilon_{k}\\tilde{\\alpha}_{k}),\\\\ \\partial_{t}\\tilde{\\alpha}_{k}&=& 2\\tilde{\\alpha}_{k}-\\frac{9}{8\\pi^{2}}\\frac{e^{4}}{h_{k}}-\\frac{1}{2\\pi^{2}}\\,e^{2}\\,\\tilde{\\alpha}_{k}+\\frac{1}{16\\pi^{2}}(h_{k}-2\\epsilon_{k}\\tilde{\\alpha}_{k}+\\frac{\\epsilon_{k}^{2}\\,\\tilde{\\alpha}_{k}^{2}}{2}\\tilde{h}_{k})\\tilde{\\alpha}_{k}\\\\ &&+\\frac{1}{8\\pi^{2}}\\frac{2+\\epsilon_{k}}{(1+\\epsilon_{k})^{2}}\\frac{1}{h_{k}}(h_{k}-\\frac{1}{2}\\epsilon_{k}\\tilde{\\alpha}_{k})(h_{k}-\\epsilon_{k}\\tilde{\\alpha}_{k})^{2}\\tilde{\\alpha}_{k}\\\\ &&+\\frac{1}{32\\pi^{2}}\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}\\frac{1}{h_{k}}(h_{k}-\\epsilon_{k}\\tilde{\\alpha}_{k})^{4},\\end{array}\\] (D.7) where we have inserted the threshold-function values as given in appendix E for illustrative purposes. These equations have to be read side by side with Eqs. (39) and (46). Contrary to the latter, the present flow equations are completely coupled; in particular, the flow of \\(\\tilde{\\alpha}_{k}\\) is not disentangled as it is in the case of Eqs. (39) and (46).

In the flow equation for the mass, we again observe a critical mass-to-Yukawa-coupling ratio at the bosonization scale, corresponding to the infrared unstable fixed point \\(\\tilde{\\epsilon}_{1}^{*}\\) mentioned in Eq. (22): from a numerical solution, we find that \\(\\tilde{\\epsilon}_{\\Lambda}|_{\\rm crit}=\\epsilon_{\\Lambda}/h_{\\Lambda}^{2}|_{\\rm crit}\\simeq\\tilde{\\epsilon}_{1}^{*}\\) is hardly influenced by the \\(\\bar{h}_{k}^{4}\\) term. The actual initial value of this ratio at the bosonization scale with respect to \\(\\tilde{\\epsilon}_{\\Lambda}|_{\\rm crit}\\) hence determines whether or not the system flows towards the phase with dynamical symmetry breaking.

In order to compare the present method with the one employed in Sect. 4, we plot a numerical solution of Eqs. (D.7) in Fig. 3 (solid lines) and compare it to a solution of the corresponding equations (39) and (46) (dashed lines) without those terms arising from the additional transformation \\(\\sim\\partial_{t}\\beta_{k}\\), which is not considered in Eqs. (D.7). In this figure, it becomes apparent that the two methods agree not only qualitatively but also quantitatively to a high degree - as they should. The minor differences between the approaches can be attributed to the different formulation of the cutoff, and thereby reflect the inherent cutoff dependence of approximate solutions to the otherwise exact flow equation.

The same conclusion can be drawn from the flow equation for the dimensionless combination \\(\\tilde{\\epsilon}_{k}\\) as defined in Eq. (20). Although the \\(\\tilde{\\epsilon}_{k}\\) flow equation derived from Eqs. (D.7) is rather lengthy (we shall not write it down here) and not identical to Eq. (21), the fixed-point structure nevertheless remains the same, and the \\(\\tilde{\\epsilon}_{k}\\) flow reduces exactly to Eq.
(21) for \\(k\\to\\Lambda\\), where all our approaches agree. Moreover, the position of the infrared stable fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\) also remains the same in the infrared to leading order in \\(e\\), so that the different approaches describe the same physics.

To summarize, employing the method of field transformation in the flow equation for fixed cutoff, the same properties of the system can be derived with similar numerical accuracy as with the flow equation proposed in Sect. 4 and appendix C. However, the structure of the resulting flow equations derived in this appendix appears to be more involved, and we expect this to be a generic feature of field transformations in the flow equation for fixed cutoff - at least within the usual approximation schemes.

Figure 3: Flows of \\(\\epsilon_{k}\\), \\(h_{k}\\) and \\(\\tilde{\\alpha}_{k}\\) in the symmetric phase (\\(h_{\\Lambda}=1\\), \\(e=1\\), \\(\\epsilon_{\\Lambda}=1.16\\cdot[1/(16\\pi^{2})]\\)). The solid lines represent a solution to Eqs. (D.7); the dashed lines correspond to the analogous flow employing the method of Sect. 4 and appendix C (without the \\(\\sim\\partial_{t}\\beta_{k}\\) transformation). The plots are representative for a wide range of initial conditions.

## Appendix E Cutoff Functions

For concrete computations, we have to specify the cutoff functions. Here we shall use the optimized cutoff functions proposed in [14], which furnish fast convergence behavior and provide simple analytical expressions. Employing the nomenclature of [10], we use the dimensionless cutoff functions (\\(y=q^{2}/k^{2}\\)) \\[r_{B}(y) = \\left(\\frac{1}{y}-1\\right)\\theta(1-y),\\quad p(y)=y(1+r_{B}(y))=y+(1-y)\\,\\theta(1-y),\\] \\[r_{\\rm F}(y) = \\left(\\frac{1}{\\sqrt{y}}-1\\right)\\theta(1-y),\\quad p_{\\rm F}(y)=y(1+r_{\\rm F}(y))^{2}\\to p(y).\\] (E.1) Here we have set the normalization constants \\(c_{\\rm B}\\) and \\(c_{\\rm F}\\) mentioned in [14] to the values \\(c_{\\rm B}=1/2\\) and \\(c_{\\rm F}=1/4\\), so that fermionic and bosonic fluctuations are cut off at the same momentum scale \\(q^{2}=k^{2}\\). This is natural in our case in order to avoid the situation in which fermionic modes which are already integrated out are transformed into bosonic modes which still have to be integrated out, or vice versa. For these cutoff functions, the required threshold functions evaluate to \\[l_{n}^{(F)\\,d}(\\omega) = (\\delta_{n,0}+n)\\frac{2}{d}\\frac{1}{(1+\\omega)^{n+1}},\\] (E.2) \\[l_{n_{1},n_{2}}^{(FB)\\,d}(\\omega_{1},\\omega_{2}) = \\frac{2}{d}\\frac{1}{(1+\\omega_{1})^{n_{1}}(1+\\omega_{2})^{n_{2}}}\\left[\\frac{n_{1}}{1+\\omega_{1}}+\\frac{n_{2}}{1+\\omega_{2}}\\right].\\] (E.3)

## Acknowledgment

H.G. would like to thank D.F. Litim for discussions on optimized cutoff functions and acknowledges financial support by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-1.

## References

* [1] V. A. Miransky, M. Tanabashi and K. Yamawaki, Phys. Lett. B **221**, 177 (1989); Mod. Phys. Lett. A **4**, 1043 (1989); W. A. Bardeen, C. T. Hill and M. Lindner, Phys. Rev. D **41**, 1647 (1990).
* [2] C. Wetterich, Phys. Lett. B **301**, 90 (1993); Nucl. Phys. B **352**, 529 (1991); Z. Phys. C **48**, 693 (1990).
* [3] C. Wetterich, Phys. Lett. B **462**, 164 (1999) [hep-th/9906062]; hep-ph/0008150, to appear in Phys. Rev. D.
* [4] U. Ellwanger and C. Wetterich, Nucl. Phys. B **423**, 137 (1994) [hep-ph/9402221]; D. U. Jungnickel and C. Wetterich, in \"The Exact Renormalization Group\", eds. A.
Krasnitz, Y. Kubyshin, R. Potting and P. Sa, World Scientific, Singapore (1999) [hep-ph/9902316].
* [5] C. Wetterich, Z. Phys. C **72**, 139 (1996) [hep-ph/9604227].
* [6] J. I. Latorre and T. R. Morris, JHEP **0011**, 004 (2000) [hep-th/0008123].
* [7] Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122**, 345 (1961); _ibid._ **124**, 246 (1961).
* [8] V. A. Miransky, _\"Dynamical symmetry breaking in quantum field theories\"_, World Scientific, Singapore (1993).
* [9] M. Reenders, Phys. Rev. D **62**, 025001 (2000) [hep-th/9908158].
* [10] D. U. Jungnickel and C. Wetterich, Phys. Rev. D **53**, 5142 (1996) [hep-ph/9505267].
* [11] C. N. Leung, S. T. Love and W. A. Bardeen, Nucl. Phys. B **273**, 649 (1986).
* [12] K. Aoki, K. Morikawa, J. Sumi, H. Terao and M. Tomoyose, Prog. Theor. Phys. **97**, 479 (1997) [hep-ph/9612459].
* [13] for a recent review of various topics of the gauged NJL model, see M. Reenders, hep-th/9906034.
* [14] D. F. Litim, Phys. Lett. B **486**, 92 (2000) [hep-th/0005245]; hep-th/0103195.
CERN-TH/2001-196 HD-THEP-01-31

**Renormalization Flow of Bound States**

Holger Gies\\({}^{a}\\) and Christof Wetterich\\({}^{b}\\)

\\({}^{a}\\) _CERN, Theory Division, CH-1211 Geneva 23, Switzerland_ _E-mail: [email protected]_

\\({}^{b}\\) _Institut fur theoretische Physik, Universitat Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany_ _E-mail: [email protected]_

A renormalization group flow equation with a scale-dependent transformation of field variables gives a unified description of fundamental and composite degrees of freedom. In the context of the effective average action, we study the renormalization flow of scalar bound states which are formed out of fundamental fermions. We use the gauged Nambu-Jona-Lasinio model at weak gauge coupling as an example. Thereby, the notions of bound state or fundamental particle become scale dependent, being classified by the fixed-point structure of the flow of effective couplings.
# Relativistic effects in the solar EOS

A. Bonanno\\({}^{1}\\), A.L. Murabito\\({}^{2}\\) and L. Paterno\\({}^{2}\\)

\\({}^{1}\\) Osservatorio Astrofisico di Catania, Citta Universitaria, I-95123 Catania, Italy
\\({}^{2}\\) Dipartimento di Fisica e Astronomia dell'Universita, Sezione Astrofisica, Citta Universitaria, I-95123 Catania, Italy

Received 18 April 2001 / Accepted 14 June 2001

## 1 Introduction

It has recently been shown (Elliott & Kosovichev 1998) that the inclusion of relativistic effects in the equation of state (EOS) leads to a very good agreement between the solar models and the seismic Sun. In particular, the inversions of SOI-MDI/SOHO \\(p\\)-mode frequencies for the adiabatic exponent \\(\\Gamma_{1}\\) show that the MHD EOS reproduces the interior of the Sun with great accuracy, when the relativistic contribution to the Fermi-Dirac statistics is included. It is thus interesting to approach the same problem by means of a forward analysis, by comparing the theoretical eigenfrequencies with the observed ones. Unfortunately this method is not directly applicable since our description of the outer layers of the Sun is still far from complete and many theoretical uncertainties would influence our conclusions. However, since such small effects in the solar EOS are most important only in the deep interior, it is possible to make use of the acoustic mode frequency small separation diagnostic, \\(\\delta\\nu_{\\ell,n}=\\nu_{\\ell,n}-\\nu_{\\ell+2,n-1}\\), for spherical harmonic degrees \\(\\ell=0,1\\) and radial order \\(n\\gg\\ell\\) (Tassoul 1980). The main property of this quantity is that it is strongly sensitive to the sound speed gradient near the solar centre while it is weakly dependent on the details of the treatment of the outer layers. Since the relativistic effects manifest themselves mainly through a depletion of \\(0.1\\%-0.2\\%\\) of the adiabatic index \\(\\Gamma_{1}\\), we expect a quantitatively similar change of the sound speed gradient in the solar core.

The acoustic mode frequency small separation analysis has recently been used for estimating the seismic age of the Sun (Dziembowski _et al._ 1998) and the related implications of the uncertainties in the S\\({}_{11}\\) astrophysical factor determinations (Bonanno & Paterno 2001). Here we show that the above-mentioned analysis can also be used to verify how the different physical characteristics of the MHD and OPAL EOS reflect on the accuracy of the description of the stratification of the internal layers of the Sun. On performing a \\(\\chi^{2}\\) analysis of the latest published GOLF/SOHO data for different solar models, we confirm the main conclusion of Elliott & Kosovichev (1998), based on an inversion analysis, that the inclusion of the relativistic effects in the EOS is in any case required to improve the accuracy of solar models, independent of which EOS is used.

## 2 The solar model

In our analysis we used the GARching SOlar Model (GARSOM) code, which has been described in detail in Schlattl _et al._ (1997). It includes the latest OPAL opacities and either the OPAL or the MHD EOS, and it takes into account the microscopic diffusion of the elements heavier than hydrogen.
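Since the whole analysis rests on the small separations \\(\\delta\\nu_{\\ell,n}\\) defined above, it may help to sketch how they would be computed once a table of model or observed mode frequencies is available. The snippet below is a minimal illustration (Python); the frequency table and its values are hypothetical, not the GOLF data:

```python
import numpy as np

def small_separations(nu, ell, n_min, n_max):
    """Small separations delta_nu_{ell,n} = nu_{ell,n} - nu_{ell+2,n-1}.

    nu : dict mapping (ell, n) -> mode frequency in microHz (hypothetical table)
    """
    return np.array([nu[(ell, n)] - nu[(ell + 2, n - 1)]
                     for n in range(n_min, n_max + 1)])

# Made-up, roughly solar-like numbers for two radial orders:
nu = {(0, 20): 2898.0, (2, 19): 2889.0, (0, 21): 3034.0, (2, 20): 3024.5}
print(small_separations(nu, ell=0, n_min=20, n_max=21))   # ~9-10 microHz
```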
Our standard solar model has been verified in detail in Turck-Chieze _et al._ (1998) and found in good agreement with other up-to-date solar models; in particular, it is consistent with the observed L\\({}_{\\odot}\\) and T\\({}_{\\rm eff}\\) within \\(10^{-4}\\), at an age of 4.60 Gy, adopting the surface value \\(\\rm Z/X=0.0245\\). We then included the leading relativistic correction term to the adiabatic index \\(\\Gamma_{1}\\), derived from the relativistic evaluation of the Fermi-Dirac integrals of the EOS in the solar core, by means of the expression (Elliott & Kosovichev 1998): \\[\\frac{\\delta\\Gamma_{1}}{\\Gamma_{1}}\\simeq-\\widetilde{\\rm T}\\,\\frac{2+2{\\rm X}}{3+5{\\rm X}} \\tag{1}\\] where \\(\\widetilde{\\rm T}\\) is a dimensionless temperature in units of m\\({}_{\\rm e}\\)c\\({}^{2}/k\\), with m\\({}_{\\rm e}\\) the electron mass, \\(c\\) the light speed in vacuum, \\(k\\) the Boltzmann constant, and X the hydrogen abundance by mass. As expected, the correction to \\(\\Gamma_{1}\\) is negative, namely \\(\\Gamma_{1,{\\rm rel}}<\\Gamma_{1,{\\rm nr}}\\), since \\(\\Gamma_{1}\\) tends to shift from the non-relativistic value of 5/3 to the extremely relativistic one of 4/3. The corresponding relativistic corrections to the leading terms for the sound speed, \\(c_{s}\\), and density, \\(\\varrho\\), are respectively: \\[\\frac{\\delta c_{s}}{c_{s}}\\simeq\\frac{1}{2}\\,\\frac{\\delta\\Gamma_{1}}{\\Gamma_{1}}-\\frac{15}{64\\sqrt{2}}\\,\\widetilde{\\rm T}{\\rm e}^{\\psi} \\tag{2}\\] and \\[\\frac{\\delta\\varrho}{\\varrho}\\simeq\\frac{15}{8}\\,\\widetilde{\\rm T}\\left(1+\\frac{{\\rm e}^{\\psi}}{4\\sqrt{2}}\\right) \\tag{3}\\] where \\(\\psi\\) is the degeneracy parameter, which is about -1.14 at the Sun's centre and decreases noticeably toward the surface, the partial degeneracy being completely removed at 0.4 R\\({}_{\\odot}\\). The behaviour, as functions of the fractional radius, of the relative differences between the quantities \\(\\Gamma_{1}\\), \\(c_{s}\\) and \\(\\varrho\\) calculated with relativistic corrections and without them is shown in Fig. 1. The term \\(-(15/64\\sqrt{2})\\widetilde{\\rm T}{\\rm e}^{\\psi}=\\delta{\\rm P}/{\\rm P}-\\delta\\varrho/\\varrho\\) in Eq.(2) is negligible with respect to \\(\\delta\\Gamma_{1}/\\Gamma_{1}\\), indicating that the relativistic corrections to the pressure, P, and density, \\(\\varrho\\), cancel each other almost completely and the correction to \\(c_{s}\\) is entirely dominated by the correction to \\(\\Gamma_{1}\\). Also the term \\({\\rm e}^{\\psi}/4\\sqrt{2}\\) in Eq.(3) is negligible with respect to unity, indicating that in the solar case the coupling between degeneracy and relativistic effects is weak.

## 3 Results with GOLF/SOHO data

We used the latest GOLF/SOHO data for \\(\\ell=0,1,2,3\\) obtained with long time series and by taking into account the asymmetric line profile in data reduction (Thiery _et al._ 2000). In particular, we determined the acoustic mode small spacing difference \\(\\delta\\nu_{\\ell,{\\rm n}}\\) for \\(\\ell=0,1\\) and \\({\\rm n}\\gg\\ell\\) for our solar model, and studied the difference \\(\\delta\\nu_{\\rm i,n,\\odot}-\\delta\\nu_{\\rm i,n,model}\\) between data and model.
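A short numerical sketch of the corrections in Eqs. (1)-(3) may be useful (Python; only the value \\(\\psi\\simeq-1.14\\) is quoted in the text, so the central temperature and hydrogen abundance used below are indicative solar-core numbers of ours):

```python
import numpy as np

k_B, m_e, c = 1.380649e-23, 9.1093837e-31, 2.99792458e8   # SI constants

def relativistic_corrections(T, X, psi):
    """Leading relativistic corrections of Eqs. (1)-(3).
    T in K, X = hydrogen mass fraction, psi = degeneracy parameter."""
    T_tilde = k_B * T / (m_e * c**2)                 # dimensionless temperature
    dG1 = -T_tilde * (2 + 2*X) / (3 + 5*X)           # Eq. (1): delta Gamma_1/Gamma_1
    dcs = 0.5*dG1 - 15.0/(64.0*np.sqrt(2.0)) * T_tilde * np.exp(psi)   # Eq. (2)
    drho = 15.0/8.0 * T_tilde * (1 + np.exp(psi)/(4*np.sqrt(2.0)))     # Eq. (3)
    return dG1, dcs, drho

# Indicative solar-centre values: T ~ 1.57e7 K, X ~ 0.34, psi ~ -1.14
print(relativistic_corrections(1.57e7, 0.34, -1.14))
# delta Gamma_1/Gamma_1 ~ -1.5e-3, i.e. the 0.1%-0.2% depletion quoted above
```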
We then constructed the two \\(\\chi^{2}\\) indicators (Dziembowski _et al._ 1998, Schlattl _et al._ 1997) \\[\\chi^{2}_{i}=\\frac{1}{{\\rm M}-{\\rm m}+1}\\sum_{\\rm n=m}^{\\rm M}\\frac{(\\delta\\nu_{\\rm i,n,\\odot}-\\delta\\nu_{\\rm i,n,model})^{2}}{\\sigma^{2}_{i,{\\rm n}}+\\sigma^{2}_{2+i,{\\rm n}-1}} \\tag{4}\\] where \\(i\\) stands for \\(\\ell=0,1\\), m = 10 and M = 26. Fig. 2 and Fig. 3 show the behaviour of the terms in the sum defined in Eq.(4) in the non-relativistic and relativistic cases for the MHD and OPAL EOS respectively. The difference between the relativistic and non-relativistic cases is larger for \\(\\ell=0\\) in the frequency range between 2000 and 2500 \\(\\mu\\)Hz, and for \\(\\ell=1\\) between 2500 and 3000 \\(\\mu\\)Hz. The \\(\\chi^{2}\\) results are shown in Table 1, where it is possible to note that the models with relativistic corrections have rather smaller \\(\\chi^{2}\\) values and there is no significant difference between \\(\\chi^{2}_{0}\\) and \\(\\chi^{2}_{1}\\) calculated for the OPAL and MHD EOS. However, the MHD EOS appears to be slightly favoured with respect to the OPAL EOS.

Figure 1: Behaviour, as functions of the fractional radius, of the relative differences between relativistic and non-relativistic quantities \\(\\delta{\\rm y}/{\\rm y}=({\\rm y_{rel}}-{\\rm y_{nr}})/{\\rm y_{nr}}\\), where the \\({\\rm y}\\)s stand for \\(\\Gamma_{1}\\) (continuous line), \\(c_{s}\\) (dashed line), and \\(\\varrho\\) (dashed-dotted line) respectively.

Figure 2: Relativistic (continuous line) and non-relativistic (dashed line) contribution to the \\(\\chi^{2}\\) calculation for the MHD EOS.

## 4 Conclusions

Our results show that the acoustic mode frequency small separations are sensitive to the inclusion of the relativistic effects. It would be interesting to discuss the relevance of these effects in the helioseismic determination of the solar age and the related problems with the S\\({}_{11}\\) uncertainties. We plan to address this issue in a forthcoming communication.

###### Acknowledgements.

We are most grateful to H. Schlattl for useful discussions during the preparation of the manuscript.

## References

* Bonanno A., Paterno L., 2001, Mem. Soc. Astron. Ital., in press
* Dziembowski W.A., Fiorentini G., Ricci B., Sienkiewicz R., 1999, A&A 343, 990
* Elliott J.R., Kosovichev A.G., 1998, ApJ 500, L199
* Schlattl H., Weiss A., Ludwig H.G., 1997, A&A 200, L5
* Tassoul M., 1980, ApJS 43, 469
* Thiery S., Boumier P., Gabriel A.H., et al., 2000, A&A 355, 743
* Turck-Chieze S., Basu S., Bertomieu G., et al., 1998, ESA SP-418, p.555

\\begin{table} \\begin{tabular}{l c c c c} \\hline EOS & \\(\\chi^{2}_{0}\\)(NR) & \\(\\chi^{2}_{0}\\)(REL) & \\(\\chi^{2}_{1}\\)(NR) & \\(\\chi^{2}_{1}\\)(REL) \\\\ \\hline MHD & 1.73 & 1.41 & 2.13 & 1.67 \\\\ OPAL & 1.91 & 1.41 & 2.32 & 1.94 \\\\ \\hline \\end{tabular} \\end{table}

Table 1: \\(\\chi^{2}\\) results in the non-relativistic (NR) and relativistic (REL) cases for the MHD and OPAL EOS.

Figure 3: Relativistic (continuous line) and non-relativistic (dashed line) contribution to the \\(\\chi^{2}\\) calculation for the OPAL EOS.
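For completeness, the \\(\\chi^{2}\\) indicator of Eq. (4) is simple to evaluate once the observed and model small separations and the frequency errors are available; a minimal sketch follows (Python; all numbers below are fabricated for illustration, not the GOLF data):

```python
import numpy as np

def chi2_small_sep(dnu_obs, dnu_model, sigma_l, sigma_l2):
    """Eq. (4): mean chi^2 of the small separations over n = m..M.
    sigma_l, sigma_l2 : 1-sigma errors of nu_{l,n} and nu_{l+2,n-1}."""
    return np.mean((dnu_obs - dnu_model)**2 / (sigma_l**2 + sigma_l2**2))

rng = np.random.default_rng(0)
dnu_model = np.linspace(12.0, 8.0, 17)              # n = 10..26, in microHz
sig1 = sig2 = np.full(17, 0.1)                      # illustrative errors
dnu_obs = dnu_model + rng.normal(0.0, 0.15, 17)     # fake "observations"
print(chi2_small_sep(dnu_obs, dnu_model, sig1, sig2))   # O(1) by construction
```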
We study the sensitivity of the sound speed to relativistic corrections of the equation of state (EOS) in the standard solar model by means of a helioseismic forward analysis. We use the latest GOLF/SOHO data for \\(\\ell=0,1,2,3\\) modes to confirm that the inclusion of the relativistic corrections to the adiabatic exponent \\(\\Gamma_{1}\\) computed from both the OPAL and MHD EOS leads to a more reliable theoretical modelling of the innermost layers of the Sun.

Sun: interior - Sun: oscillations - Equation of state

06(06.09.1; 06.15.1; 02.18.8)
# Percolation in real Wildfires

Guido Caldarelli\\({}^{1}\\), Raffaella Frondoni\\({}^{2}\\), Andrea Gabrielli\\({}^{1}\\), Marco Montori\\({}^{1}\\), Rebecca Retzlaff\\({}^{4}\\) and Carlo Ricotta\\({}^{3}\\)

\\({}^{1}\\) Sezione INFM di Roma1 and Dipartimento di Fisica, Universita \"La Sapienza\", P.le A. Moro 2, 00185 Roma, Italy
\\({}^{2}\\) Department of Geography, University of Cambridge, Downing Place, Cambridge CB2 3EN, UK
\\({}^{3}\\) Dipartimento di Biologia Vegetale, Universita di Roma \"La Sapienza\", P.le A. Moro 2, 00185 Roma, Italy
\\({}^{4}\\) Remote Sensing Department, University of Trier, Behringstrasse, 54286 Trier, Germany

PACS: 05.45.Df Fractals - 89.60.Ec Environmental safety - 89.75.Fb Structures and organization in complex systems

In recent times, the introduction of satellite imaging has facilitated the coarse-scale analysis of wildfires [1]. Inspired by the self-similar aspect of the fire scars, we want here to provide an explanation for this lack of a characteristic length scale. We want to link wildfire spreading with the evolution of diffusive systems whose scale invariance has been widely analyzed [2]. In particular we focus here on the simplest example of a fractal growth model, the model of percolation [3]. Percolation has been extensively studied, and it has proved to be extremely successful in explaining some of the statistical properties of several propagation phenomena, ranging from polymer gelation [4] to superconductors [5]. In this paper we present some evidence that percolation models could be fruitfully applied to describe properties of wildfires [6]. In particular, the dynamical version of percolation, known as Dynamical Percolation (DyP), may provide effective insights into fire control. We based our analysis on the comparison between several statistical properties of wildfires and those of percolation clusters. We report here the measures of the fractal dimension of areas, _accessible_ perimeters and hulls (defined as the set of the most external sites of the cluster) [3], along with the lacunarity (i.e. the void distribution inside the cluster). As a result, we can conclude that, within the error bars, the statistical properties of wildfires can be accurately described by a self-organized version of DyP [7].

The data set shown here consists of Landsat TM satellite imagery (30 m \\(\\times\\) 30 m ground resolution) of wildfires, acquired respectively: over the Biferno valley (Italy) in August 1988; over the Serrania Baja de Cuenca (Spain) in July 1994; and over mount Penteli (Greece) in July 1995. In all cases the image was acquired a few days after the fire. The burnt surfaces were respectively 58, 60 and 156 square kilometers. Bands TM3 (red), TM4 (near infrared) and TM5 (mid infrared) of the post-fire subscene are classified using an unsupervised algorithm and 8 _classes_ [8].
This means that in the above three bands any pixel of the image is characterised by a value related to the luminosity of that area. By clustering those values into _classes_ one can describe different types of soil, and in particular the absence or presence of vegetation. In particular, the maps of post-fire areas have been transformed into binary maps where black corresponds to burned areas. These maps are shown in Fig.1. In order to quantify the possible scale invariance, we measure the following properties: **(1)** the fractal dimension of the burned area; **(2)** the fractal dimension of the _accessible_ perimeter; **(3)** the fractal dimension of the hull (defined as the set of burned sites on the boundary of the system); **(4)** the variance of the relative point density fluctuations (i.e. a measure of the lacunarity of the system). We compare these values with the corresponding ones of self-organized Dynamical Percolation, which we believe represents the phenomenon.

To measure fractal dimensions we apply two different methods: the average mass-length relation and the box counting method. The box counting method is performed by overlapping a grid of size \\(r\\) over the data set and counting the number \\(N(r)\\) of boxes occupied by the cluster at the scale \\(r\\). For fractal objects \\(N(r)\\propto r^{-D_{f}}\\) (for \\(r\\to 0\\)) where \\(D_{f}\\) is the fractal dimension. The average mass-length relation measures the average number \\(M(r)\\) of points of the data set within distance \\(r\\) of any other point of the data set itself. This can be achieved by measuring \\(M(r)\\) in a circle of radius \\(r\\) centered around a point of the system. For scale-free objects \\(M(r)\\propto r^{D_{f}}\\) (for \\(r\\rightarrow\\infty\\)). To avoid any bias in the result, circles should be fully included in the cluster. The two methods gave equal results within the error bars. In general this means that \\(D_{f}\\) is a well-defined property of the system. The results for the box counting are shown in Fig.2 and summarized in Table 1. This table also reports the exact values that characterize critical percolation clusters. All wildfire measures but the hull are in very good agreement with the percolation data. We believe that this peculiar behaviour of the hull may depend on the coarse resolution of the remotely sensed burnt area (30 m), resulting in a kind of Grossman-Aharony effect [9] which reduces the hull of the critical percolation cluster to the accessible perimeter. Consequently, the hull fractal dimension (equal to 7/4) is reduced to the fractal dimension of the accessible perimeter (equal to 4/3). This effect can be induced by a different redefinition of the connectivity criterion on the hull sites. In this case in particular we refer to the studies of Refs. [10, 11], where a smaller than expected hull exponent is related to the low resolution of the image with respect to the characteristic distance of the percolation process (see also Ref. [12]). To test this assumption, a critical percolation cluster computer simulation with \\(3\\times 10^{5}\\) sites was undertaken. Results show that, whereas most statistical properties do not change, the hull tends to behave as the accessible perimeter if a coarse-graining procedure is applied in such a way as to reduce the resolution between first and second neighbors (see Fig.3). The last measure we perform is the computation of the variance \\(\\sigma(r)\\) of the normalized point density fluctuations of the burnt sites, i.e.
\\(\\sigma(r)=\\sqrt{<M(r)^{2}>/<M(r)>^{2}-1}\\). Generally \\(\\sigma(r)\\) is a function of the radius \\(r\\); for a \"simple\" fractal, the variance \\(\\sigma(r)\\) is a constant \\(\\sigma_{intrinsic}\\). At large scale \\(r\\), for values near the spatial extension of the data set, the measure of statistical quantities (\\(M(r)\\), \\(\\sigma(r)\\), etc.) is affected by finite size effects. Such effects produce a decrease of \\(\\sigma(r)\\) for increasing values of \\(r\\). As we can see in Fig.2**d**, the measures of \\(\\sigma(r)\\) versus \\(r\\) both for the fire data and for the computer-simulated percolation cluster are in good agreement. Moreover, they fit the values reported in Ref. [13], where the same quantity is estimated for an ordinary percolation cluster. The value \\(\\sigma_{intrinsic}^{2}\\) can be considered as a measure of the morphology of a fractal data set. The larger \\(\\sigma_{intrinsic}^{2}\\) is, the larger is the probability that the fractal set has large voids. This is evident from Fig.2, where the variance has lower values for the wildfires with smaller voids (sets **b** and **c**). As a last remark on the data interpretation, we checked that the fuel load distribution before the fires was rather uniform in the analyzed areas. Therefore, we can exclude that the fractal properties of fire depend on the pre-fire vegetation distribution. Nevertheless, additional work is underway to quantify the effects of the pre-fire fuel load distribution on fire behaviour.

From the above data analysis it seems that percolating clusters could describe reasonably well the process of fire spreading. Unfortunately, in the original formulation percolation is a static model where one considers sites on a lattice that can be selected with a certain probability \\(p\\). If \\(p=1\\) all points are selected and there are plenty of spanning paths in the system. When \\(p=0\\) no point is selected and there is no way to form a spanning path. By increasing \\(p\\) step by step from zero, small clusters of connected areas are generated, until, for a particular value of the probability \\(p=p_{c}\\) called the _percolation threshold_, a part of the small clusters coalesces and forms a spanning cluster. Even if most of the properties measured in real fires are reproduced by percolation, we need a model whose dynamics could mimic in a reasonable way that of the wildfires. We then propose here to use the Self-Organised version of Dynamical Percolation. Dynamical percolation [14] was introduced to study the propagation of epidemics in a population and its definition is the following: each site of a square lattice can be in one of three possible states: (i) ignited sites, (ii) green sites susceptible to ignition in the future, and (iii) immune sites (i.e. burned sites not susceptible to re-ignition). At time \\(t=0\\) a localized seed of ignited sites is located at the center of an otherwise empty (green) lattice. The dynamics proceeds in discrete steps either by parallel or by sequential updating as follows: at each time-step every ignited site can ignite a (green) randomly chosen neighbor with probability \\(p\\) or, alternatively, burn completely and become immune to re-ignition with complementary probability \\(1-p\\). Any system state with no burning site is an _absorbing configuration_, i.e., a configuration in which the system is trapped and from which it cannot escape [15, 16].
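As an illustration of the box-counting estimate of \\(D_{f}\\) described above, a minimal sketch follows (Python; the input is a hypothetical binary burnt-area mask, and the grid sizes are arbitrary choices of ours):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate D_f from N(r) ~ r^{-D_f} on a 2-D boolean map (True = burnt)."""
    counts = []
    for r in sizes:
        ny, nx = (mask.shape[0] // r) * r, (mask.shape[1] // r) * r
        blocks = mask[:ny, :nx].reshape(ny // r, r, nx // r, r)
        counts.append(blocks.any(axis=(1, 3)).sum())   # occupied boxes at scale r
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return -slope

# Sanity check: a filled square should give D_f close to 2
print(box_counting_dimension(np.ones((256, 256), dtype=bool)))
```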
It is clear that, depending on the value of \\(p\\), the fire generated by the initial ignited seed will either spread in the lattice (for large values of \\(p\\)) or die out (for small values of \\(p\\)). The two previous phases are divided at the percolation threshold \\(p_{c}\\), where the fire propagates marginally, leaving behind a fractal cluster of immunized sites. Interestingly, it can be shown using field theoretical tools that this is a critical percolation cluster [14]. In this way we have a dynamical model which, at criticality, reproduces the (static) properties of standard percolation. Clearly this model presents extensive fractal properties only if \\(p=p_{c}\\). The tuning of this parameter exactly to \\(p_{c}\\) is however quite unlikely. For that reason we present here a _self-organized_ version of this model, assuming a time-dependent form for the ignition probability \\(p(t)\\) decreasing from an initial value \\(p_{0}>p_{c}\\) with time constant \\(\\tau\\) (e.g. \\(p_{0}\\exp(-t/\\tau)\\) or \\(p_{0}/[1+(t/\\tau)^{n}]\\)). In the context of fires, \\(p_{0}\\) represents the initial \"force\" of the fire, and \\(\\tau\\) is its characteristic duration. This observation comes from experience in fire control: even without human activity, fires eventually stop. It is then fair to introduce a fire extinction probability that increases with time. Fire will then invade new regions and will be able to continue until the percolation probability is larger than or equal to the critical value \\(p_{c}\\). This peculiar process is also able to reproduce in a qualitative way the features of the fire clusters. Indeed the fire will grow almost in a compact way at the beginning, leaving a fractal boundary at the end of the activity. Let us suppose that the dynamics starts at \\(p_{0}>p_{c}\\). At the beginning the dynamics is the same as that of DyP with constant \\(p>p_{c}\\), i.e. the ignited region is quite compact, leaving only small holes of vegetation. However, as time passes \\(p(t)\\) decreases and the diameter of the islands in the burning cluster increases. Finally, after a certain time proportional to \\(\\tau\\), one has \\(p(t)<p_{c}\\), and the dynamics then arrests _spontaneously_ within a few time-steps. In particular one can see that if \\(\\tau\\gg 1\\), at the arrest time \\(t_{f}\\), \\(p(t_{f})\\simeq p_{c}\\). Therefore, the geometrical features of the final burnt cluster become more and more irregular (fractal) going towards the hull, the decreasing \\(p(t)\\) acting as an effective spatial (radial) probability of ignition. One can show that the final hull and the accessible perimeter have the same fractal dimensions as for ordinary percolation. However, this fractality extends only up to a characteristic scale \\(\\xi\\sim\\tau^{\\alpha_{\\xi}}\\) with \\(\\alpha_{\\xi}=1/D_{h}\\). \\(\\xi\\) also gives the characteristic scale of the voids near the hull, which are the largest in the cluster. Moreover, \\(p_{c}-p(t_{f})\\sim\\tau^{-\\alpha_{p}}\\) where \\(\\alpha_{p}=(D_{h}-1)/D_{h}\\). In a few words, the hull presents the main features found in another static percolation model known as Gradient Percolation [17]. We believe that these properties of the self-organized DyP explain why in the largest analyzed wildfires (i.e. those starting with a larger \\(p_{0}\\) or a larger \\(\\tau\\)) we observe an effective increase of the global fractal dimension \\(D_{f}\\) towards 2 and the appearance of large voids only near the hull.
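A minimal simulation sketch of this self-organized variant is given below (Python). The lattice size, \\(p_{0}\\), \\(\\tau\\) and the periodic borders are illustrative choices of ours, not fitted to the fire data, and the precise \\(p_{c}\\) of this particular update rule would have to be located numerically:

```python
import numpy as np

def self_organized_dyp(L=201, p0=0.7, tau=200.0, seed=0):
    """Self-organized Dynamical Percolation with p(t) = p0*exp(-t/tau).
    States: 0 = green, 1 = burning, 2 = immune (burnt, not re-ignitable)."""
    rng = np.random.default_rng(seed)
    moves = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    grid = np.zeros((L, L), dtype=np.int8)
    grid[L // 2, L // 2] = 1                     # localized seed
    t = 0
    while (grid == 1).any():
        p = p0 * np.exp(-t / tau)
        burning = np.argwhere(grid == 1)
        for i, j in burning[rng.permutation(len(burning))]:
            if rng.random() < p:                 # try to ignite a random neighbor
                di, dj = moves[rng.integers(4)]
                ni, nj = (i + di) % L, (j + dj) % L
                if grid[ni, nj] == 0:
                    grid[ni, nj] = 1
            else:                                # burn out and become immune
                grid[i, j] = 2
        t += 1
    return grid == 2                             # final burnt cluster

print(self_organized_dyp().sum(), "burnt sites")
```

As in the text, the run grows almost compactly while \\(p(t)>p_{c}\\) and arrests spontaneously soon after \\(p(t)\\) drops below threshold.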
Instead, for the smallest ones one can think that \\(p_{0}\\) is too near \\(p_{c}\\) (with respect to the value of \\(\\tau\\)) to realize the spatial gradient of the ignition probability. This would result in fire clusters as large as \\(\\xi\\), and hence with the same features as ordinary clusters of percolation near criticality. The importance of such a result lies in the particular growth dynamics shown by DyP. As pointed out in Refs. [7, 15], DyP grows mainly by selecting sites newly added to the system. If this applies also to the external boundaries of a wildfire, one could in principle focus the activities of fire control where the fire invasion is fastest.

Here it is also worth discussing the features of the so-called \"forest fire model\" (see [18] for a complete review of the model). In this model, at successive time steps trees (sites) are removed through simple rules of ignition from nearest neighbours or by burning through external lightning. At the same time new trees grow on the empty sites left by the fire. This model not only presents the unrealistic assumption of fast re-growing trees in the system, but also produces almost compact clusters of wildfires that fail to reproduce the statistical properties we observe in the data. We believe that, despite the name, this model fails to reproduce the behaviour of real wildfires, representing instead a nice statistical model to show the properties of Self-Organised systems.

In conclusion, we present here some of the statistical properties of wildland fires. Results indicate that the cluster formed by fire shows, at least on its boundaries, well-defined fractal properties strictly related to percolation at criticality. In particular we believe that the most suitable model to describe fire dynamics is a self-organized version of Dynamical Percolation. Unfortunately, due to the coarse spatial and temporal resolution of available satellites, it is very difficult to check the dynamical properties of the forest fires. Nevertheless, the very good agreement between DyP and the real data and the similar evolution of growth suggest that DyP could indeed represent a suitable model in most cases. The assignment of random probability values to the links between sites, which is performed in the model construction, can effectively model the broad-scale cumulative effect of interacting features (terrain, vegetation, etc.). Since the dynamical properties of DyP have been extensively studied, recognition of DyP dynamics for fire spread has important consequences for fire control. This should focus on the latest zones attacked by fire, since the zones left behind have a small probability of keeping the fire alive. It is indeed a well-known result that DyP grows through the most recent areas entered into the system.

This work has been supported by the EU Contract No. FMRXCT980183. This work was also partially supported by the LUCIFER (Land Use Change Interactions with Mediterranean Landscapes) project of the European Union (ENV-CT96-0320).

## References

* [1] C. Ricotta, R. Retzlaff, International Journal of Remote Sensing **21**, 2113 (2000).
* [2] L. Pietronero, E. Tosatti eds., _Fractals in Physics_, North Holland, Amsterdam (1985).
* [3] D. Stauffer, A. Aharony, _Introduction to Percolation Theory_, Taylor and Francis, London (1991).
* [4] P. J. Flory, J. Am. Chem. Soc. **63**, 3083, 3091, 3906 (1941).
* [5] J. M. Normand, H. J. Herrmann, Int. Journ. Mod. Phys. C **1**, 207 (1990).
* [6] G. Albinet, G. Searby, and D. Stauffer, J. Physique **47**, 1 (1986); T. Beer, I. G.
Enting, Mathl. Comput. Modelling **13**, 77 (1990).
* [7] J. L. Cardy, J. Phys. A: Math. Gen. **16**, L709 (1983); J. L. Cardy and P. Grassberger, J. Phys. A: Math. Gen. **18**, L267 (1985).
* [8] To evaluate the robustness of the results, the classification has been repeated with different algorithms and numbers of classes (ranging from 6 to 10). The mapping of the fire scar was mostly unchanged, proving the high robustness of the spectral signature of fire. Images of pre- and post-fire areas were then processed to conform to each other. The transformation was based on measurements of ten ground control points that could be located unambiguously on both images. The maximum RMS error was 0.31 (around 10 meters); the mean RMSE was 0.17.
* [9] T. Grossman and A. Aharony, J. Phys. A **20**, L1193 (1987).
* [10] M. Kolb, Phys. Rev. A **41**, 5725 (1990).
* [11] M. Kolb, M. Rosso, Phys. Rev. E **47**, 3081 (1993).
* [12] A. Gabrielli, A. Baldassarri, and B. Sapoval, Phys. Rev. E **62**, 3103 (2000).
* [13] R. C. Ball, G. Caldarelli, A. Flammini, Phys. Rev. Lett. **85**, 5134 (2000).
* [14] H. K. Janssen, Z. Phys. B **58**, 311 (1985).
* [15] A. Gabrielli, M. A. Munoz, and B. Sapoval, _Self-organized field theory of dynamical etching_, in preparation.
* [16] G. Grinstein and M. A. Munoz, _The Statistical Mechanics of Systems with Absorbing States_, in \"Fourth Granada Lectures in Computational Physics\", edited by P. Garrido and J. Marro, Lecture Notes in Physics, Vol. 493 (Springer, Berlin 1997), p. 223, and references therein.
* [17] B. Sapoval, M. Rosso, and J. F. Gouyet, J. Phys. Lett. (Paris) **46**, L149 (1985); M. Rosso, J. F. Gouyet, and B. Sapoval, Phys. Rev. B **32**, 6053 (1985).
* [18] H. Jensen, _Self-Organized Criticality_, Cambridge University Press, Cambridge (1998).

\\begin{table} \\begin{tabular}{|c|c|c|c|c|} \\hline & **a** Biferno & **b** Penteli & **c** Cuenca & **d** DyP \\\\ \\hline \\(D_{f}\\) & 1.90(5) & 1.93(5) & 1.95(5) & 91/48 \\\\ \\(D_{h}\\) & 1.30(5) & 1.32(5) & 1.31(4) & 7/4 \\\\ \\(D_{p}\\) & 1.30(5) & 1.33(4) & 1.34(5) & 4/3 \\\\ \\(\\mathcal{L}(1)\\) & 0.037(3) & 0.036(3) & 0.034(3) & 0.040(2) \\\\ \\hline \\end{tabular} \\end{table}

Table 1: _Fractal dimensions for the data and for the DyP model. Exact values for DyP are computed on hierarchical lattices._

Figure 1: Binary map of the burnt areas **(a)** for the valley of Biferno, **(b)** for the Penteli wildfire, **(c)** for the Cuenca wildfire. In **(d)** we plot a cluster of Self-Organised Dynamical Percolation whose dimensions are comparable with case **(c)**. Each pixel corresponds to an area of \\(900m^{2}\\).

Figure 3: Numerical simulation of a DyP cluster. In (a) we show the cluster, in (b) the hull of this cluster. In (c) we show the hull of the coarse-grained picture of the cluster in plot (a). The coarse-grained version is obtained by using cells with a linear size twice that of the original one. After only two steps the statistical properties of the hull become similar to those of the perimeter.
This paper focuses on the statistical properties of wild-land fires and, in particular, investigates whether the spread dynamics is related to a simple invasion model. The fractal dimension and lacunarity of three fire scars classified from satellite imagery are analysed. Results indicate that the burned clusters behave similarly to percolation clusters on their boundaries and look denser in their core. We show that Dynamical Percolation reproduces this behaviour and can help to describe the fire evolution. By mapping fire dynamics onto percolation models, the strategies for fire control might be improved.
# Some Statistical Physics Approaches for Trends and Predictions in Meteorology

Kristinka Ivanova\\({}^{1}\\), Marcel Ausloos\\({}^{2}\\), Thomas Ackerman\\({}^{3}\\), Hampton Shirer\\({}^{1}\\) and Eugene Clothiaux\\({}^{1}\\)

\\({}^{1}\\) Pennsylvania State University, University Park, PA 16802, USA
\\({}^{2}\\) SUPRAS & GRASP, B5, University of Liege, B-4000 Liege, Belgium
\\({}^{3}\\) Pacific Northwest National Laboratory, Richland, WA 99352, USA

## 1 Introduction

Earth's climate is determined by complex interactions between the sun, oceans, atmosphere, land and biosphere [1, 2]. The composition of the atmosphere is particularly important because certain gases, including water vapor, carbon dioxide, etc., absorb heat radiated from the Earth's surface. As the atmosphere warms, it in turn radiates heat back to the surface, which increases the earth's mean surface temperature by some 30 K above the value that would occur in the absence of a radiation-trapping atmosphere [1]. Perturbations in the concentration of these radiatively active gases alter the intensity of this effect on the earth's climate. Climate change, a major concern of everyone, is a focus of current atmospheric research. Understanding the processes and properties that affect atmospheric radiation and, in particular, the influence of clouds and the role of cloud radiative feedback, are issues of scientific interest. This leads to efforts to improve not only models of the earth's climate but also predictions of climate change [3, 4], whence weather prediction and climate models.

Lorenz's [5] famous pioneering work on chaotic systems using a simple set of nonlinear differential equations was motivated by considerations of weather prediction. However, predicting the results of complex nonlinear interactions that are taking place in an open system is a difficult task. Yet physicists have only the Navier-Stokes equations [6] at hand for describing fluid motion, in terms of such quantities as mass, pressure, temperature, humidity, velocity and energy exchange, whence for describing the variety of processes that take place in the atmosphere. Since controlled experiments cannot be performed on the climate system, we rely on the use of models to identify cause-and-effect relationships. It is also essential to concentrate on predicting the uncertainty in forecast models of weather and climate [7, 8].

Modeling the impact of clouds is difficult because of their complex and differing effects on weather and climate. Clouds can reflect incoming sunlight and, therefore, contribute to cooling, but they also absorb infrared radiation leaving the earth and contribute to warming. High cirrus clouds, for example, may have the impact of warming the atmosphere. Low-lying stratus clouds, which are frequently found over oceans, can contribute to cooling. In order to successfully model and predict climate, we must be able both to describe the effect of clouds in the current climate and to predict the complex chain of events that might modify the distribution and properties of clouds in an altered climate. Much attention has been paid recently [9] to the importance of the main substance of the atmosphere and clouds, water in its three forms (vapor, liquid and solid), for buffering the global temperature against reduced or increased solar heating [10]. Owing to its special properties, it is believed that water establishes lower and upper boundaries on how far the temperature can drift from current values.
The role of clouds and water vapor in climate change is not well understood; yet water vapor is the most abundant greenhouse gas and directly affects cloud cover and the propagation of radiant energy. In fact, there may be positive feedback between water vapor and other greenhouse gases. Carbon dioxide and other gases from human activities slightly warm the atmosphere, increasing its ability to hold water vapor. Increased water vapor can amplify the effect of an incremental increase of other greenhouse gases. Other studies suggest that the heliosphere influences the climate on Earth via a global mechanism that affects cloud cover [11, 12]. Surprisingly, the influence of solar variability is found to be strongest in low clouds (3 km), which points to a microphysical mechanism involving aerosol formation that is enhanced by ionization due to cosmic rays.

Beyond the scientifically sound and highly sophisticated computer models, there is still space for simple approaches based on standard statistical physics techniques and ideas, in particular the scaling hypothesis [13], phase transitions [14] and percolation theory aspects [15]. Analogies can be found between meteorological and other phenomena in social or natural science [16]. However, distinguishing cases and patterns due to \"external field\" influences from those due to self-organized criticality [17] is not obvious. The coupling between human activities and deterministic physics is hard to model in simple terms. There have been several reports that long-range power-law correlations can be extracted from apparently stochastic time series in meteorology [18; 19] and that multi-affine properties [20; 21], related to atmospheric turbulence [22], can be identified. The same type of investigation has already appeared and seems promising in atmospheric science. In the following we give a brief review of some statistical physics approaches for testing the scaling hypothesis in meteorology and for identifying the self-affine or multi-affine nature of atmospheric quantities. We apply useful numerical statistical techniques to real time data measurements; for illustration we have selected stratus clouds. Restricting ourselves to cloud physics and fractal geometry ideas leads to many questions, such as the perimeter-area relationship of rain and cloud areas [23], the fractal dimension of their shape or ground projection [24], or the modelization of fractally homogeneous turbulence [25]. The cloud inner structure, content, temperature, life time and effects on ground level phenomena or features are of constant interest and prone to physical modelisation [26]. Recently, we reported on long-range power-law correlations [27; 28] and multi-affine properties [29] of stratus cloud liquid water fluctuations.

### Techniques of time series analysis

The variety of systems that apparently display scaling properties ranges from base-pair correlations in DNA and inter-beat intervals of the human heart, to large, spatially extended geophysical processes, such as earthquakes, and signals produced by complex systems, such as financial indices in economics. The current paradigm is that these systems obey \"universal\" laws due to the underlying nonlinear dynamics and are independent of the microscopic details. Therefore one can attempt in meteorology to obtain characteristic quantities using the same modern statistical physics methods as in all of the other cases.
Whence we will focus on several techniques to describe the scaling properties of meteorological time series, like the Fourier power spectrum of the signal [30], the detrended fluctuation analysis (DFA) method [31] and its extension, the local DFA method [27], and multi-affine and singularity analysis [29; 32]. One can go beyond these methods using wavelet techniques [33] or Zipf diagrams [34; 35; 36]. The Fokker-Planck equation [37] for describing the liquid water path [38], which is studied here below, is also of interest.

## 2 Experimental techniques and data acquisition

Quantitative observations of the atmosphere are made in many different ways. Experimental/observational techniques to study the atmosphere rely on physical principles. One important type of observational technique is that of _remote sensing_, which depends on the detection of electromagnetic radiation emitted, scattered or transmitted by the atmosphere. The instruments can be placed on aircraft, on balloons or on the ground. Remote-sensing techniques can be divided into _passive_ and _active_ types. In passive remote sensing, the radiation measured is of natural origin, for example the thermal radiation emitted by the atmosphere, or solar radiation transmitted or scattered by the atmosphere. Most space-borne remote sensing methods are passive. In active remote sensing, a transmitter, e.g. a radar, is used to direct pulses of radiation into the atmosphere, where they are scattered by atmospheric molecules, aerosols or inhomogeneities in the atmospheric structure. Some of the scattered radiation is then detected by a receiver. Each of these techniques has its advantages and disadvantages. Remote sensing from satellites can give near-global coverage, but can provide only averaged values of the measured quantity over large regions, of the order of hundreds of kilometers in horizontal extent and several kilometers in the vertical direction. Satellite instruments are expensive to put into orbit and cannot usually be repaired if they fail. Ground-based radars can provide data with very high vertical resolution (by measuring small differences in the time delays of the return pulses), but only above the radar site. For illustrative purposes, we will use microwave radiometer data obtained from the Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) program [39] site located at the Southern Great Plains (SGP) central facility [40]. For a detailed presentation of other remote sensing techniques the reader can consult Andrews [1] and/or Rees [41].

In this study we focus on stratus cloud data. By comparison, the cumulus cloud scale is too small to be represented individually in today's numerical models [42]. Due to their relatively small sizes, cumulus clouds produce short time series when remote sensing measurements are applied. Therefore they are not particularly suitable for the techniques that are outlined in this report. However their role in the transport of heat, moisture and momentum must be considered in numerical models.

The data used in this study are the vertical column amounts of cloud liquid water that are retrieved from the radiances, recorded as brightness temperatures, measured with a Radiometrics Model WVR-1100 microwave radiometer at frequencies of 23.8 and 31.4 GHz [43, 44, 45]. The microwave radiometer is equipped with a Gaussian-lensed microwave antenna whose small-angle receiving cone is steered with a rotating flat mirror [40].
The microwave radiometer is located at the DOE ARM program SGP central facility and is operated in the vertically pointing mode. In this mode the radiometer makes sequential 1 s radiance measurements in each of the two channels while pointing vertically upward into the atmosphere. After collecting these radiances the radiometer mirror is rotated to view a blackbody reference target. For each of the two channels the radiometer records the radiance from the reference, immediately followed by a measurement of the combined radiance from the reference and a calibrated noise diode. This measurement cycle is repeated once every 20 s. A shorter measurement cycle does not necessarily lead to a larger number of independent samples. For example, clouds at 2 km altitude moving at 10 \\(\\rm m\\,s^{-1}\\) take 15 s to advect through a radiometer field-of-view of approximately \\(5^{\\circ}\\). Note that the 1 s sky radiance integration time ensures that the retrieved quantities correspond to a specific column of cloud above the instrument, as opposed to some longer time average of the cloud properties in the column above the instrument. The field of view of the microwave radiometer is \\(5.7^{\\circ}\\) at 23.8 GHz and \\(4.6^{\\circ}\\) at 31.4 GHz. Based on a standard model [43, 45] (see Appendix), the microwave radiometer measurements at the two frequency channels of 23.8 and 31.4 GHz are used to obtain time series of the liquid water path (LWP), which corresponds to the total amount of liquid water within the vertical column of the atmosphere that has been remotely sounded. The error of the liquid water retrieval is estimated to be less than about 0.005 \\(g/cm^{2}\\) [45]. The liquid water path data \\(y(t)\\) considered in this study were obtained on April 3-5, 1998 and are shown in Fig. 1a.

## 3 Nonstationarity and Spectral density

Fluctuations of the LWP signal \\(y(t)\\) (data in Fig. 1a) are plotted in Fig. 1b for the time interval equal to the discretization step of the data, i.e. \\(\\Delta t=20\\) sec. This time series is also called the small-scale gradient field. Other values of the time interval used to study fluctuations of a signal can be of interest in searching for changes in the type and strength of the correlations [46]. This approach will not be pursued here.

Figure 1: **(a)** Time dependence of the liquid water path as obtained at the ARM Southern Great Plains site with a time resolution of 20 s during the period from April 3 to 5, 1998. The time series contains \\(N=10740\\) data points. On the x-axis, t=24 h marks midnight on April 3, t=48 h corresponds to midnight on April 4 and t=72 h corresponds to midnight on April 5, 1998. **(b)** Small-scale gradient field of the LWP signal, i.e. fluctuations of LWP for a time interval equal to the discretization step of the measurements.

One approach to test the type of the LWP fluctuations is to estimate the nonstationarity of the signal. The power spectral density \\(S(f)\\) of the time series \\(y(t)\\) is defined as the squared modulus of the Fourier transform of the signal. For supposedly self-affine signals \\(S(f)\\) is expected to follow a power-law dependence on the frequency \\(f\\),

\\[S(f)\\ \\sim\\ f^{-\\beta}. \\tag{1}\\]

Equation (1) allows one to put the phenomena that produce the time series into the class of _self-affine_ phenomena. It has been argued [47, 48] that the spectral exponent \\(\\beta\\) contains information about the degree of stationarity of the signal \\(y(t)\\).
Depending on the value of \\(\\beta\\) the time series is called stationary or not: for \\(\\beta<1\\), the signal is statistically invariant under translation in time and thus called stationary, while for \\(\\beta>1\\) the signal is nonstationary. In addition, if \\(\\beta<3\\) the increments of the signal form a stationary series; in particular, the small-scale gradient field is stationary. Many geophysical fields are nonstationary with stationary increments (\\(1<\\beta<3\\)) over some scaling range. The upper bound of the nonstationary regime is required to keep the field values within their physically accessible range by limiting the amplitude of the large-scale fluctuations, which corresponds to a flatter part of the spectrum at low frequencies. Brownian motion is characterized by \\(\\beta=2\\), and white noise by \\(\\beta=0\\). Indeed, Brownian motion, or the random walk \\(z(x)\\), is a classical example of a nonstationary process. We know that its variance \\(<z^{2}(x)>\\) is proportional to \\(x\\), which proves the nonstationarity in the one-point statistics. However, in the framework of two-point statistics, this result has a different interpretation: the variance of the "increment" \\(z(x+\\xi)-z(x)\\) increases linearly with \\(\\xi\\), independently of \\(x\\), which is an indication of the stationarity of the increments. The range over which the \\(\\beta\\) exponent is well defined in Eq. (1) indicates the range over which the scaling properties of the time series are invariant. The power spectral density \\(S(f)\\) of the liquid water path data measured on April 3-4, 1998 is shown in Fig. 2. The spectral exponent \\(\\beta=1.56\\pm 0.03\\) indicates a nonstationary time series.

Figure 2: Power spectral density for data measured on April 3-4, 1998.

## 4 Roughness and Detrended Fluctuation Analysis

The fractal dimension [13, 49, 50, 51] \\(D\\) is often used to characterize the roughness of profiles [52]. Several methods are available for measuring \\(D\\), such as the box counting method, though this one is not very efficient; many others can be found in the literature [13, 49, 50, 51] and here below. For topologically one-dimensional systems, the fractal dimension \\(D\\) is related to the exponent \\(\\beta\\) by

\\[\\beta=5-2D. \\tag{2}\\]

Another "measure" of signal roughness is sometimes given by the Hurst exponent \\(Hu\\), first defined in the "rescaled range" theory of Hurst [53, 54], who suggested a method to estimate the persistence of the Nile floods and droughts. The Hurst method consists of listing the differences between the observed value at a discrete time \\(t\\) and the mean taken over an interval of size \\(N\\). The maximum (\\(y_{M}\\)) and minimum (\\(y_{m}\\)) of these cumulated differences in that interval define the range \\(R_{N}=y_{M}-y_{m}\\). The root mean square deviation \\(S_{N}\\) being also calculated, the "rescaled range" \\(R_{N}/S_{N}\\) is expected to behave like \\(N^{Hu}\\). This means that for a (discrete) self-affine signal \\(y(t)\\), the neighborhood of a particular point on the signal can be rescaled by a factor \\(b\\) using the roughness (or Hurst [49, 50]) exponent \\(Hu\\), defining the new signal \\(b^{-Hu}y(bt)\\). For the correct exponent value \\(Hu\\), the frequency dependence of the signal so obtained should be indistinguishable from that of the original one, \\(y(t)\\).
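For concreteness, the two estimates just discussed can be sketched in a few lines. The following is a minimal illustration, assuming a uniformly sampled record such as the 20 s LWP series; the function names and the synthetic test signal are ours and are not part of the original analysis. The rescaled-range statistic is applied to the increments of the signal, so it estimates the structure-function exponent \\(H_{1}\\) in the convention of Eq. (4) below, with \\(Hu=1+H_{1}\\).

```python
import numpy as np

def spectral_exponent(y, dt=20.0):
    """Estimate beta from S(f) ~ f^-beta, Eq. (1), via a log-log periodogram fit."""
    y = np.asarray(y, float) - np.mean(y)
    f = np.fft.rfftfreq(len(y), d=dt)[1:]      # drop the f = 0 bin
    S = np.abs(np.fft.rfft(y)[1:]) ** 2        # periodogram estimate of S(f)
    return -np.polyfit(np.log(f), np.log(S), 1)[0]

def rescaled_range_exponent(x, window_sizes):
    """Classic R/S estimate: <R_N/S_N> ~ N^H for the series x (here, the increments)."""
    x = np.asarray(x, float)
    avg_rs = []
    for N in window_sizes:
        rs = []
        for i in range(0, len(x) - N + 1, N):
            w = x[i:i + N]
            dev = np.cumsum(w - w.mean())      # cumulated deviations from the window mean
            if w.std() > 0:
                rs.append((dev.max() - dev.min()) / w.std())
        avg_rs.append(np.mean(rs))
    return np.polyfit(np.log(window_sizes), np.log(avg_rs), 1)[0]

# Synthetic check on a Brownian walk: beta ~ 2 and H_1 ~ 1/2 (i.e. Hu = 1 + H_1 ~ 3/2)
rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(size=10740))       # same length as the LWP record
print(spectral_exponent(walk),
      rescaled_range_exponent(np.diff(walk), [32, 64, 128, 256, 512]))
```

An alternative, structure-function estimate of \\(Hu\\) follows next.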
The roughness (Hurst) exponent \\(Hu\\) can be calculated from the height-height correlation function \\(c_{1}(\\tau)\\), or first-order structure function, which is supposed to behave like

\\[c_{1}(\\tau)=\\left\\langle\\left|y(t_{i+\\tau})-y(t_{i})\\right|\\right\\rangle_{\\tau }\\sim\\tau^{H_{1}} \\tag{3}\\]

where

\\[Hu=1+H_{1}, \\tag{4}\\]

rather than from the box counting method. For a _persistent_ signal, \\(H_{1}>1/2\\); for an _anti-persistent_ signal, \\(H_{1}<1/2\\). Flandrin has theoretically proved [55] that

\\[\\beta=2Hu-1, \\tag{5}\\]

thus \\(\\beta=1{+}2H_{1}\\). This implies that the classical random walk (Brownian motion) is such that \\(Hu=3/2\\). It is clear that

\\[D=3-Hu. \\tag{6}\\]

Fractional Brownian motion values of \\(D\\) in other fields [56, 57, 58] are practically found to lie between 1 and 2. Since white noise is a truly random process, it can be concluded that \\(Hu=1.5\\) implies an uncorrelated time series [51]. Thus \\(D>1.5\\), or \\(Hu<1.5\\), implies antipersistence, and \\(D<1.5\\), or \\(Hu>1.5\\), implies persistence. For preimposed \\(Hu\\) values of a fractional Brownian motion series, it is found that the equality here above usually holds true only in a very limited range, and the measured exponent only slowly converges toward the value implied by \\(Hu\\) [30, 59]. The above exponents and parameters can also be obtained within the detrended fluctuation analysis (DFA) method [31]. The DFA method is a tool for sorting out correlations in a self-affine time series with stationary increments [58, 60, 61]. It provides a simple quantitative parameter, the scaling exponent \\(\\alpha\\), which is a signature of the correlation properties of the signal. The advantages of DFA over many other methods are that it permits the detection of long-range correlations embedded in seemingly non-stationary time series, and that inherent trends are avoided at all time scales. The DFA technique consists of dividing a time series \\(y(t)\\) of length \\(N\\) into \\(N/\\tau\\) nonoverlapping boxes (also called windows), each containing \\(\\tau\\) points [31]. The local trend \\(z(n)\\) in each box is defined as the ordinate of a linear least-squares fit of the data points in that box. The detrended fluctuation function \\(F^{2}(\\tau)\\) is then calculated following

\\[F^{2}(\\tau)=\\frac{1}{\\tau}\\sum_{n=k\\tau+1}^{(k+1)\\tau}\\left[y(n)-z(n)\\right]^{ 2}\\qquad k=0,1,2,\\ldots,\\left(\\frac{N}{\\tau}-1\\right) \\tag{7}\\]

Averaging \\(F^{2}(\\tau)\\) over the \\(N/\\tau\\) intervals gives the root-mean-square fluctuation

\\[<F^{2}(\\tau)>^{1/2}\\sim\\tau^{\\alpha}. \\tag{8}\\]

The DFA exponent \\(\\alpha\\) is obtained from the power-law scaling of the function \\(<F^{2}(\\tau)>^{1/2}\\) with \\(\\tau\\), and represents the correlation properties of the signal: \\(\\alpha=1/2\\) indicates that the changes in the values of the time series are random and, therefore, uncorrelated with each other. If \\(\\alpha<1/2\\) the signal is anti-persistent (anti-correlated), while \\(\\alpha>1/2\\) indicates positive persistence (correlation) in the signal. Results of the DFA analysis of the liquid water path data measured on April 3-4, 1998 are plotted in Fig. 3a. The DFA function is close to a power law with an exponent \\(\\alpha=0.34\\pm 0.01\\) holding from 3 to 60 minutes. This scaling range is somewhat shorter than the 150 min scaling range we obtained [28] for a stratus cloud during the period Jan. 9-14, 1998 at the ARM SGP site.
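A direct transcription of Eqs. (7)-(8) might look like the following sketch (our own minimal implementation, with nonoverlapping boxes and linear detrending as in the text; applied directly to a Brownian walk it should recover \\(\\alpha\\approx 0.5\\), as in Fig. 3b discussed below).

```python
import numpy as np

def dfa(y, box_sizes):
    """Detrended fluctuation analysis, Eqs. (7)-(8): <F^2(tau)>^(1/2) for each tau."""
    y = np.asarray(y, float)
    out = []
    for tau in box_sizes:
        t = np.arange(tau)
        f2 = []
        for k in range(len(y) // tau):
            seg = y[k * tau:(k + 1) * tau]
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local trend z(n), Eq. (7)
            f2.append(np.mean((seg - trend) ** 2))
        out.append(np.sqrt(np.mean(f2)))                   # Eq. (8)
    return np.array(out)

box_sizes = np.array([9, 18, 36, 72, 144, 180])            # 3 to 60 min at 20 s sampling
rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=10740))
F = dfa(walk, box_sizes)
alpha = np.polyfit(np.log(box_sizes), np.log(F), 1)[0]
print(f"alpha = {alpha:.2f}")                              # ~0.5 for a Brownian walk
```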
A crossover to \\(\\alpha=0.50\\pm 0.01\\) is readily seen for longer correlation times [61], up to about 2 h, after which the statistics of the DFA function are no longer reliable. One should note that for cloud data the lower limit of the scaling range is determined by the resolution and discretization steps of the measurements. Since such clouds move at an average speed of _ca._ 10 m/s and the instrument is always directed toward the same point of the atmosphere, the 20 s discretization step is chosen to ensure ergodic sampling for an observation angle of the instrument of about \\(5^{\\circ}\\). The upper limit of the scaling range depends on the cloud lifetime. The value of \\(\\alpha\\approx 0.3\\) can be interpreted as the \\(H_{1}\\) parameter of the multifractal analysis of liquid water content [32] and of liquid water path [29]. The existence of a crossover suggests two types of correlated events, as in classical fracture processes: (i) on one hand, the nucleation and growth of diluted droplets occur in "more gas-like regions"; this process is typically slow, is governed by long-range Brownian-like fluctuations, and is expected to follow an Eden-model-like [62] growth with a trivial scaling exponent, \\(\\alpha=0.5\\) (Fig. 3b); (ii) the faster processes, with more Lévy-like fluctuations, are those which link together various fracturing parts of the cloud and are necessarily antipersistent as long as the cloud remains thermodynamically stable; they occur at shorter correlation times and govern the final cloud-breaking regime, as in any percolation process [14], with an intrinsic non-trivial scaling exponent \\(\\sim\\) 0.3.

Figure 3: (a) Detrended fluctuation function \\(<F^{2}(\\tau)>^{1/2}\\) for data measured on April 3-4, 1998. (b) The DFA function for a Brownian walk signal scales with \\(\\alpha=0.50\\pm 0.01\\) and is plotted for comparison.

Several remarks are in order. A rigorous relation between detrended fluctuation analysis and power spectral density analysis for stochastic processes has recently been established [64]: if the two scaling exponents \\(\\alpha\\) and \\(\\beta\\) are well defined, then \\(\\beta=2\\alpha+1\\) holds for \\(0<\\alpha<1\\) (\\(1<\\beta<3\\)), as for fractional Brownian walks [59, 63]. In terms of the exponents \\(\\alpha\\) and \\(\\beta\\) of the signal, we can thus talk about pink noise, \\(\\alpha=0\\) (\\(\\beta=1\\)), brown noise, \\(\\alpha=1/2\\) (\\(\\beta=2\\)), or black noise, \\(\\alpha>1/2\\) (\\(\\beta>2\\)) [13]. Black noise is related to persistence. In contrast, inertial subrange turbulence, for which \\(\\beta=5/3\\), gives \\(\\alpha=1/3\\) [65], which places it in the antipersistence regime. The two scaling exponents \\(\\alpha\\) and \\(\\beta\\) for the liquid water path signal only approximately fulfill the relation \\(\\beta=2\\alpha+1\\). This can be attributed to the peculiarities of the spectral method [66]. In general, the Fourier transform is inadequate for non-stationary signals; it is also sensitive to possible trends in the data. Different techniques have been suggested to correct these deficiencies of the spectral method [67, 68], like detrending the data before taking the Fourier transform. However, this may raise questions about the accuracy of the spectral exponent [69].
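As a trivial numerical aside, the degree to which the measured exponents satisfy this relation can be checked directly from the values quoted above (a sketch; the numbers are those reported in Sections 3 and 4):

```python
# Check of beta = 2*alpha + 1 with the measured LWP exponents
alpha, d_alpha = 0.34, 0.01            # DFA exponent (Sec. 4)
beta,  d_beta  = 1.56, 0.03            # spectral exponent (Sec. 3)
beta_pred = 2 * alpha + 1              # 1.68 +/- 0.02
print(f"predicted beta = {beta_pred:.2f}, measured beta = {beta:.2f}",
      "-> only approximately fulfilled, as noted above")
```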
## 5 Time dependence of the correlations

In the previous section we studied the type of correlations that exist in the liquid water path signal measured during cloudy atmospheric conditions on April 3-4, 1998. Here we focus on the evolution of these correlations during the same time interval, but also continuing on the next day, April 5, when the stratus cloud disappears. In doing so we can further study the influence of the time lag on correlations in the signal. In order to probe the existence of so-called _locally correlated_ and _decorrelated_ sequences [58], one can construct an observation box of a certain width \\(\\tau\\), place the box at the beginning of the data, calculate \\(\\alpha\\) for the data in that box, move the box by \\(\\Delta\\tau\\) toward the right along the signal sequence, calculate \\(\\alpha\\) in that box, and so on up to the \\(N\\)-th point of the available data. A time-dependent \\(\\alpha\\) exponent may be expected. We apply this technique to the liquid water path data signal and the result is shown in Fig. 4. For this illustration we have chosen two window sizes, 4 h and 6 h, moving the window with a step of \\(\\Delta\\tau=1\\) h. Since the value of the _local_ \\(\\alpha\\) can only be known after all data points in a box are taken into account, the reported value corresponds to the uppermost time value of the given box in Fig. 4.

Figure 4: Local \\(\\alpha\\)-exponent from the DFA analysis for the data in Fig. 1a.

One clearly observes that the \\(\\alpha\\) exponent value does not vary much when the values of \\(\\tau\\) and \\(\\Delta\\tau\\) are changed. As could be expected, there is more roughness if the box is narrower. The local \\(\\alpha\\) exponent value is always significantly below 1/2. By analogy with financial and biological studies, this is interpreted as a phenomenon related to the _fractional Brownian motion_ process mentioned above. The results from this local DFA analysis applied to the LWP data (Fig. 4) indicate two well-defined regions of scaling with different values of \\(\\alpha\\). The first region corresponds to the first two days, when a thick stratus cloud existed. The average value of the local scaling exponent over this period is \\(\\alpha=0.34\\pm 0.01\\); it is followed by a sharp rise to 0.5, then by a sharp drop below \\(\\alpha=0.1\\) on the clear-sky day. These values of the local \\(\\alpha\\) are well defined for a scaling time (range) interval extending between 2 and 25 minutes for the various \\(\\tau\\) and \\(\\Delta\\tau\\) combinations. The value of \\(\\alpha\\) close to 0.3 indicates a very large antipersistence, thus a set of fluctuations tending to induce a great stability of the system and a great antipersistence of the prevailing meteorology, in contrast to a persistent regime in which the system would be dragged out of equilibrium; it equally implies good predictability. This suggests that specific fluctuation correlation dynamics could be usefully inserted as ingredients in _ad hoc_ models. The appearance of a patch of clouds and clear sky following a period of thick stratus can be interpreted as a non-equilibrium transition. The \\(\\alpha=1/2\\) value in financial fluctuations [58] was observed to indicate a period of relative economic calm. The appropriately called thunderstorms of activity and other bubble explosions in the financial field correspond to a value different from 1/2 [70]. Thus we emphasize here that stable states can occur for \\(\\alpha\\) values that do not correspond to the Brownian 1/2 value. We conclude that the fluctuation behavior is an observational feature more important than the appearance of peaks in the raw data. Moreover, from a fundamental point of view, it seems that the variations of \\(\\alpha\\) are as important as the value itself [58]. From the point of view of predictability, \\(\\alpha\\) values significantly different from 1/2 are to be preferred, because such values imply a great degree of predictability and stability of the system.
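The moving observation box just described can be sketched as follows (again our own minimal implementation; the window and step widths follow the 4 h and 1 h choices above, the box sizes span the 2-25 min scaling range quoted in the text, and, as there, the local exponent is reported at the upper edge of each window):

```python
import numpy as np

def dfa_alpha(y, box_sizes):
    """DFA exponent alpha from Eqs. (7)-(8), via a log-log fit over the box sizes."""
    F = []
    for tau in box_sizes:
        t = np.arange(tau)
        f2 = [np.mean((y[k*tau:(k+1)*tau]
                       - np.polyval(np.polyfit(t, y[k*tau:(k+1)*tau], 1), t)) ** 2)
              for k in range(len(y) // tau)]
        F.append(np.sqrt(np.mean(f2)))
    return np.polyfit(np.log(box_sizes), np.log(F), 1)[0]

def local_dfa(y, dt_s=20.0, window_h=4.0, step_h=1.0, box_sizes=(6, 12, 25, 50, 75)):
    """Local alpha: DFA in a moving observation box of width tau, shifted by Delta-tau."""
    y = np.asarray(y, float)
    w = int(window_h * 3600 / dt_s)                 # box width in samples (4 h -> 720)
    s = int(step_h * 3600 / dt_s)                   # step in samples (1 h -> 180)
    times, alphas = [], []
    for start in range(0, len(y) - w + 1, s):
        alphas.append(dfa_alpha(y[start:start + w], np.array(box_sizes)))
        times.append((start + w) * dt_s / 3600.0)   # report alpha at the box's upper edge
    return np.array(times), np.array(alphas)
```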
## 6 Multi-affinity and Intermittency

The variations in the local \\(\\alpha\\)-exponent suggest that the nature of the correlations changes with time. As a consequence, the evolution of the time series can be decomposed into successive persistent and anti-persistent sequences [58], and multi-affine behavior can be expected. Multi-affine properties of a time-dependent signal \\(y(t)\\) are described by the so-called "q-th" order structure functions

\\[c_{q}=\\langle|y(t_{i+r})-y(t_{i})|^{q}\\rangle\\qquad i=1,2,\\ldots,N-r \\tag{9}\\]

where the average is taken over all possible pairs of points separated by a time distance \\(\\tau=t_{i+r}-t_{i}\\). Assuming a power-law dependence of the structure function, the \\(H(q)\\) spectrum is defined through the relation [71, 72]

\\[c_{q}(\\tau)\\sim\\tau^{qH(q)}\\qquad q\\geq 0 \\tag{10}\\]

The _intermittency_ of the signal can be studied through the so-called singular measure analysis. The first step that this technique requires is defining a basic measure \\(\\varepsilon(1;l)\\) as

\\[\\varepsilon(1;l)=\\frac{|\\Delta y(1;l)|}{<\\Delta y(1;l)>},\\qquad l=0,1,\\ldots,N-1 \\tag{11}\\]

where \\(\\Delta y(1;l)=y(t_{l+1})-y(t_{l})\\) is the small-scale gradient field and

\\[<\\Delta y(1;l)>=\\frac{1}{N}\\sum_{l=0}^{N-1}|\\Delta y(1;l)|. \\tag{12}\\]

This derives a stationary nonnegative field from nonstationary data and is the simplest procedure for doing so. Other techniques involve "fractional" derivatives [73] or second derivatives [74]. One can also consider taking squares [75] rather than absolute values, but that only leads to a linear relation between the exponents of these two measures. It is argued elsewhere [76] that the details of the procedure do not influence the final results of the singularity analysis. We use a spatial/temporal average in Eq. (12) rather than an ensemble average, thus making an ergodicity assumption [77, 78] that is our only recourse in empirical data analysis. Next we define a series of ever more coarse-grained and ever shorter fields \\(\\varepsilon(r;l)\\), where \\(0<l<N-r\\) and \\(r=1,2,4,\\ldots,N=2^{m}\\). The average measure in the interval \\([l;l+r]\\) is thus

\\[\\varepsilon(r;l)=\\frac{1}{r}\\sum_{l^{\\prime}=l}^{l+r-1}\\varepsilon(1;l^{\\prime })\\qquad l=0,\\ldots,N-r \\tag{13}\\]

The scaling properties of the generating function are then searched for through the equation

\\[\\chi_{q}(\\tau)=<\\varepsilon(r;l)^{q}>\\sim\\tau^{-K(q)},\\quad q\\geq 0, \\tag{14}\\]

with \\(\\tau=t_{i+r}-t_{i}\\).
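Both hierarchies of exponents, \\(H(q)\\) and \\(K(q)\\), can be estimated along the lines of Eqs. (9)-(14). The following sketch is our own minimal version; for brevity it uses nonoverlapping coarse-graining blocks in Eq. (13), whereas the text allows all positions \\(l\\), and all names are ours:

```python
import numpy as np

def structure_H(y, qs, lags):
    """H(q) from c_q(tau) ~ tau^(q H(q)), Eqs. (9)-(10)."""
    y = np.asarray(y, float)
    H = []
    for q in qs:
        cq = [np.mean(np.abs(y[r:] - y[:-r]) ** q) for r in lags]
        H.append(np.polyfit(np.log(lags), np.log(cq), 1)[0] / q)
    return np.array(H)

def singular_K(y, qs, m=10):
    """K(q) from <eps(r;l)^q> ~ r^(-K(q)), Eqs. (11)-(14)."""
    dy = np.abs(np.diff(np.asarray(y, float)))
    eps = (dy / dy.mean())[:2 ** m]            # basic measure, Eq. (11), N = 2^m points
    rs = 2 ** np.arange(m)                     # r = 1, 2, 4, ..., N/2
    K = []
    for q in qs:
        chi = [np.mean(eps.reshape(-1, r).mean(axis=1) ** q) for r in rs]
        K.append(-np.polyfit(np.log(rs), np.log(chi), 1)[0])
    return np.array(K)

qs = np.array([0.5, 1.0, 2.0, 3.0])
rng = np.random.default_rng(2)
walk = np.cumsum(rng.normal(size=10740))
print(structure_H(walk, qs, lags=[1, 2, 4, 8, 16, 32]))   # ~0.5 for all q (monofractal)
print(singular_K(walk, qs))                               # ~0 for all q (no intermittency)
```

A multi-affine, intermittent signal such as the LWP record would instead yield a \\(q\\)-dependent \\(H(q)\\) and a nontrivial \\(K(q)\\).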
It should be noted that the intermittency of a signal is related to the existence of extreme events, i.e. a distribution of events departing from a Gaussian distribution, in the evolution of the process that has generated the data. If the tails of the distribution function follow a power law, then the scaling exponent defines the critical order value beyond which the statistical moments of the signal diverge [48]. It is therefore of interest to probe the distribution of the fluctuations of a time-dependent signal \\(y(t)\\) prior to investigating its intermittency. The distribution of the fluctuations of the liquid water path signal measured on April 3-4, 1998 at the ARM Southern Great Plains site is shown in Fig. 5. The frequency distribution is not Gaussian but is rather symmetrical. The tails of the distribution follow a power law

\\[P(x)\\sim\\frac{1}{x^{\\mu}} \\tag{15}\\]

with an exponent \\(\\mu=2.75\\pm 0.12\\), away from the Gaussian \\(\\mu=2\\) value. This scaling law gives support to the argument in favor of the existence of self-affine properties, as established in Section 4 for the LWP signal when applying the DFA method. The extreme events that form the tails of the probability distribution also characterize the intermittency of the signal.

Figure 5: Distribution of the frequency of LWP fluctuations \\(\\Delta y/\\sigma=(y(t_{i+1})-y(t_{i}))/\\sigma\\), where \\(\\sigma=0.0011g/cm^{2}\\) is the standard deviation of the fluctuations for the LWP signal measured on April 3-4, 1998 (data in Fig. 1a).

In Fig. 6 the multi-fractal properties of the LWP signal are expressed by two sets of scaling functions: the \\(H(q)\\) hierarchy of functions describing the roughness of the signal and the \\(K(q)\\) hierarchy of functions describing its intermittency, as defined in Eq. (10) and Eq. (14), respectively. For \\(q=1\\), \\(H(1)\\) is the value that is given by the DFA analysis.

Figure 6: The \\(H(q)\\) and \\(K(q)\\) functions for the LWP data obtained on April 3-4, 1998.

## 7 Conclusions

Scaling properties of the liquid water path in stratus clouds have been analyzed to demonstrate the application of several methods of statistical physics for analyzing data in atmospheric sciences, and more generally in geophysics. We have found that the breaking up of a stratus cloud is related to changes in the type of correlations in the fluctuations of the signal that represents the total vertical amount of liquid water in the stratus cloud. We have demonstrated that the correlations of the LWP fluctuations are indeed more complex than usually assumed, through their multi-affine dependence.

## 8 Acknowledgements

Thanks to Luc T. Wille for inviting us to present the above results and enticing us into writing this report. Thanks to him and the State of Florida for some financial support during the conference. This research was partially supported by Battelle grant number 327421-A-N4. We acknowledge the collaboration of the U.S. Department of Energy as part of the Atmospheric Radiation Measurement Program.

## 9 Appendix

For nonprecipitating clouds, i.e., clouds having drops sufficiently small that scattering is negligible, measurements of the microwave radiometer brightness temperature \\(T_{\\rm B\\omega}\\) can be mapped onto an opacity parameter \\(\\nu_{\\omega}\\) by

\\[\\nu_{\\omega}=\\ln\\left[\\frac{(T_{\\rm mr}-T_{\\rm c})}{(T_{\\rm mr}-T_{\\rm B\\omega}) }\\right], \\tag{16}\\]

where \\(T_{\\rm c}\\) is the cosmic background "big bang" brightness temperature, equal to 2.8 K, and \\(T_{\\rm mr}\\) is an estimated "mean radiating temperature" of the atmosphere. Writing \\(\\nu_{\\omega}\\) in terms of atmospheric constituents, we have

\\[\\nu_{\\omega}=\\kappa_{\\rm V\\omega}V+\\kappa_{\\rm L\\omega}L+\\nu_{\\rm d\\omega}, \\tag{17}\\]
where \\(\\kappa_{\\rm V\\omega}\\) and \\(\\kappa_{\\rm L\\omega}\\) are _water vapor and liquid water_ path-averaged mass absorption coefficients and \\(\\nu_{\\rm d\\omega}\\) is the absorption by dry atmospheric constituents (e.g., oxygen). Next, define

\\[\\nu_{\\omega}^{*}=\\nu_{\\omega}-\\nu_{\\rm d\\omega}=\\ln\\left[\\frac{(T_{\\rm mr}-T_{ \\rm c})}{(T_{\\rm mr}-T_{\\rm B\\omega})}\\right]-\\nu_{\\rm d\\omega}. \\tag{18}\\]

The 23.8 GHz channel is sensitive primarily to water vapor, while the 31.4 GHz channel is sensitive primarily to cloud liquid water. Therefore an equation for the opacity can be written for each frequency, and the pair can be solved for the two unknowns \\(L\\) and \\(V\\), i.e.

\\[L=l_{1}\\nu_{\\omega_{1}}^{*}+l_{2}\\nu_{\\omega_{2}}^{*}\\hskip 42.679134pt(LWP) \\tag{19}\\]

and

\\[V=v_{1}\\nu_{\\omega_{1}}^{*}+v_{2}\\nu_{\\omega_{2}}^{*},\\hskip 42.679134pt(WVP) \\tag{20}\\]

where

\\[l_{1}=-\\left(\\kappa_{\\rm L\\omega_{2}}\\frac{\\kappa_{\\rm V\\omega_{1}}}{\\kappa_{ \\rm V\\omega_{2}}}-\\kappa_{\\rm L\\omega_{1}}\\right)^{-1}, \\tag{21}\\]

\\[l_{2}=\\left(\\kappa_{\\rm L\\omega_{2}}-\\kappa_{\\rm L\\omega_{1}}\\frac{\\kappa_{ \\rm V\\omega_{2}}}{\\kappa_{\\rm V\\omega_{1}}}\\right)^{-1}, \\tag{22}\\]

\\[v_{1}=\\left(\\kappa_{\\rm V\\omega_{1}}-\\kappa_{\\rm V\\omega_{2}}\\frac{\\kappa_{ \\rm L\\omega_{1}}}{\\kappa_{\\rm L\\omega_{2}}}\\right)^{-1}, \\tag{23}\\]

\\[v_{2}=-\\left(\\kappa_{\\rm V\\omega_{1}}\\frac{\\kappa_{\\rm L\\omega_{2}}}{\\kappa_{ \\rm L\\omega_{1}}}-\\kappa_{\\rm V\\omega_{2}}\\right)^{-1}. \\tag{24}\\]

## Bibliography

* (1) D. Andrews: _An Introduction to Atmospheric Physics_ (Cambridge University Press, Cambridge, 2000)
* (2) R.A. Anthes, H.A. Panofsky, J.J. Cahir, A. Rango: _The Atmosphere_ (Bell & Howell Company, Columbus, OH, 1975)
* (3) R.R. Rogers: _Short Course in Cloud Physics_ (Pergamon Press, New York, 1976)
* (4) C.F. Bohren: _Clouds in a Glass of Beer_ (John Wiley & Sons, New York, 1987)
* (5) E. N. Lorenz: J. Atmos. Sci. **20**, 130 (1963)
* (6) L. D. Landau, E.M. Lifshitz: _Fluid Mechanics_ (Addison-Wesley, Reading, MA, 1959)
* (7) T.N. Palmer: Phys. Rep. **63**, 71 (2000)
* (8) S.G. Philander: Phys. Rep. **62**, 123 (1999)
* (9) A. Maurellis: Physics World **14**, 22 (2001); D. Rosenfeld, W. Woodley: Physics World **14**, 33 (2001)
* (10) H.-W. Ou: J. Climate **14**, 2976 (2001)
* (11) N.D. Marsh, H. Svensmark: Phys. Rev. Lett. **85**, 5004 (2000)
* (12) H. Svensmark: Phys. Rev. Lett. **81**, 5027 (1998)
* (13) M. Schroeder: _Fractals, Chaos and Power Laws_ (W.H. Freeman and Co., New York, 1991)
* (14) H. E. Stanley: _Phase Transitions and Critical Phenomena_ (Oxford Univ. Press, Oxford, 1971)
* (15) D. Stauffer, A. Aharony: _Introduction to Percolation Theory_, 2nd printing (Taylor & Francis, London, 1992)
* (16) P. Bak: _How Nature Works_ (Springer, New York, 1996)
* (17) D.L. Turcotte: Phys. Rep. **62**, 1377 (1999)
* (18) E. Koscielny-Bunde, A. Bunde, S. Havlin, H. E. Roman, Y. Goldreich, H.-J. Schellnhuber: Phys. Rev. Lett. **81**, 729 (1998)
* (19) E. Koscielny-Bunde, A. Bunde, S. Havlin, Y. Goldreich: Physica A **231**, 393 (1993)
* (20) C.R. Neto, A. Zanandrea, F.M. Ramos, R.R. Rosa, M.J.A. Bolzan, L.D.A. Sa: Physica A **295**, 215 (2001)
* (21) H.F.C. Velho, R.R. Rosa, F.M. Ramos, R.A. Pielke, C.A. Degrazia, C.R. Neto, A. Zanadrea: Physica A **295**, 219 (2001)
* (22) H.A. Panofsky, J.A. Dutton: _Atmospheric Turbulence_ (John Wiley & Sons, New York, 1983)
* (23) S. Lovejoy: Science **216**, 185 (1982)
* (24) S. Lovejoy, D. Schertzer: Ann. Geophys. B **4**, 401 (1986)
* (25) H.G.E. Hentschel, I. Procaccia: Phys. Rev. A **27**, 1266 (1983)
* (26) K. Nagel, E. Raschke: Physica A **182**, 519 (1992)
* (27) K. Ivanova, M. Ausloos: Physica A **274**, 349 (1999)
* (28) K. Ivanova, M. Ausloos, E.E. Clothiaux, T.P. Ackerman: Europhys. Lett. **52**, 40 (2000)
* (29) K. Ivanova, T. Ackerman: Phys. Rev. E **59**, 2778 (1999)
* (30) B.D. Malamud, D.L. Turcotte: J. Stat. Plann. Infer. **80**, 173 (1999)
* (31) C.-K. Peng, S.V. Buldyrev, S. Havlin, M. Simmons, H.E. Stanley, A.L. Goldberger: Phys. Rev. E **49**, 1685 (1994)
* (32) A. Davis, A. Marshak, W. Wiscombe, R. Cahalan: J. Geophys. Res. **99**, 8055 (1994)
* (33) N. Decoster, S.G. Roux, A. Arneodo: Eur. Phys. J. B **15**, 739 (2000)
* (34) G.K. Zipf: _Human Behavior and the Principle of Least Effort_ (Addison-Wesley, Cambridge, MA, 1949)
* (35) N. Vandewalle, M. Ausloos: Physica A **268**, 240 (1999)
* (36) M. Ausloos, K. Ivanova: Physica A **270**, 526 (1999)
* (37) R. Friedrich, J. Peinke, Ch. Renner: Phys. Rev. Lett. **84**, 5224 (2000)
* (38) K. Ivanova, M. Ausloos: unpublished
* (39) G.M. Stokes, S.E. Schwartz: Bull. Am. Meteorol. Soc. **75**, 1201 (1994)
* (40) http://www.arm.gov
* (41) W.G. Rees: _Physical Principles of Remote Sensing_ (Cambridge University Press, Cambridge, 1990)
* (42) J.R. Garratt: _The Atmospheric Boundary Layer_ (Cambridge University Press, Cambridge, 1992)
* (43) E.R. Westwater: Radio Science **13**, 677 (1978)
* (44) E.R. Westwater: 'Ground-based microwave remote sensing of meteorological variables', in: _Atmospheric Remote Sensing by Microwave Radiometry_, ed. by M.A. Janssen (John Wiley and Sons, New York, 1993) pp. 145-213
* (45) J.C. Liljegren, B.M. Lesht: 'Measurements of integrated water vapor and cloud liquid water from microwave radiometers at the DOE ARM Cloud and Radiation Testbed in the U.S. Southern Great Plains', in: _IEEE Int. Geosci. and Remote Sensing Symp._ **3**, Lincoln, Nebraska (1996) pp. 1675-1677
* (46) R. Friedrich, J. Peinke: Phys. Rev. Lett. **78**, 863 (1997)
* (47) B.B. Mandelbrot: _The Fractal Geometry of Nature_ (W.H. Freeman, New York, 1982)
* (48) D. Schertzer, S. Lovejoy: J. Geophys. Res. **92**, 9693 (1987)
* (49) P. S. Addison: _Fractals and Chaos_ (Inst. of Phys., Bristol, 1997)
* (50) K. J. Falconer: _The Geometry of Fractal Sets_ (Cambridge Univ. Press, Cambridge, 1985)
* (51) B. J. West, B. Deering: _The Lure of Modern Science: Fractal Thinking_ (World Scientific, Singapore, 1995)
* (52) B.B. Mandelbrot, D.E. Passoja, A.J. Paulay: Nature **308**, 721 (1984)
* (53) H. E. Hurst: Trans. Amer. Soc. Civ. Engin. **116**, 770 (1951)
* (54) H. E. Hurst, R.P. Black, Y.M. Simaika: _Long Term Storage_ (Constable, London, 1965)
* (55) P. Flandrin: IEEE Trans. Inform. Theory **35**, 197 (1989)
* (56) M. Ausloos, N. Vandewalle, K. Ivanova: 'Time is money', in: _Noise of Frequencies in Oscillators and Dynamics of Algebraic Numbers_, ed. by M. Planat (Springer, Berlin, 2000) pp. 156-171
* (57) M. Ausloos, N. Vandewalle, Ph. Boveroux, A. Minguet, K. Ivanova: Physica A **274**, 229 (1999)
* (58) N. Vandewalle, M. Ausloos: Physica A **246**, 454 (1997)
* (59) D.L. Turcotte: _Fractals and Chaos in Geology and Geophysics_ (Cambridge University Press, Cambridge, 1997)
* (60) M. Ausloos, K. Ivanova: Int. J. Mod. Phys. C **12**, 169 (2001)
* (61) K. Hu, Z. Chen, P.Ch. Ivanov, P. Carpena, H.E. Stanley: Phys. Rev. E (in press) (2001)
* (62) R. Jullien, R. Botet: J. Phys. A **18**, 2279 (1985)
* (63) A.S. Monin, A.M. Yaglom: _Statistical Fluid Mechanics_, Vol. 2 (MIT Press, Boston, 1975)
* (64) C. Heneghan, G. McDarby: Phys. Rev. E **62**, 6103 (2000)
* (65) U. Frisch: _Turbulence: The Legacy of A.N. Kolmogorov_ (Cambridge University Press, Cambridge, 1995)
* (66) P.F. Panter: _Modulation, Noise, and Spectral Analysis_ (McGraw-Hill Book Company, New York, 1965)
* (67) M.B. Priestley: _Spectral Analysis and Time Series_ (Academic Press, London, 1981)
* (68) D.B. Percival, A.T. Walden: _Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques_ (Cambridge University Press, Cambridge, 1994)
* (69) J.D. Pelletier: Phys. Rev. Lett. **78**, 2672 (1997)
* (70) N. Vandewalle, M. Ausloos: Int. J. Comput. Anticipat. Syst. **1**, 342 (1998)
* (71) A. Davis, A. Marshak, W. Wiscombe, R. Cahalan: J. Atmos. Sci. **53**, 1538 (1996)
* (72) A. Marshak, A. Davis, W. Wiscombe, R. Cahalan: J. Atmos. Sci. **54**, 1423 (1997)
* (73) F. Schmitt, D. La Vallee, D. Schertzer, S. Lovejoy: Phys. Rev. Lett. **68**, 305 (1992)
* (74) Y. Tessier, S. Lovejoy, D. Schertzer: J. Appl. Meteorol. **32**, 223 (1993)
* (75) C. Meneveau, K.R. Sreenivasan: Phys. Rev. Lett. **59**, 1424 (1987)
* (76) D. La Vallee, S. Lovejoy, D. Schertzer, P. Ladoy: 'Nonlinear variability, multifractal analysis and simulation of landscape topography', in: _Fractals in Geography_, ed. by L. De Cola and N. Lam (Kluwer, Dordrecht-Boston, 1993)
* (77) A. A. Borovkov: _Ergodicity and Stability of Stochastic Processes_ (John Wiley, New York, 1998)
* (78) R. Holley, E.C. Waymire: Ann. J. Appl. Prob. **2**, 819 (1993)
# Some Statistical Physics Approaches for Trends and Predictions in Meteorology

Specific aspects of time series analysis are discussed. They are related to the analysis of atmospheric data that are pertinent to clouds. A brief introduction to some of the most interesting topics of current research on climate/weather predictions is given. Scaling properties of the liquid water path in stratus clouds are analyzed to demonstrate the application of several methods of statistical physics for analyzing data in atmospheric sciences, and more generally in geophysics. The breaking up of a stratus cloud is shown to be related to changes in the type of correlations in the fluctuations of the signal that represents the total vertical amount of liquid water in the stratus cloud. It is demonstrated that the correlations of the liquid water path fluctuations are indeed more complex than usually assumed, through their multi-affine dependence.
# Quantum resonances in a single plaquette of Josephson junctions: excitations of Rabi oscillations

M. V. Fistul

Max-Planck Institut fur Physik Komplexer Systeme, D-01187, Dresden, Germany

November 4, 2021

###### pacs: 74.50.+r, 03.65.Yz, 85.25.Cp

Various quantum effects predicted and observed in _macroscopic_ systems [1, 2, 3, 4, 5, 6, 7] have attracted great attention, as they allow one to probe the foundations of quantum mechanics [1] and the applicability of quantum mechanics to dissipative systems [7, 8, 9]. Moreover, interest in this field has been boosted by the possibility of using macroscopic quantum coherent effects for quantum computation [4, 6, 10]. In this field of study, Josephson coupled systems consisting of a few interacting Josephson junctions are of special interest. These systems contain a large number of particles and still their behaviour is determined by macroscopic variables, namely the Josephson phases \\(\\varphi_{i}(t)\\). Moreover, the dynamics of the Josephson phases can be controlled by an externally applied magnetic field \\(H_{ext}\\) and a dc bias \\(\\gamma\\), and at low temperatures the number of quasiparticles is extremely small; therefore, the dissipation caused by the quasiparticle current is also small. Indeed, peculiar macroscopic quantum effects such as tunneling and resonant tunneling of the Josephson phase and discrete energy levels have been observed in single Josephson junctions, SQUID systems, etc. In the presence of _externally_ applied microwave radiation, the enhancement of both tunneling [3] and resonant tunneling of the Josephson phase has also been observed [4, 6]. These effects can be considered as evidence of coherent quantum dynamics, i.e. the presence of coherent Rabi oscillations in macroscopic systems. The majority of macroscopic quantum effects have been studied with the Josephson junctions biased in the superconducting state, i.e. the zero dc voltage state, and the quantum mechanical behaviour of the resistive state of Josephson coupled systems has not been analyzed. It is well known that in the classical regime the resistive state of Josephson coupled systems displays intrinsic Josephson current oscillations. The oscillating Josephson current can excite electromagnetic oscillations (EOs) in the superconducting loops, and in turn these EOs resonantly interact with the time-dependent Josephson current. In the case of weak damping such an interaction leads to a pronounced resonant step in the current-voltage characteristics (\\(I\\)-\\(V\\) curves). The voltage position of the resonance \\(V\\) is determined by the characteristic frequency of EOs \\(\\omega_{0}\\) as

\\[V\\ =\\ \\frac{\\hbar\\omega_{0}}{2e}\\ . \\tag{1}\\]

The magnitude of the resonance depends on the externally applied magnetic field \\(H_{ext}\\), and the width of the resonance is determined by the damping parameter, which, in the classical regime, is due to the presence of a quasiparticle (dissipative) current. In this paper I present a theoretical (semiclassical) analysis of the quantum coherent effects in the _resistive_ (whirling) state of a dc driven single anisotropic plaquette containing three small Josephson junctions. This system consists of two _vertical_ junctions parallel to the bias current \\(\\gamma\\) and a _horizontal_ junction in the transverse direction, as presented in Fig. 1.
The dynamics of the system crucially depends on two parameters: the anisotropy \\(\\eta=\\frac{I_{cH}}{I_{cV}}\\), where \\(I_{cH}\\) and \\(I_{cV}\\) are respectively the critical currents of the horizontal and vertical junctions, and the discreteness parameter (normalized inductance of the cell) \\(\\beta_{L}\\). Moreover, the quantum effects are enhanced in the limit of a small Josephson energy, \\(E_{J}\\ =\\ \\frac{\\hbar I_{cV}}{2e}\\ \\leq\\ \\hbar\\omega_{p}\\), where \\(\\omega_{p}\\) is the plasma frequency. In this limit it is also natural to apply an external charge \\(Q\\) to the horizontal junction (see Fig. 1). This charge controls the frequency of transitions between different quantum levels of the EOs. Such a system represents the simplest case that allows one to couple the Josephson current oscillations to a nonlinear oscillator (the horizontal junction), and therefore to remove the quantum-classical correspondence of a harmonic oscillator [1, 3] and to observe quantum effects in the resistive state. Note here that the coherent quantum-mechanical behaviour of a single plaquette of Josephson junctions biased in the superconducting state has been studied in detail in Refs. [4, 5]. We find that the quantum effects alter the resonant interaction between the EOs and the oscillating Josephson current, and that the voltage positions of the resonant steps are determined by the discrete _energy levels_ of a nonlinear oscillator [11]. Moreover, the obtained peculiar dependence of the magnitude and the width of the resonances on the externally applied magnetic field \\(H_{ext}\\) can be considered as a fingerprint of _coherent Rabi oscillations_ excited as the quantum transitions in the spectrum of EOs occur. The dynamics of a single plaquette of three Josephson junctions is determined by the time-dependent Josephson phases of the vertical junctions, \\(\\varphi_{1,2}^{v}(t)\\), and of the horizontal junction, \\(\\varphi_{h}(t)\\). The dynamics of the Josephson phases is described by the Lagrangian

\\[L = E_{J}\\{\\frac{1}{2\\omega_{p}{}^{2}}[(\\dot{\\varphi}_{1}^{v})^{2}+( \\dot{\\varphi}_{2}^{v})^{2}+\\eta(\\dot{\\varphi}_{h}-\\alpha v_{g})^{2}]+\\cos(\\varphi _{1}^{v})+ \\tag{2}\\]
\\[\\quad\\quad+\\cos(\\varphi_{2}^{v})+\\eta\\cos(\\varphi_{h})+\\gamma( \\varphi_{1}^{v}+\\varphi_{2}^{v})-\\]
\\[\\quad\\quad-\\frac{1}{\\beta_{L}}(\\varphi_{1}^{v}-\\varphi_{2}^{v}+ \\varphi_{h}+2\\pi f)^{2}\\}\\]

Here, the dc bias \\(\\gamma\\) is normalized to the critical current of the junction, \\(I_{cV}\\), and the normalized gate voltage is \\(v_{g}\\ =\\ 2\\pi V_{g}/\\Phi_{0}\\). The externally applied magnetic field \\(H_{ext}\\) is characterized by the frustration \\(f=\\frac{\\Phi_{ext}}{\\Phi_{0}}\\), i.e. the magnetic flux threading the cell normalized to the magnetic flux quantum. In the case of a low-inductance environment, both quantum and thermal fluctuations only weakly alter the Josephson phases of the vertical junctions, which are in the whirling state [12]. Thus, the Josephson phases can be naturally decomposed as

\\[\\varphi_{1}^{v}(t) = \\omega t-\\pi f-\\xi(t)\\ ,\\]
\\[\\varphi_{2}^{v}(t) = \\omega t+\\pi f+\\xi(t)\\ . \\tag{3}\\]

The frequency \\(\\omega\\ =\\ 2eV/\\hbar\\) is determined by the dc voltage \\(V\\) across the junction. As a result we find that the supercurrent flowing through the vertical junctions, \\(I_{s}\\), is expressed in the form

\\[I_{s}\\ =\\ I_{cV}<\\sin(\\omega t)\\cos(\\pi f+\\xi(t))>\\ , \\tag{4}\\]

where \\(< >\\) denotes time averaging.
Next, to simplify the analysis, we consider a small plaquette of Josephson junctions with discreteness parameter \\(\\beta_{L}\\ <<\\ 1\\). In this case the relationship \\(\\xi(t)\\ =\\ \\varphi_{h}/2\\) is valid, and the system is characterized by one degree of freedom, \\(\\xi\\). Introducing the canonical momentum \\(p_{\\xi}\\ =\\ \\partial L/\\partial\\dot{\\xi}\\) and the corresponding momentum operator \\(\\hat{p}_{\\xi}\\ =\\ -i\\hbar\\partial/\\partial\\xi\\) [4], we arrive at the time-dependent Hamiltonian

\\[\\hat{H}(t)\\ =\\hat{H}_{0}-2E_{J}\\cos(\\pi f+\\xi)\\cos(\\omega t)\\ ,\\]
\\[\\hat{H}_{0}\\ =\\ \\frac{\\omega_{p}^{2}}{E_{J}(4+8\\eta)}(\\hat{p}_{\\xi}-4\\eta \\alpha v_{g})^{2}-E_{J}\\eta\\cos 2\\xi\\ . \\tag{5}\\]

Here, \\(\\hat{H}_{0}\\) is the Hamiltonian of the autonomous nonlinear oscillator, where the first term represents the total charging energy of the system and the second term is the Josephson energy of the horizontal junction. The last term in \\(\\hat{H}(t)\\) represents an _intrinsic_ magnetic-field-dependent coupling between the time-dependent Josephson current and the EOs. We are interested in the resonant interaction between the ac Josephson current and the EOs, and thus two relevant energy levels \\(E_{m}\\) and \\(E_{n}\\) of the Hamiltonian \\(\\hat{H}_{0}\\), with \\(\\omega_{nm}(v_{g})\\ =\\ E_{n}(v_{g})-E_{m}(v_{g})\\ \\simeq\\ \\hbar\\omega\\), are important for our problem. These energy levels may be controlled by the externally applied gate voltage \\(v_{g}\\). Because a nonlinear oscillator has no coinciding frequency differences \\(\\omega_{nm}\\), we may truncate our system to a two-level system. With this crucial assumption the Hamiltonian \\(\\hat{H}(t)\\) is written in the simple form

\\[\\hat{H}(t) = \\frac{\\omega_{nm}}{2}\\hat{\\sigma}_{z}+E_{J}(a_{nn}-a_{mm})\\cos( \\omega t)\\hat{\\sigma}_{z} \\tag{6}\\]
\\[-2E_{J}|a_{nm}|\\cos(\\omega t)\\hat{\\sigma}_{x}\\ ,\\]

where the matrix elements \\(a_{nm}\\) are

\\[a_{nm}\\ =\\ \\int_{0}^{2\\pi}d\\xi\\,\\psi_{n}^{*}(\\xi;v_{g})\\psi_{m}(\\xi;v_{g})\\cos( \\pi f+\\xi)\\ . \\tag{7}\\]

Here, \\(\\psi_{n,m}(\\xi;v_{g})\\) are the gate-voltage dependent wave functions of the autonomous nonlinear oscillator, and \\(\\hat{\\sigma}_{x,z}\\) are the Pauli matrices.

Figure 1: Sketch of the plaquette with three Josephson junctions (marked by crosses). Arrows indicate the directions of external current flow (dc bias \\(\\gamma\\)). A gate voltage \\(V_{g}\\) allows one to introduce a charge \\(Q\\) into the system, and the gate capacitors are \\(C=\\alpha C_{H}\\). The externally applied magnetic field \\(H_{ext}\\) is also shown. Dashed circles denote junctions in the resistive (whirling) state.

Next, we use the standard density matrix approach [13]. In the case of weak damping the corresponding time-dependent equation for the density matrix \\(\\hat{\\rho}(t)\\) takes the form

\\[\\hbar\\dot{\\hat{\\rho}}(t)\\ =\\ -i[\\hat{H}(t),\\hat{\\rho}(t)]+[\\hat{H}_{R},(\\hat{\\rho}( t)-\\rho_{\\beta})]\\ , \\tag{8}\\]

where \\(\\rho_{\\beta}\\) is the equilibrium density matrix, and the dissipative operator \\(\\hat{H}_{R}\\) characterizes the various relaxation processes.
In the simplest case this operator is described by two damping parameters, \\(\\nu_{1,2}\\) [13, 14]. By making use of Eq. (4), the supercurrent \\(I_{s}\\) is expressed through the quantum-mechanical average of the operators \\(\\hat{\\sigma}_{x,z}\\) as

\\[I_{s}\\ =\\ I_{cV}<|a_{nm}|\\sin(\\omega t)\\bar{T}r\\{\\hat{\\rho}(t)\\hat{ \\sigma}_{x}\\}+\\]
\\[+ \\frac{a_{nn}-a_{mm}}{2}\\sin(\\omega t)\\bar{T}r\\{\\hat{\\rho}(t)\\hat{ \\sigma}_{z}\\}>\\ . \\tag{9}\\]

Eq. (8) is a particular case of the well-known Bloch equations [13, 14], and by using the rotating wave approximation [14] we finally obtain \\(I_{s}\\) as

\\[I_{s}\\ =\\ \\frac{2eE_{J}^{2}}{\\hbar^{2}}|a_{nm}|^{2}\\frac{\\nu_{2}}{(\\omega- \\omega_{nm})^{2}+\\nu_{2}^{2}+2(\\frac{E_{J}}{\\hbar})^{2}(\\frac{\\nu_{2}}{\\nu_{1} })|a_{nm}|^{2}} \\tag{10}\\]

Thus, Eq. (10) shows that the current-voltage characteristics of the plaquette with three small Josephson junctions can display a number of resonances. The physical origin of these resonances is the resonant absorption of the ac Josephson oscillations by the horizontal Josephson junction, which remains in the superconducting state. The voltage positions of the resonances, \\(V\\ \\simeq\\ \\hbar\\omega_{nm}/2e\\), map onto the various transitions occurring in the spectrum of the EOs (the Josephson phase of the horizontal junction). The width of the resonances is determined by the relaxation rates when \\(\\nu\\ \\gg\\ E_{J}|a_{nm}|/\\hbar\\), or by the frequency \\(\\omega_{R}\\ \\simeq\\ \\frac{E_{J}|a_{nm}|}{\\hbar}\\) of the coherent Rabi oscillations in the opposite limit (\\(\\nu\\ \\ll\\ E_{J}|a_{nm}|/\\hbar\\)). The maximum magnitude of the resonance depends on the damping parameters \\(\\nu_{1,2}\\) and may reach the value \\(I_{s}^{max}\\ \\simeq\\ e\\nu_{2}\\). Note here that we assumed the low temperature regime (\\(T\\ \\leq\\ \\hbar\\omega_{nm}\\)) and did not take into account processes involving multi-photon interactions between the ac Josephson current and the EOs. These multi-photon interactions lead to additional subharmonic resonances (\\(\\omega\\ \\simeq\\ \\omega_{nm}/k\\)) of smaller magnitude.
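To visualize how the resonance width crosses over from damping-limited to Rabi-limited, one can evaluate Eq. (10) numerically. The following sketch uses reduced units (\\(\\hbar=e=I_{cV}=1\\)) and arbitrary illustrative parameter values of our own choosing:

```python
import numpy as np

# Resonant supercurrent of Eq. (10) in reduced units (hbar = e = I_cV = 1).
def I_s(omega, omega_nm, E_J, a_nm, nu1, nu2):
    num = 2.0 * E_J**2 * abs(a_nm)**2 * nu2
    den = (omega - omega_nm)**2 + nu2**2 + 2.0 * E_J**2 * (nu2 / nu1) * abs(a_nm)**2
    return num / den

omega = np.linspace(0.5, 1.5, 1001)
# Damping-limited regime, E_J |a_nm| << nu: width ~ nu_2
weak = I_s(omega, 1.0, E_J=0.002, a_nm=1.0, nu1=0.01, nu2=0.01)
# Rabi-limited regime, E_J |a_nm| >> nu: width ~ omega_R = E_J |a_nm|
rabi = I_s(omega, 1.0, E_J=0.05, a_nm=1.0, nu1=0.01, nu2=0.01)
for curve, label in [(weak, "damping-limited"), (rabi, "Rabi-limited")]:
    above = omega[curve >= 0.5 * curve.max()]
    print(f"{label}: FWHM ~ {above[-1] - above[0]:.3f}")
```

In the second case the printed width tracks \\(\\omega_{R}\\) rather than \\(\\nu_{2}\\), which is precisely the magnetic-field-dependent broadening discussed above, since \\(a_{nm}\\) depends on the frustration \\(f\\).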
The spectrum \\(E_{n}(v_{g})\\) and the corresponding wave functions are found as periodic solutions of the Schrödinger equation

\\[\\hat{H}_{0}\\psi_{n}(\\xi;v_{g})\\ =\\ E_{n}(v_{g})\\psi_{n}(\\xi;v_{g}). \\tag{11}\\]

It is well known that the spectrum \\(E_{n}(v_{g})\\) of Eq. (11) contains an infinite number of bands and is controlled by the gate voltage \\(v_{g}\\) [15, 16]. Although the solutions of (11) can be analyzed by making use of Mathieu functions [17] for an arbitrary ratio \\(E_{J}/\\hbar\\omega_{p}\\), here we consider the regime of small Josephson energy, \\(E_{J}\\ \\ll\\ \\hbar\\omega_{p}\\), where all results simplify. In this limit, and at low temperatures where the transitions between the ground state and the excited levels are important, we obtain

\\[\\omega_{n0}\\ \\simeq\\ \\frac{\\omega_{p}^{2}}{E_{J}(4+8\\eta)}|n(n-8\\eta\\alpha v_{g})| \\tag{12}\\]

The typical dependence of \\(E_{n}(v_{g})\\) and a number of transitions are shown in Fig. 2a. By making use of perturbation theory, the relevant matrix elements \\(a_{nm}\\) are obtained in this limit: e.g. \\(a_{\\pm 1\\,0}\\ \\simeq\\ 1\\) and \\(a_{\\pm 3\\,0}\\ \\simeq\\ (\\frac{E_{J}}{\\hbar\\omega_{p}})^{2}\\). The transition \\(0\\to 2\\) does not appear in the \\(I\\)-\\(V\\) curve because the matrix element \\(a_{2\\,0}\\) is rather small. This is due to a specific symmetry of the potential energy (\\(\\propto\\cos 2\\xi\\)) in the Hamiltonian \\(\\hat{H}_{0}\\). The calculated resonant current-voltage characteristics are presented in Fig. 2b. When the Josephson energy \\(E_{J}\\) is small, the voltage positions of the resonances are strongly affected by the gate voltage \\(v_{g}\\), but the widths and the magnitudes of the resonances depend only weakly on the externally applied magnetic field \\(H_{ext}\\). In the opposite case, when the Josephson energy is large, \\(E_{J}\\ \\gg\\ \\hbar\\omega_{p}\\), the situation is reversed: the voltage positions of the resonances are weakly altered by \\(v_{g}\\), but the width of the resonances depends strongly on \\(H_{ext}\\). For intermediate values, \\(E_{J}\\ \\simeq\\ \\hbar\\omega_{p}\\), a strong dependence of the resonant current-voltage characteristics on both parameters \\(v_{g}\\) and \\(H_{ext}\\) is found. In conclusion, we have shown that a particular system, a single plaquette containing three small Josephson junctions, displays resonances in the \\(I\\)-\\(V\\) curve. These resonances are due to the resonant absorption of intrinsic ac Josephson oscillations and are the fingerprints of various transitions between the discrete energy levels of the macroscopic Josephson phase. The coherent quantum-mechanical dynamics of these transitions may be controlled by variation of the bias current \\(\\gamma\\), the gate voltage \\(V_{g}\\) and the externally applied magnetic field \\(H_{ext}\\). Finally, we note that similar quantum resonances may also be found in more complex Josephson (or mixed) coupled systems, e.g. inductively coupled dc and RF SQUIDs, or a dc SQUID coupled to quantum dots [18], etc. [11] Measurements of these resonances and of their dependence on the parameters of the system should allow one to study in detail the coherent quantum-mechanical dynamics of macroscopic variables. I thank S.-G. Chung, S. Flach, P. Hakonen, and A. V. Ustinov for useful discussions.

## References

* [1] A. O. Caldeira and A. J. Leggett, Ann. Phys. **149**, 374 (1983).
* [2] M. Tinkham, _Introduction to Superconductivity_ (McGraw-Hill, New York, 1996).
* [3] M. H. Devoret, J. M. Martinis and J. Clarke, Phys. Rev. Lett. **55**, 1908 (1985).
* [4] T. P. Orlando, J. E. Mooij, L. Tian, C. H. van der Wal, L. S. Levitov, S. Lloyd, J. J. Mazo, Phys. Rev. B **60**, 15398 (1999).
* [5] C. H. van der Wal, A. C. J. ter Haar, F. K. Wilhelm, R. N. Schouten, C. J. P. M. Harmans, T. P. Orlando, S. Lloyd and J. E. Mooij, Science **290**, 773 (2000).
* [6] J. R. Friedman, V. Patel, W. Chen, S. K. Tolpygo, and J. E. Lukens, Nature **406**, 43 (2000); R. Rouse, S. Y. Han, and J. E. Lukens, Phys. Rev. Lett. **75**, 1614 (1995).
* [7] T. Dittrich, P. Hanggi, G. Ingold, B. Kramer, G. Schon, and W. Zwerger, _Quantum Transport and Dissipation_ (Wiley-VCH, 1998).
* [8] H. Dekker, Phys. Rep. **80**, 1 (1981).
* [9] A. O. Caldeira and A. J. Leggett, Physica **121A**, 587 (1983).
* [10] A. Wallraff, Y. Koval, M. Levitchev, M. V. Fistul, and A. V. Ustinov, J. Low Temp. Phys. **118**, 543 (2000).
* [11] A similar spectroscopy of discrete levels of the macroscopic Josephson phase has been carried out experimentally for a SQUID loop coupled to a single small Josephson junction; see R. Lindell, J. Penttila, M. Paalanen, and P. Hakonen, Bulletin of the APS **46**, No. 1 (2001).
* [12] G.-L. Ingold and Yu. V. Nazarov, in _Single Charge Tunneling: Coulomb Blockade Phenomena in Nanostructures_, edited by H. Grabert and M. H.
Devoret (Plenum Press, New York, 1992), Chap. 2.
* [13] K. Blum, _Density Matrix Theory and Applications_ (Plenum, New York, 1981).
* [14] R. Loudon, _The Quantum Theory of Light_ (Clarendon Press, Oxford, 1997).
* [15] K. K. Likharev and A. B. Zorin, J. Low Temp. Phys. **59**, 347 (1985).
* [16] G. Schon and A. D. Zaikin, Phys. Rep. **198**, 237 (1990).
* [17] M. Abramowitz and I. Stegun, _Handbook of Mathematical Functions_ (Dover, New York, 1964).
* [18] A. W. Holleitner, H. Qin, F. Simmel, B. Irmer, R. H. Blick, J. P. Kotthaus, A. V. Ustinov, and K. Eberl, New J. Phys. **2**, 3.1 (2000).
We present a theoretical study of the quantum regime of the _resistive_ (whirling) state of a dc driven anisotropic single plaquette containing three small Josephson junctions. The current-voltage characteristics of such a system display resonant steps that are due to the resonant interaction between the time-dependent Josephson current and the excited electromagnetic oscillations (EOs). The voltage positions of the resonances are determined by the quantum interband transitions of the EOs. We show that in the quantum regime, as the system is driven on resonance, coherent Rabi oscillations between the quantum levels of the EOs occur. At variance with the classical regime, the magnitude and the width of the resonances are determined by the frequency of the Rabi oscillations, which in turn depends in a peculiar manner on the externally applied magnetic field and the parameters of the system.
# Radial Modes of Neutron Stars with a Quark Core

P.K. Sahu, G.F. Burgio and M. Baldo

Istituto Nazionale di Fisica Nucleare, Sezione di Catania, Corso Italia 57, I-95129 Catania, Italy

###### dense matter -- equation of state -- stars: neutron -- stars: oscillations

Nuclear matter at sufficiently high density and temperature is expected to undergo a phase transition to a quark-gluon plasma. Indirect evidence from heavy-ion experiments at CERN (Heinz 2000; Heinz & Jacobs 2000) and, more recently, at RHIC (Blaizot 2001) seems to support the formation of a quark-gluon plasma. Such a phase transition might occur inside neutron stars, because these are cold and very compact astrophysical objects. It is therefore very interesting to study the effects of possible phase transitions on neutron star observables such as maximum gravitational masses, radii, oscillation frequencies, etc. In the present letter, we analyze the consequences of a hadron-quark phase transition on the periods of radial oscillations of neutron stars. In fact, more than three decades ago, Cameron (1965) suggested that vibrations of neutron stars could excite motions with interesting astrophysical implications. There have been several investigations of vibrating neutron stars, and simple dimensional analysis suggests that the period of the fundamental mode should be of the order of milliseconds. More than two decades later, Cutler et al. (1990) concluded that neutron stars of about one solar mass and a radius of about 10 km give periods of (3-5) ms, and that these are relatively insensitive to the exact value of the central density. Also, Datta et al. (1992) calculated the oscillation periods of strange quark stars and found them to lie in the range 0.06-0.3 ms. These values were not substantially different from those of conventional neutron stars, characterized by periods \\(\\sim\\) 0.3 ms for the primary mode and \\(\\leq\\) 0.2 ms for the higher modes. Since those neutron stars were assumed to be composed of hadron matter only, the results may be different if a core made of quark matter is present. Of course, the results depend strictly on the construction of the equation of state, along with some constraints on the parameters in both the hadron and quark phases. Here we are mainly searching for possible signals of the onset of the quark phase, and therefore the details of the EOS should not be relevant. In this letter, we adopt the equations of state for neutron stars with a quark core as developed very recently by Burgio et al. (2001), and then estimate the periods of radial oscillations by using the pulsation equations of a nonrotating star in the general relativistic formalism given by Chandrasekhar (1964). The equation of state used here may be divided into three components. The equations of state used in the hadron sector have been derived in the non-relativistic Brueckner-Hartree-Fock (BHF) and the relativistic mean field (RMF) approaches. In the BHF method (Baldo, 1999), the Brueckner-Bethe-Goldstone formalism has been used with the realistic Paris two-body (Lacombe et al., 1980) and Urbana (Carlson et al., 1983; Schiavilla et al., 1986) three-body forces, to ensure the correct reproduction of the nuclear matter saturation properties. This procedure has been extended to asymmetric nuclear matter (Baldo et al., 1998; Baldo et al., 2000; Vidana et al., 2000), including hyperons, by implementing hyperon-nucleon potentials fitted to existing scattering data.
The RMF theory has been derived (Serot & Walecka, 1986) from a many-body Lagrangian density in the mean field approximation. The parameters are fitted in such a way that the correct values of the nuclear matter properties at the saturation point are reproduced (Ghosh et al., 1995; Sahu, 2000). Hyperons are also included in RMF at the mean field level for asymmetric nuclear matter, with a symmetry energy of around 30 MeV at saturation, the same as in the BHF theory. The nuclear incompressibility is around 260 MeV in the BHF theory and is taken to be the same in the RMF theory for the sake of compatibility. The quark matter equation of state carries a large uncertainty, due to model dependence and the choice of parameters. In the literature, there are models (Schertz et al., 1998) associated with different ad hoc parameters for the quark masses and the bag constant. We adopt here the simple MIT bag model (Chodos et al., 1974), with both density dependent and density independent bag constants. The density dependent bag constant is fixed according to the hypothesis of a constant energy density along the transition line, compatible with the CERN data. Several parametrizations have been considered in recent calculations (Burgio et al., 2001). For a Woods-Saxon parametrization of the density dependent bag constant, the onset of the quark phase in neutron stars takes place at about two times the saturation density, while for Gaussian-like parametrizations the onset occurs at lower density. To be specific we choose the Woods-Saxon parametrization, but the results are expected to be similar for the other parametrizations. Once the equations of state in both the hadron and the quark sector are well established, we can construct the mixed phase by assuming a first order phase transition. As pointed out by Glendenning (1992), a first order phase transition in neutron stars differs from one in ordinary matter because of the presence of two conserved charges, i.e. the baryon charge and the global electrical charge. As a consequence, the pressure in the mixed phase varies continuously with the baryon density and is not constant. The proportions of the hadron and quark components in the mixed phase are then calculated by imposing mechanical and chemical equilibrium, supplemented by the condition of global charge neutrality (Schertz et al., 1998; Burgio et al., 2001). Finally we can construct the total equation of state, which spans the hadron, mixed and quark phases. This is the main ingredient needed to calculate the frequencies of the radial modes of neutron stars. It has to be noticed that this equation of state, according to Glendenning's complete construction, includes the mixed phase, so that there is no sharp surface, at a given density (and radius), separating the hadron and quark phases. This is at variance with the calculations of Haensel et al. (1989) and represents one of the novelties of our calculations; see also Haensel et al. (1990). In agreement with this physical construction, during the star oscillations the transition from one phase to the other cannot be limited by diffusion. Since the time scales of the weak processes are surely much shorter than the period, matter will remain in beta equilibrium, and the calculated equation of state is the relevant one for the study of the density oscillations. The equation for infinitesimal radial pulsations of a nonrotating star was given by Chandrasekhar (1964) and, in the general relativity formalism, has the following form:

\\[X\\frac{d^{2}\\xi}{dr^{2}}+Y\\frac{d\\xi}{dr}+Z\\xi=\\sigma^{2}\\xi. \\tag{1}\\]
\tag{1}\] Here \(\xi(r)\) is the Lagrangian fluid displacement and \(c\sigma\) is the characteristic eigenfrequency (\(c\) is the speed of light). The quantities \(X\), \(Y\), \(Z\) depend on the equilibrium profiles of the pressure \(p\) and density \(\rho\) of the star and are given by \[X = \frac{-e^{-\lambda}e^{\nu}}{p+\rho c^{2}}\Gamma p, \tag{2}\] \[Y = \frac{-e^{-\lambda}e^{\nu}}{p+\rho c^{2}}\biggl\{\Gamma p\biggl(\frac{1}{2}\frac{d\nu}{dr}+\frac{1}{2}\frac{d\lambda}{dr}+\frac{2}{r}\biggr)+p\frac{d\Gamma}{dr}+\Gamma\frac{dp}{dr}\biggr\}, \tag{3}\] \[Z = \frac{e^{-\lambda}e^{\nu}}{p+\rho c^{2}}\biggl\{\frac{4}{r}\frac{dp}{dr}-\frac{(dp/dr)^{2}}{p+\rho c^{2}}-A\biggr\}+\frac{8\pi G}{c^{4}}e^{\nu}p. \tag{4}\] \(\Gamma\) is the adiabatic index, defined as \[\Gamma=(1+\rho c^{2}/p)\frac{dp}{d(\rho c^{2})}, \tag{5}\] and \[A = \frac{d\lambda}{dr}\frac{\Gamma p}{r}+\frac{2p}{r}\frac{d\Gamma}{dr}+\frac{2\Gamma}{r}\frac{dp}{dr}-\frac{2\Gamma p}{r^{2}}-\frac{1}{4}\frac{d\nu}{dr}\biggl(\frac{d\lambda}{dr}\Gamma p+2p\frac{d\Gamma}{dr}+2\Gamma\frac{dp}{dr}-\frac{8\Gamma p}{r}\biggr)-\frac{1}{2}\Gamma p\biggl(\frac{d\nu}{dr}\biggr)^{2}-\frac{1}{2}\Gamma p\frac{d^{2}\nu}{dr^{2}}. \tag{6}\] To solve the pulsation equation (1), the boundary conditions are \[\xi(r=0) = 0, \tag{7}\] \[\delta p(r=R) = -\xi\frac{dp}{dr}-\Gamma p\frac{e^{\nu/2}}{r^{2}}\frac{\partial}{\partial r}\bigl(r^{2}e^{-\nu/2}\xi\bigr)\biggr|_{r=R} = 0. \tag{8}\] It is important to note that \(\xi\) remains finite when \(p\) vanishes at \(r=R\). The pulsation equation (1) is a Sturm-Liouville eigenvalue equation for \(\sigma^{2}\), subject to the boundary conditions Eqs. (7) and (8). As a consequence the eigenvalues \(\sigma^{2}\) are all real and form an infinite discrete sequence \(\sigma^{2}_{0}<\sigma^{2}_{1}<\ldots<\sigma^{2}_{n}<\ldots\), with corresponding eigenfunctions \(\xi_{0}(r),\ \xi_{1}(r),\ \ldots,\ \xi_{n}(r)\), where \(\xi_{n}(r)\) has \(n\) nodes. It immediately follows that if the fundamental radial mode of a star is stable (\(\sigma^{2}_{0}>0\)), then all the radial modes are stable. We note that Eqs. (2)-(6) depend on the pressure and density profiles, as well as on the metric functions \(\lambda(r)\), \(\nu(r)\) of the nonrotating star configuration. Those profiles are obtained by solving the Oppenheimer-Volkoff equations of hydrostatic equilibrium (Misner et al., 1970) \[\frac{dp}{dr} = -\frac{G(\rho+p/c^{2})(m+4\pi r^{3}p/c^{2})}{r^{2}(1-2Gm/rc^{2})}, \tag{9}\] \[\frac{dm}{dr} = 4\pi r^{2}\rho, \tag{10}\] \[\frac{d\nu}{dr} = \frac{2G}{r^{2}c^{2}}\frac{(m+4\pi r^{3}p/c^{2})}{(1-2Gm/rc^{2})}, \tag{11}\] \[\lambda = -\ln(1-2Gm/rc^{2}). \tag{12}\] Eqs. (9)-(12) can be numerically integrated for a given equation of state \(p(\rho)\) and a given central density to obtain the radius \(R\) and gravitational mass \(M=m(R)\) of the star. Therefore the basic input to solve the structure and pulsation equations is the equation of state, \(p=p(\rho)\). It has been shown (Burgio et al., 2001) that the structure parameters of neutron stars are mainly determined by the equation of state at high densities, specifically around the core.
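As an illustration of how the structure equations feed into the pulsation problem, the following minimal Python sketch integrates Eqs. (9)-(12) outward from the center. It is only a sketch under simplifying assumptions: the polytropic EOS and its constants are illustrative placeholders standing in for the tabulated BHF/RMF (plus quark core) equations of state actually used here, and a simple Euler step replaces a production-quality integrator.

```python
import numpy as np

G, c, Msun = 6.674e-8, 2.998e10, 1.989e33   # cgs units

# Toy polytropic EOS p = K rho^2 (cgs); purely illustrative.  The actual
# calculation uses tabulated BHF/RMF (+ quark core) equations of state,
# which would replace these two functions by table interpolation.
K = 4.25e4
p_of_rho = lambda rho: K * rho**2
rho_of_p = lambda p: np.sqrt(p / K)

def tov(rho_c, dr=1.0e3):
    """Integrate Eqs. (9)-(11) outward from the center for a central
    density rho_c [g cm^-3]; returns the radial grid and the p, rho, m,
    nu profiles that enter the pulsation coefficients of Eqs. (2)-(6)."""
    r, p, m, nu = dr, p_of_rho(rho_c), 0.0, 0.0
    rs, ps, rhos, ms, nus = [], [], [], [], []
    while p > 1e22:                           # crude surface cutoff
        rho = rho_of_p(p)
        f = 1.0 - 2.0 * G * m / (r * c**2)
        dpdr = -G * (rho + p/c**2) * (m + 4*np.pi*r**3*p/c**2) / (r**2 * f)
        dmdr = 4.0 * np.pi * r**2 * rho
        dnudr = (2.0*G/(r**2*c**2)) * (m + 4*np.pi*r**3*p/c**2) / f
        rs.append(r); ps.append(p); rhos.append(rho); ms.append(m); nus.append(nu)
        p, m, nu, r = p + dpdr*dr, m + dmdr*dr, nu + dnudr*dr, r + dr
    rs, ms, nus = np.array(rs), np.array(ms), np.array(nus)
    # fix the additive constant in nu by matching the exterior metric, Eq. (12)
    nus += np.log(1.0 - 2.0*G*ms[-1]/(rs[-1]*c**2)) - nus[-1]
    return rs, np.array(ps), np.array(rhos), ms, nus

rs, ps, rhos, ms, nus = tov(rho_c=2.0e15)
print(f"R = {rs[-1]/1e5:.1f} km,  M = {ms[-1]/Msun:.2f} Msun")
```

Given these profiles, the eigenvalues \(\sigma^{2}\) would follow by discretizing the Sturm-Liouville problem (1) and shooting on \(\sigma^{2}\) until the surface condition (8) is satisfied; the period of each mode is then \(P=2\pi/c\sigma\).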
Since the oscillation features are governed by the structure profiles of neutron stars, they are expected to show a marked sensitivity to the high density equation of state as well. In this calculation we adopted the equations of state derived from the BHF theory in the non-relativistic limit (Baldo et al., 1998; Baldo et al., 2000) and from the RMF theory in the relativistic limit (Ghosh et al., 1995; Sahu, 2000), without a quark core. These are denoted by BHF and RMF, respectively. We also took the equations of state of both the BHF and RMF theories with a quark core, considering the Woods-Saxon-like density-dependent parametrization of the bag constant (Burgio et al., 2001) in the quark sector. They are labeled BHF+MW and RMF+MW. For completeness, another set of equations of state with a quark phase was constructed with a constant bag parameter in the quark sector (Datta et al., 1992), along with the BHF and RMF theories, where the value \(B\) = 110 MeV fm\({}^{-3}\) was chosen in order to ensure the presence of quark matter in the core. These are correspondingly labeled BHF+M and RMF+M. We employed these three sets of equations of state to calculate the oscillation period \(P\) (\(=2\pi/c\sigma\)) versus the gravitational mass \(M\) (in units of the solar mass \(M_{\odot}\)). The results are shown in figures 1-2. In both figures the upper panel shows the oscillation periods in seconds versus the total gravitational mass, while the lower panel displays the total gravitational mass versus the central density. For the sake of comparison, we also show the results for neutron stars without a quark core. If we carefully examine figure 1, we notice that the period of oscillations in BHF+MW and RMF+MW displays a small kink around the point where the mixed phase starts, in the primary as well as in the higher modes. In other words, the period increases and then decreases with respect to the usually decreasing trend observed in BHF and RMF. This happens within a small range of gravitational masses, \(0.4<M/M_{\odot}<0.7\), corresponding to central densities \(0.2<\rho_{c}/(10^{15}g~{}cm^{-3})<1.7\), where the mixed phase regions are located. The periods are slightly smaller than 0.4 ms in the BHF+MW and RMF+MW models compared with the periods in the BHF and RMF models for the fundamental mode, and a similar trend is seen in the higher modes. When we compare the BHF+M and RMF+M models with the BHF and RMF models in figure 2, we notice small kinks in the period of oscillation in the mixed phase regions, with gravitational masses \(0.5<M/M_{\odot}<1.2\) and corresponding central densities \(0.6<\rho_{c}/(10^{15}g\ cm^{-3})<1.5\). But these kinks are not as prominent as those seen in the BHF+MW and RMF+MW models. This is due to the fact that the transition from the hadron to the quark phase is smooth in the case of a constant bag parameter. However, the periods of oscillation within these models are comparable with the BHF and RMF models for both the primary (larger than 0.4 ms) and the higher modes. Thus significant kinks are observed in the realistic BHF+MW and RMF+MW models, because the bag constant is density dependent. The fundamental-mode oscillation periods for neutron stars with and without a quark core are found to lie in the range 0.2-0.6 ms, the only difference being a significant kink for the neutron stars with a quark phase.
A substantial difference (\(\sim 0.22\) ms) is observed in the fundamental-mode periods of oscillation between the neutron stars with quark cores and density-dependent bag parameters (BHF+MW and RMF+MW) and the normal neutron stars without a quark core (BHF and RMF), at the maximum gravitational mass limit (see figure 1). For the higher modes the periods are \(\leq 0.3\) ms in all cases. As another interesting point, we notice that all neutron stars with a quark core have maximum gravitational masses around \(1.5M_{\odot}\). In summary, we have presented a calculation of the oscillation periods of neutron stars using the radial pulsation equations of a nonrotating neutron star, as given by Chandrasekhar in the general relativistic formalism. To solve the radial pulsation equations, one needs the structure profiles of nonrotating neutron stars obtained with realistic equations of state. The equations of state used here were derived from the non-relativistic and relativistic formalisms with a quark phase at higher densities. Since quark matter is not well established, we explored quark matter parameters compatible with the heavy-ion experiments at the point of possible formation of a quark-gluon plasma. The equation of state was then constructed by using the Glendenning conditions of mechanical and chemical equilibrium as functions of the baryon and electron densities in the mixed phase, comprising hadron, mixed, and quark phases. The main conclusion of our work is that the period of oscillations shows a significant kink as a function of the gravitational mass if one uses a realistic equation of state with a density-dependent bag constant. Kinks of this type are not present in conventional neutron stars, constituted only by hadrons. These kinks can be considered a distinct signature of the onset of quark matter in neutron stars.

## References

* Baldo, M., Burgio, G. F., & Schulze, H.-J. 1998, Phys.Rev.C, 58, 3688
* Baldo, M. 1999, in Nuclear Methods and The Nuclear Equation of State, ed. M. Baldo (Singapore: World Scientific), 1
* Baldo, M., Burgio, G. F., & Schulze, H.-J. 2000, Phys.Rev.C, 61, 055801
* Blaizot, J. P. 2001, preprint (nucl-th/0107025)
* Burgio, G. F., Baldo, M., Sahu, P. K., Santra, A. B., & Schulze, H.-J. 2001, Phys.Lett.B, submitted
* Cameron, A. G. W. 1965, Nature, 205, 787
* Carlson, J., Pandharipande, V. R., & Wiringa, R. B. 1983, Nucl.Phys.A, 401, 59
* Chandrasekhar, S. 1964, ApJ, 140, 417
* Chodos, A., Jaffe, R. L., Johnson, K., Thorn, C. B., & Weisskopf, V. F. 1974, Phys.Rev.D, 9, 3471
* Cutler, C., Lindblom, L., & Splinter, R. J. 1990, ApJ, 363, 603
* Datta, B., Sahu, P. K., Anand, J. D., & Goyal, A. 1992, Phys.Lett.B, 283, 313
* Datta, B., Hasan, S. S., Sahu, P. K., & Prasanna, A. R. 1998, Int.Jour.Mod.Phys.D, 7, 49
* Ghosh, S. K., Phatak, S. C., & Sahu, P. K. 1995, Z.Phys.A, 352, 457
* Glendenning, N. K. 1992, Phys.Rev.D, 46, 1274
* Haensel, P., Zdunik, J. L., & Schaeffer, R. 1989, A&A, 217, 137
* Haensel, P., Denissov, A., & Popov, S. 1990, A&A, 240, 78
* Heinz, U., & Jacobs, M. 2000, preprint (nucl-th/0002042)
* Heinz, U. 2000, preprint (hep-ph/0009170)
* Lacombe, M., Loiseau, B., Richard, J. M., Vinh Mau, R., Cote, J., Pires, P., & de Tourreil, R. 1980, Phys.Rev.C, 21, 861
* Misner, C. W., Thorne, K. S., & Wheeler, J. A. 1970, Gravitation (San Francisco: W. H. Freeman)
* Sahu, P. K. 2000, Phys.Rev.C, 62, 045801
* Schertler, K., Greiner, C., Sahu, P. K., & Thoma, M.
H. 1998, Nucl.Phys.A, 637, 451
* Schiavilla, R., Pandharipande, V. R., & Wiringa, R. B. 1986, Nucl.Phys.A, 449, 219
* Serot, B. D., & Walecka, J. D. 1986, Adv.Nucl.Phys., 16, 1
* Vidana, I., Polls, A., Ramos, A., Engvik, L., & Hjorth-Jensen, M. 2000, Phys.Rev.C, 62, 035801

Figure 1: In the upper panels the oscillation period in seconds is displayed vs. the gravitational mass in units of the solar mass, whereas in the lower panels the gravitational mass is shown vs. the central density. Panels (a) and (b) show results for the BHF calculation for purely hadronic (solid line) and mixed hadron-quark matter (dotted line). The bag constant is assumed to be density dependent. Labels 1, 2, etc. indicate the higher modes. Panels (c) and (d) correspond to RMF calculations.

Figure 2: Same as figure 1, but for a density-independent bag constant. See text for details.
We make a first calculation of eigenfrequencies of radial pulsations of neutron stars with quark cores in a general relativistic formalism given by Chandrasekhar. The equations of state (EOS) used to estimate such eigenfrequencies have been derived by taking proper care of the hadron-quark phase transition. The hadronic EOS's have been obtained in the framework of the Brueckner-Hartree-Fock and relativistic mean field theories, whereas the quark EOS has been derived within the MIT bag model. We find that the periods of oscillations of neutron stars with a quark core show a kink, which is associated with the presence of a mixed phase region. Also, oscillation periods show significant differences between ordinary neutron stars and neutron stars with dynamical quark phases.
# Maximum mass of neutron stars with a quark core

_G. F. Burgio\({}^{1}\), M. Baldo\({}^{1}\), P. K. Sahu\({}^{1}\), A. B. Santra\({}^{2}\) and H.-J. Schulze\({}^{3}\)_ \({}^{1}\)Istituto Nazionale di Fisica Nucleare, Sezione di Catania Corso Italia 57, I-95129 Catania, Italy \({}^{2}\)Nuclear Physics Division, Bhabha Atomic Research Center, Mumbai 400 085, India \({}^{3}\)Departament d'Estructura i Constituents de la Materia, Universitat de Barcelona, Av. Diagonal 647, E-08028 Barcelona, Spain

An ongoing active research area, both theoretical and experimental, concerns the properties of matter under extreme conditions of density and temperature, and the determination of the EOS associated with it. Its knowledge is of key importance for building models of neutron stars (NS's) [1]. The observed NS masses are typically \(\approx(1-2)M_{\odot}\) (where \(M_{\odot}\) is the mass of the sun, \(M_{\odot}=1.99\times 10^{33}\)g), and the radius is of the order of 10 km. The matter in the core possesses densities ranging from a few times \(\rho_{0}\) (\(\approx 0.17\) fm\({}^{-3}\), the normal nuclear matter density) to one order of magnitude higher. Therefore, a detailed knowledge of the EOS is required for densities \(\rho\gg\rho_{0}\), where a description of matter only in terms of nucleons and leptons may be inadequate. In fact, at densities \(\rho\gg\rho_{0}\) several species of other particles, such as hyperons and \(\Delta\) isobars, may appear, and meson condensations may take place; also, ultimately, at very high densities, nuclear matter is expected to undergo a transition to a quark-gluon plasma [2]. However, the exact value of the transition density to quark matter is unknown and still a matter of recent debate. In this letter, we propose to constrain the maximum mass of neutron stars by taking into account the phase transition from hadronic matter to quark matter inside the neutron star. For this purpose, we describe the hadron phase of matter by using two different equations of state, _i.e._ a microscopic EOS obtained in the Brueckner-Bethe-Goldstone (BBG) theory [3], and a more phenomenological relativistic mean field model [4]. The deconfined quark phase is treated within the popular MIT bag model [5]. The bag constant, \(B\), which is a parameter of the bag model, is constrained to be compatible with the recent experimental results obtained at CERN on the formation of a quark-gluon plasma [6], recently confirmed by preliminary RHIC results [7]. This statement requires some clarification. In general, it is not obvious whether the information on the nuclear EOS from high-energy heavy-ion collisions can be related to the physics of neutron star interiors. The possible quark-gluon plasma produced in heavy-ion collisions is expected to be characterized by small baryon density and high temperature, while the possible quark phase in neutron stars appears at high baryon density and low temperature. However, if one adopts for the hadronic phase a non-interacting gas model of nucleons, antinucleons and pions, the original MIT bag model predicts that the deconfined phase occurs at an almost constant value of the quark-gluon energy density, irrespective of the thermodynamical conditions of the system [8]. For this reason, it is popular to draw the transition line between the hadronic and quark phase at a constant value of the energy density, which was estimated to fall in the interval between 0.5 and 2 GeV fm\({}^{-3}\) [9].
This is consistent with the value of about 1 GeV fm\({}^{-3}\) reported by the CERN experiments. In this exploratory work we will assume that this is still valid, at least approximately, when correlations in the hadron phase are present. We will then study the predictions that one can draw from this hypothesis on neutron star structure. Any observational data on neutron stars in disagreement with these predictions would give an indication of the accuracy of this assumption. Indeed, the hadron phase EOS can be considered well established. The main uncertainty is contained in the quark phase EOS, since it can currently be described only by phenomenological models which contain a few adjustable parameters. In the case of the MIT bag model, which is adopted in this work, the parameters are fixed to be compatible with the CERN data, according to the hypothesis of a constant energy density along the transition line. In practice, this means that all our calculations can be limited to zero temperature. We start with the description of the hadronic phase. It has been shown that the non-relativistic BBG expansion is well convergent [10], and the Brueckner-Hartree-Fock (BHF) level of approximation is accurate in the density range relevant for neutron stars. In the calculations reported here we have used the Paris potential [11] as the two-nucleon interaction and the Urbana model as three-body force [12]. This allows the correct reproduction of the empirical nuclear matter saturation point \(\rho_{0}\) [13]. Recently the above procedure has been extended to the case of asymmetric nuclear matter including hyperons [14, 15] by utilizing hyperon-nucleon potentials that are fitted to the existing scattering data. To complete our analysis, we will also consider a hadronic EOS derived from the relativistic mean field (RMF) model [16]. The BHF and the RMF EOS are both shown in Fig. 1. The parameters of the RMF model have been taken in such a way that the compressibility at saturation is around 260 MeV, the same as in the BHF calculations and close to estimates from monopole oscillations in nuclei [17]. The symmetry energy is also quite similar for the two EOS, about 30 MeV at saturation. For the deconfined quark phase, within the MIT bag model [5], the total energy density is the sum of a non-perturbative energy shift \(B\), the bag constant, and the kinetic energy of non-interacting massive quarks of flavor \(f\) with mass \(m_{f}\) and Fermi momentum \(k_{F}^{(f)}\) [\(=(\pi^{2}\rho_{f})^{1/3}\), where \(\rho_{f}\) is the number density of quarks of flavor \(f\)] \[\frac{E}{V}=B+\sum_{f}\frac{3m_{f}^{4}}{8\pi^{2}}\Big{[}x_{f}\sqrt{x_{f}^{2}+1} \left(2x_{f}^{2}+1\right)-\sinh^{-1}x_{f}\Big{]}\,, \tag{1}\] where \(x_{f}=k_{F}^{(f)}/m_{f}\). We consider in this work massless \(u\) and \(d\) quarks, whereas the \(s\) quark mass is taken equal to 150 MeV. The bag constant \(B\) can be interpreted as the difference between the energy densities of the perturbative vacuum and the physical vacuum. Inclusion of perturbative interaction among quarks introduces additional terms in the thermodynamic potential [18] and hence in the number density and the energy density; however, when taken into account to first order in the strong coupling constant, these terms do not change our results appreciably. Therefore, in order to calculate the EOS for quark matter we restrict ourselves to Eq. (1).
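As a concrete check of Eq. (1), the following minimal Python sketch evaluates the quark energy density; the flavor densities in the example call correspond to \(ud\) matter with proton fraction \(x_{p}=0.4\) at the transition density discussed below, while the default value of \(B\) is only an illustrative placeholder.

```python
import numpy as np

hbar_c = 197.327  # MeV fm

def quark_energy_density(rho_u, rho_d, rho_s=0.0, B=50.0, m_s=150.0):
    """Energy density (MeV fm^-3) of free quarks plus the bag constant B,
    i.e. Eq. (1); quark number densities rho_f in fm^-3, u and d massless,
    m_s = 150 MeV as in the text.  The default B is a placeholder."""
    E = B
    for rho in (rho_u, rho_d):                          # massless flavors
        kF = (np.pi**2 * rho)**(1.0 / 3.0) * hbar_c     # Fermi momentum, MeV
        E += 3.0 * kF**4 / (4.0 * np.pi**2 * hbar_c**3) # massless limit of Eq. (1)
    if rho_s > 0.0:                                     # massive strange quark
        kF = (np.pi**2 * rho_s)**(1.0 / 3.0) * hbar_c
        x = kF / m_s
        E += (3.0 * m_s**4 / (8.0 * np.pi**2 * hbar_c**3)) * (
            x * np.sqrt(x**2 + 1.0) * (2.0 * x**2 + 1.0) - np.arcsinh(x))
    return E

# ud matter with x_p = 0.4 (rho_u = 1.4 rho_B, rho_d = 1.6 rho_B) at
# rho_B = 0.98 fm^-3 gives roughly the ~1.1 GeV fm^-3 crossing energy density
rho_B = 0.98
print(quark_energy_density(1.4 * rho_B, 1.6 * rho_B))   # ~1.1e3 MeV fm^-3
```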
In the original MIT bag model the bag constant has the value \(B\approx 55\,{\rm MeV\,fm^{-3}}\), which is quite small when compared with the value (\(\approx 210\,{\rm MeV\,fm^{-3}}\)) estimated from lattice calculations [19]. In this sense \(B\) can be considered a free parameter. We try to determine a range of possible values for \(B\) by exploiting the experimental data obtained at the CERN SPS, where several experiments using high-energy beams of Pb nuclei reported (indirect) evidence for the formation of a quark-gluon plasma [6]. The resulting picture is the following: during the early stages of the heavy-ion collision, a very hot and dense state (fireball) is formed whose energy materializes in the form of quarks and gluons strongly interacting with each other, exhibiting features consistent with expectations from a plasma of deconfined quarks and gluons [20]. Subsequently, the "plasma" cools down and becomes more dilute up to the point where, at an energy density of about \(1\,{\rm GeV\,fm^{-3}}\) and temperature \(T\approx 170\,{\rm MeV}\), the quarks and gluons hadronize. The expansion is fast enough so that no mixed hadron-quark equilibrium phase is expected to occur, and no weak process can play a role. According to the analysis of those experiments, the quark-hadron transition takes place at about seven times the normal nuclear matter energy density (\(\epsilon_{0}\approx 156\,{\rm MeV\,fm^{-3}}\)). In the MIT bag model, the structure of the QCD phase diagram in the chemical potential and temperature plane is determined by only one parameter, \(B\), although the phase diagram for the transition from nuclear matter to quark matter is schematic and not yet completely understood, particularly in the light of recent investigations of a color superconducting phase of quark matter [21]. As discussed above, in our analysis we assume that the transition to the quark-gluon plasma is determined by the value of the energy density only (for a given asymmetry). With this assumption, and taking the hadron to quark matter transition energy density from the CERN experiments, we estimate the value of \(B\) and its possible density dependence as follows. First, we calculate the EOS for cold asymmetric nuclear matter characterized by a proton fraction \(x_{p}=0.4\) (the one for Pb nuclei accelerated at CERN-SPS energies) in the BHF formalism with two-body and three-body forces as described earlier. The result is shown by the solid line in Fig. 1a). Then we calculate the EOS for \(u\) and \(d\) quark matter using Eq. (1). We find that at very low baryon density the quark matter energy density is higher than that of nuclear matter, while with increasing baryon density the two energy densities become equal at a certain point [indicated in Fig. 1a) by the full dot], and after that the nuclear matter energy density always remains higher. We identify this crossing point with the transition density from nuclear matter to quark matter. To be more precise, this crossing fixes the density interval where the phase transition takes place. In fact, according to the Gibbs construction, the crossing must be located at the center of the mixed phase region, if it is present. To be compatible with the experimental observation at the CERN-SPS, we require that this crossing point corresponds to an energy density of \(E/V\approx 7\epsilon_{0}\approx 1.1\,\mbox{GeV}\,\mbox{fm}^{-3}\).
However, for no density-independent value of \(B\) do the two EOS cross each other while satisfying the above condition. Therefore, we try a density-dependent \(B\). In the literature there are attempts to understand the density dependence of \(B\) [22]; however, currently the results are highly model dependent and no definite picture has emerged yet. Therefore, we attempt to provide effective parametrizations for this density dependence, trying to cover a wide range by considering some extreme choices. Our parametrizations are constructed in such a way that at asymptotic densities \(B\) has some finite value \(B_{as}\). We have found \(B_{as}=50\,\mbox{MeV}\,\mbox{fm}^{-3}\) for the BHF case, but have verified that our results do not change appreciably by varying this value, since at large densities the quark matter EOS is dominated by the kinetic term on the RHS of Eq. (1). First, we use a Gaussian parametrization given as \[B(\rho)=B_{as}+(B_{0}-B_{as})\exp\left[-\beta\Bigl{(}\frac{\rho}{\rho_{0}} \Bigr{)}^{2}\right]\;. \tag{2}\] The parameter \(\beta\) has been fixed by equating the quark matter energy density from Eq. (1) with the nucleonic one at the desired transition density \(\rho_{c}=0.98\,\mbox{fm}^{-3}\) (represented by the full dot in Fig. 1a)), i.e. at an energy density \(E/V\approx 1.1\,{\rm GeV\,fm}^{-3}\). Therefore \(\beta\) will depend only on the free parameter \(B_{0}=B(\rho=0)\). However, the exact value of \(B_{0}\) is not very relevant for our purpose, since at low density the matter is in any case in the nucleonic phase. We attempt to cover the typical range by using the values \(B_{0}=200\,{\rm MeV\,fm}^{-3}\) and \(400\,{\rm MeV\,fm}^{-3}\), as shown in Fig. 1c). We also use another extreme, Woods-Saxon-like parametrization, \[B(\rho)=B_{as}+(B_{0}-B_{as})\left[1+\exp\left(\frac{\rho-\bar{\rho}}{\rho_{d} }\right)\right]^{-1}\;, \tag{3}\] where \(B_{0}\) and \(B_{as}\) have the same meaning as in Eq. (2), and \(\bar{\rho}\) has been fixed in the same way as \(\beta\) in the previous parametrization. For \(B_{0}=400\,{\rm MeV\,fm}^{-3}\), we get \(\bar{\rho}=0.8\,{\rm fm}^{-3}\) for \(\rho_{d}=0.03\,{\rm fm}^{-3}\). With this parametrization \(B\) remains practically constant at the value \(B_{0}\) up to a certain density and then drops to \(B_{as}\) almost like a step function, as shown by the long-dashed curve in Fig. 1c). It is an extreme parametrization in the sense that it delays the onset of the quark phase in neutron star matter as much as possible. Both parametrizations, Eqs. (2) and (3), yield the transition from nuclear matter to quark matter at an energy density compatible with the experiments. The same procedure has been followed for the RMF EOS, see Figs. 1b) and 1d). In this case the parameter \(B_{as}\) is slightly smaller, about 38 MeV fm\({}^{-3}\). With these parametrizations of the density dependence of \(B\) we now consider the hadron-quark phase transition in neutron stars. We calculate in the BHF framework and in the RMF approach the EOS of a conventional neutron star, composed of a chemically equilibrated and charge-neutral mixture of nucleons, leptons and hyperons. The result is shown by the solid lines in Figs. 2a) and 2b), respectively. The other curves (with the same notation as in Fig. 1) represent the EOS for beta-stable and charge-neutral quark matter.
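A minimal Python sketch of the two parametrizations, Eqs. (2) and (3), is given below; the Woods-Saxon parameters are the values quoted in the text, while the bag value \(B_{c}=B(\rho_{c})\) used to fix \(\beta\) is a placeholder, since in practice it comes from matching the quark and nucleonic energy densities at \(\rho_{c}\).

```python
import numpy as np

rho0 = 0.17  # fm^-3, nuclear matter saturation density

def B_gaussian(rho, B0, B_as=50.0, beta=0.1):
    """Eq. (2): Gaussian density dependence of the bag constant (MeV fm^-3)."""
    return B_as + (B0 - B_as) * np.exp(-beta * (rho / rho0) ** 2)

def B_woods_saxon(rho, B0, B_as=50.0, rho_bar=0.8, rho_d=0.03):
    """Eq. (3): Woods-Saxon-like density dependence; rho_bar and rho_d are
    the values quoted in the text for B0 = 400 MeV fm^-3 (BHF case)."""
    return B_as + (B0 - B_as) / (1.0 + np.exp((rho - rho_bar) / rho_d))

def beta_from_crossing(B0, B_as, B_c, rho_c=0.98):
    """Invert Eq. (2) so that B(rho_c) = B_c, where B_c is the bag value at
    which the quark and nucleonic energy densities cross at rho_c; B_c must
    come from the EOS comparison, so the example value below is a placeholder."""
    return (rho0 / rho_c) ** 2 * np.log((B0 - B_as) / (B_c - B_as))

beta = beta_from_crossing(B0=400.0, B_as=50.0, B_c=60.0)   # B_c = 60: placeholder
print(beta, B_gaussian(0.98, 400.0, beta=beta))            # recovers B ~ 60 at rho_c
```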
We determine the range of baryon density where both phases can coexist by following the construction of ref. [23]. In this procedure both the hadron and the quark phase are allowed to be charged, still preserving the total charge neutrality. The pressure is the same in the two phases to ensure mechanical stability, while the chemical potentials of the different species are related to each other to ensure chemical and beta stability. The resulting EOS for neutron star matter, according to the different bag parametrizations, is reported in Fig. 3, where the shaded area indicates the mixed phase region. A pure quark phase is present at densities above the shaded area, and a pure hadronic phase below it. The onset density of the mixed phase turns out to be slightly smaller than the density for hyperon formation in pure hadronic matter. Of course, hyperons are still present in the hadron component of the mixed phase. For the Woods-Saxon parametrization of the bag constant the mixed phase persists up to high baryon density. As previously anticipated, this is in agreement with the delayed crossing of the energy density curves for the hadron and quark phases, as can be seen from Fig. 2. Finally, we solve the Tolman-Oppenheimer-Volkoff equations [1] for the mass of neutron stars with the EOS of Fig. 3 as input. The calculated results, the NS mass vs. central density, are shown for all cases in Figs. 4a) and 4b). The EOS with nucleons, leptons and hyperons gives a maximum neutron star mass of about \(1.26\,M_{\odot}\) in the BHF case. In the case of the RMF model, the corresponding EOS produces values of the maximum mass close to \(1.7\,M_{\odot}\). It is commonly believed that the inclusion of the quark component should soften the NS matter EOS. This is indeed the case in the RMF model, as is apparent in Fig. 4b). However, the situation is reversed in the BHF case, where the EOS becomes, on the contrary, stiffer. Correspondingly, the inclusion of the quark component has the effect of increasing the maximum mass in the BHF case and of decreasing it in the RMF case. As a consequence, the calculated maximum masses fall in any case in a relatively narrow range, \(1.45\,M_{\odot}\leq M_{\rm max}\leq 1.65\,M_{\odot}\), slightly above the observational lower limit of \(1.44\,M_{\odot}\) [24]. As one can see from Fig. 4, the presence of a mixed phase produces a sort of plateau in the mass vs. central density relationship, which is a direct consequence of the smaller slope displayed by all EOS in the mixed phase region, see Fig. 3. In this region, however, the pressure is still increasing monotonically, despite the apparent smooth behaviour, and no unstable configuration can actually appear. We found that the appearance of this slow variation of the pressure is due to the density dependence of the bag constant, in particular to the occurrence of the density derivative of the bag constant in the pressure and chemical potentials, as required by thermodynamic consistency. To illustrate this point we calculated the EOS for quark matter with a density-independent value of \(B=90\) MeV fm\({}^{-3}\), see Fig. 5, and the corresponding neutron star masses. The EOS is now quite smooth, and the mass vs. central density shows no indication of a plateau. More details on this point will be given elsewhere [25].
Finally, it has to be pointed out that the maximum mass value, whether \(B\) is density dependent or not, is dominated by the quark EOS at densities where the bag constant is much smaller than the quark kinetic energy. The constraint coming from heavy-ion reactions, as discussed above, is relevant only to the extent that it restricts \(B\) at high density to a range of values commonly used in the literature. This can also be seen from Fig. 5, where the (density-independent) value of \(B=90\,{\rm MeV\,fm}^{-3}\) again produces a maximum mass around 1.5 solar masses. In conclusion, under our hypothesis, we found first that a density-dependent \(B\) is necessary to understand the CERN-SPS findings on the phase transition from hadronic matter to quark matter. Then, taking this observation into account, we calculated NS maximum masses, using an EOS which combines reliable EOSs for hadronic matter with a bag model EOS for quark matter. The calculated maximum NS masses lie in a narrow range in spite of using very different parametrizations of the density dependence of \(B\). Other recent calculations of neutron star properties employing various RMF nuclear EOSs together with either effective-mass bag model [26] or Nambu-Jona-Lasinio model [27] EOSs for quark matter also give maximum masses of only about \(1.7\,M_{\odot}\), even though they are not constrained to reproduce simultaneously the CERN-SPS data. The value of the maximum mass of neutron stars obtained according to our analysis appears robust with respect to the uncertainties of the nuclear EOS. Therefore, the experimental observation of a heavy (\(M>1.6M_{\odot}\)) neutron star, as claimed recently by some groups [28] (\(M\approx 2.2M_{\odot}\)), if confirmed, would suggest mainly two possibilities. Either serious problems are present in the current theoretical modelling of the high-density phase of nuclear matter, or the working hypothesis that the transition to the deconfined phase occurs approximately at the same energy density, irrespective of the thermodynamical conditions, is substantially wrong. In both cases, one can expect a well-defined hint on the high-density nuclear matter EOS. This work was supported in part by the programs "Estancias de cientificos y tecnologos extranjeros en Espana", SGR98-11 (Generalitat de Catalunya), and DGICYT (Spain) No. PB98-1247.

## References

* [1] S. L. Shapiro and S. A. Teukolsky, _Black Holes, White Dwarfs, and Neutron Stars_ (John Wiley & Sons, New York, 1983).
* [2] E. Witten, Phys. Rev. **D30**, 272 (1984); G. Baym, E. W. Kolb, L. McLerran, T. P. Walker, and R. L. Jaffe, Phys. Lett. **B160**, 181 (1985); N. K. Glendenning, Mod. Phys. Lett. **A5**, 2197 (1990).
* [3] M. Baldo, _Nuclear Methods and the Nuclear Equation of State_ (World Scientific, Singapore, 1999).
* [4] B. D. Serot and J. D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986).
* [5] A. Chodos, R. L. Jaffe, K. Johnson, C. B. Thorn, and V. F. Weisskopf, Phys. Rev. **D9**, 3471 (1974).
* [6] U. Heinz and M. Jacobs, nucl-th/0002042; U. Heinz, hep-ph/0009170.
* [7] See for instance "Theoretical Conference Summary", Quark Matter 2001, J.P. Blaizot, nucl-th/0107025.
* [8] J. Cleymans, R.V. Gavai and E. Suhonen, Physics Rep. **130**, 217 (1986).
* [9] B. Muller, Lecture Notes in Physics 225, Springer (1985).
* [10] H. Q. Song, M. Baldo, G. Giansiracusa, and U. Lombardo, Phys. Rev. Lett. **81**, 1584 (1998), Phys. Lett. **473**, 1 (2000); M. Baldo and G. F.
Burgio, _Microscopic Theory of the Nuclear Equation of State and Neutron Star Structure_, in "Physics of Neutron Star Interiors", Eds. D. Blaschke, N. Glendenning, and A. Sedrakian, Lecture Notes in Physics, Springer, vol. 578 (2001), pp. 1-30.
* [11] M. Lacombe, B. Loiseau, J. M. Richard, R. Vinh Mau, J. Cote, P. Pires, and R. de Tourreil, Phys. Rev. **C21**, 861 (1980).
* [12] J. Carlson, V. R. Pandharipande, and R. B. Wiringa, Nucl. Phys. **A401**, 59 (1983); R. Schiavilla, V. R. Pandharipande, and R. B. Wiringa, Nucl. Phys. **A449**, 219 (1986).
* [13] M. Baldo, I. Bombaci, and G. F. Burgio, Astron. Astrophys. **328**, 274 (1997).
* [14] M. Baldo, G. F. Burgio, and H.-J. Schulze, Phys. Rev. **C58**, 3688 (1998); Phys. Rev. **C61**, 055801 (2000).
* [15] I. Vidana, A. Polls, A. Ramos, L. Engvik, and M. Hjorth-Jensen, Phys. Rev. **C62**, 035801 (2000).
* [16] S. K. Ghosh, S. C. Phatak, and P. K. Sahu, Z. Phys. **A352**, 457 (1995); P. K. Sahu, Phys. Rev. **C62**, 045801 (2000).
* [17] D. Galetti and A. F. R. de Toledo Piza, J. Phys. **G27**, 33 (2001).
* [18] E. Farhi and R. L. Jaffe, Phys. Rev. **D30**, 2379 (1984).
* [19] H. Satz, Phys. Rep. **89**, 349 (1982).
* [20] J. Rafelski and B. Muller, Phys. Rev. Lett. **48**, 1066 (1982); T. Matsui and H. Satz, Phys. Lett. **B178**, 416 (1986).
* [21] K. Rajagopal, Nucl. Phys. **A661**, 150c (1999), and references therein.
* [22] C. Adami and G. E. Brown, Phys. Rep. **234**, 1 (1993); Xue-min Jin and B. K. Jennings, Phys. Rev. **C55**, 1567 (1997).
* [23] N. K. Glendenning, Phys. Rev. **D46**, 1274 (1992).
* [24] R. A. Hulse and J. H. Taylor, Astrophys. J. **195**, L51 (1975).
* [25] G. Burgio, M. Baldo, and P. K. Sahu, to be published.
* [26] K. Schertler, C. Greiner, P. K. Sahu, and M. H. Thoma, Nucl. Phys. **A637**, 451 (1998); K. Schertler, C. Greiner, J. Schaffner-Bielich, and M. H. Thoma, Nucl. Phys. **A677**, 463 (2000).
* [27] K. Schertler, S. Leupold, and J. Schaffner-Bielich, Phys. Rev. **C60**, 025801 (1999).
* [28] P. Kaaret, E. Ford, and K. Chen, Astrophys. J. Lett. **480**, L27 (1997); W. Zhang, A. P. Smale, T. E. Strohmayer, and J. H. Swank, Astrophys. J. Lett. **500**, L171 (1998).

**Figure captions.**

Figure 1: (a,b) The energy density \(E/V\) vs. the baryon density \(\rho\) for nuclear matter and quark matter of charge fraction \(x_{p}=0.4\). The dot indicates the common intersection of the curves. (c,d) Density dependence of the bag constant \(B\) (see text for details).

Figure 2: The energy density vs. baryon density for pure hadron matter (full lines) for the BHF (left panel) and RMF (right panel) schemes, in comparison with the quark energy densities (broken lines) with different parametrizations of the bag constant.

Figure 3: Total EOS including both hadronic and quark components. Different prescriptions for the quark phase are considered, see the text and Figs. 1 and 2. For the hadron component the BHF (left panel) and the RMF (right panel) schemes are considered. In all cases the shaded region indicates the mixed phase MP, while HP and QP label the portions of the EOS where the pure hadron and pure quark phases, respectively, are present.

Figure 4: The gravitational mass of neutron stars vs. the central density for the EOS shown in Fig. 3.

Figure 5: The left panel shows the EOS for neutron star matter (dashed lines labeled HP + QP) for a density-independent value of the bag constant \(B=90\,{\rm MeV\,fm}^{-3}\), with BHF (a) and RMF (c) hadron equations of state.
The shaded areas indicate the mixed phase region. The corresponding masses vs. central density are shown in the right panels. In all cases the thin and thick lines correspond to the results obtained for the pure quark and pure hadron EOS, respectively.
Massive neutron stars (NS) are expected to possess a quark core. While the hadronic side of the NS equation of state (EOS) can be considered well established, the quark side is quite uncertain. For the EOS of hadronic matter we have used the Brueckner-Bethe-Goldstone formalism with realistic two-body and three-body forces, as well as a relativistic mean field model. For quark matter we employ the MIT bag model, constraining the bag constant by exploiting the recent experimental results obtained at CERN on the formation of a quark-gluon plasma. We calculate the structure of NS interiors with the EOS comprising both phases, and we find that the NS maximum masses fall in a relatively narrow interval, \(1.45\,M_{\odot}\leq M_{\rm max}\leq 1.65\,M_{\odot}\), near the lower limit of the observational range.
# Solar Activity and Cloud Opacity Variations: A Modulated Cosmic-Ray Ionization Model

David Marsden Scripps Institution of Oceanography University of California, San Diego 9500 Gilman Dr., Dept. 0242 La Jolla, California 92093-0242 email: [email protected] Richard E. Lingenfelter Center for Astrophysics and Space Sciences University of California, San Diego

###### Introduction

The primary source of energy for the Earth's atmosphere is the Sun, so it is reasonable to explore whether changes in the global climate result from solar variability. It was first suggested by the astronomer William Herschel (Herschel 1801) that variations in the solar irradiance caused by sunspots could lead to climatic changes on Earth, and he cited the variation of British wheat prices with sunspot number as evidence for this link. The occurrence of the "Little Ice Age" during the 1645-1715 Maunder sunspot minimum (Eddy 1976), the correlation between the long-term solar cycle variations and tropical sea surface temperatures (Reid 1987), polar stratospheric temperatures (Labitzke 1987), and the width of tree rings (Zhou and Butler 1998), along with many other studies, also support a link between solar variations and the Earth's climate. A direct link between the Sun and these phenomena is tenuous, however, because the magnitude of the solar irradiance variation over the 11-year solar cycle is very small. Over the 1979-1990 solar cycle, for example, the variation in the irradiance was only \(\sim 0.1\%\) (Frohlich 2000), or \(\sim 0.3\) W m\({}^{-2}\) globally-averaged at the top of the atmosphere. This is insufficient to power the sea surface temperature changes associated with the solar cycle by a factor of \(3-5\) (Lean 1997), and is significantly smaller than the globally-averaged forcings due to clouds (\(\sim 28\) W m\({}^{-2}\); e.g. Hartmann 1993), anthropogenic greenhouse gases (\(\sim 2\) W m\({}^{-2}\); Wigley and Raper 1992), and anthropogenic aerosols (\(\sim 0.3-2.0\) W m\({}^{-2}\); Charlson et al. 1992; Kiehl and Briegleb 1993), suggesting that any direct atmospheric forcing from solar irradiance variations would be relatively unimportant. An indirect link between solar cycle variations and the Earth's climate appears more likely, especially given the discovery of a link between the flux of Galactic cosmic rays (GCRs) and global cloudiness (Svensmark and Friis-Christensen 1997) in the ISCCP cloud database (Rossow and Schiffer 1999). The Sun modulates the GCR flux at the Earth through the action of the solar wind, which scatters and attenuates the GCRs in times of heightened solar activity (solar maximum; e.g. Jokipii 1971). Using 3.7 \(\mu\)m infrared (IR) cloud amounts from the ISCCP database for the years 1983-1993, Marsh and Svensmark (2000) and Palle Bago and Butler (2000) showed that there is evidence of a positive GCR-cloud correlation only for low (\(<\) 3 km) clouds, and that the effect of the cosmic rays on global cloud amount appears to be greatest at the low to mid latitudes. The globally-averaged forcing due to the increase in low clouds associated with the solar cycle GCR variations is estimated (Kirkby and Laaksonen 2000) to be approximately \(-\)1.2 W m\({}^{-2}\), which is sufficient to power the sea surface temperature variations (Lean 1997). This is also comparable in magnitude (but opposite in sign) to the forcing due to anthropogenic CO\({}_{2}\) emission over the last century (Svensmark and Friis-Christensen 1997).
Decreasing local cloud amounts correlated with short-term Forbush decreases in cosmic-ray rates were observed by Pudovkin and Veretenko (1995). The reality of the GCR-cloud connection has been questioned by a number of authors (Kernthaler, Toumi, and Haigh 1999; Jorgensen and Hansen 2000; Norris 2000). These objections can be distilled into three main points: 1) the GCR-cloud correlation should be seen prominently in high (cirrus) clouds at high latitudes where the cosmic-ray intensity is highest, 2) the increased cloudiness can be more plausibly attributed to other phenomena instead of GCRs, and 3) the correlation is an artifact of the ISCCP analysis. The first objection is addressed by the theory of ion-mediated nucleation (IMN: Yu and Turco 2001; Yu 2002), in which the efficiency of the cosmic-ray interaction is limited at high altitudes by the lack of aerosol precursor vapors such as H\({}_{2}\)SO\({}_{4}\) relative to the ion concentration. For the second objection, the temporal profile of the GCR-cloud correlation may be inconsistent with the profiles of the dominant volcanic and El Nino/Southern Oscillation (ENSO) events during the same time period (Kirkby and Laaksonen 2000), although no quantitative study of the various temporal signatures in the data has been undertaken. Finally, the ISCCP artifacts pointed out by Norris (2000) are troubling, but it is not clear that they are of sufficient magnitude to produce the observed GCR-cloud correlation, and they do not explain why the correlation exists only for low clouds and not for the other cloud types in the ISCCP database. The linkage between cosmic rays and cloud formation has recently been investigated by a number of authors (Yu, 2002; Yu and Turco, 2001; Tinsley, 2000 and references therein). Here we apply a perturbative approach to quantify the effects of variations in the cosmic-ray rate on the optical thicknesses, or opacities, of clouds, and use the observed cloud opacity variations to constrain the microphysical models of ion-mediated ultrafine particle formation. The paper is organized as follows. In the next section we discuss how cosmic rays could alter the optical thickness and emissivity of clouds by affecting the nucleation of condensation nuclei (CN). The search for variations in cloud optical properties using the ISCCP database and their correlation with cosmic-ray flux variations are discussed in Section 3. A discussion of the results is given in Section 4, and finally we summarize our results in Section 5.

## 2 Effects of GCRs on Cloud Properties

### Nucleation

Cosmic rays form water droplets in the supersaturated air of a classical cloud chamber (Wilson, 1901), and it seems plausible that they could also play a significant role in natural cloud formation. Yu and Turco (2000, 2001) and Yu (2002) have investigated the formation of ultrafine CN from charged molecular clusters formed from cosmic-ray ionization, and they find that the charged clusters grow more rapidly and are more stable than their neutral counterparts up to a size of \(\sim 10\) nm. Although the subsequent growth of the cosmic-ray formed ultrafine CN to viable \(\sim 100\) nm cloud condensation nuclei (CCN) has not been explored, the concentration of CCN should also reflect the CN concentration, as well as the direct influence of cosmic rays, if the cosmic-ray ionization rate does not affect other important nucleation efficiency parameters such as condensible vapor concentration, temperature, and pressure.
We will make this assumption here although it may not be strictly true with respect to the condensible vapor concentration (see e.g. Turco, Yu, and Zhao 2000; Yu 2002). Although the formation of CCN and ultimately cloud droplets is a function of many variable factors such as temperature, pressure, vapor concentration, and relative humidity, we can quantify the effects of small variations in the ionization rate (primarily due to cosmic rays above ocean and at altitudes \(>1\) km above land; e.g. Reiter 1992) on the number of CCN through a perturbation approach, i.e. \[N_{\rm CCN}(q+\Delta q,V)\approx N_{\rm CCN}(q,V)+\Delta q\left.\frac{\partial N _{\rm CCN}}{\partial q}\right|_{V}, \tag{1}\] where \(N_{\rm CCN}\) is the concentration of CCN, \(q\) is the ionization rate, \(V\) refers to the set of parameters other than the ionization rate affecting \(N_{\rm CCN}\), and the partial derivative is evaluated for fixed \(V\) (hereafter this will not be written explicitly). Along with the assumption discussed previously, this approach assumes that the quantity \(\Delta q|\partial N_{\rm CCN}/\partial q|<<N_{\rm CCN}(q,V)\), which is probably true for solar cycle variations, where \(q\) typically varies by \(<30\%\), but may not be true during periods of large-scale changes in the geomagnetic field (e.g. Tric et al. 1992). To quantify the effect of varying CCN concentrations on cloud optical thicknesses, we envision the two idealized scenarios depicted in Figure 1. In both cloud formation scenarios, changes in the ionizing cosmic-ray flux cause changes in the number of cloud condensation nuclei through the effect of ion-mediated nucleation on the formation of ultrafine CN, in accordance with the assumptions mentioned above1. In the first case we assume that the nucleation of cloud droplets is limited by the available amount of water in the supersaturated air, so that the liquid water content (LWC), or density of water in droplets, is constant. Therefore the amount of water per droplet and hence the effective radii of cloud droplets will change with the cosmic-ray ionization rate. This is analogous to the "Twomey Effect" of enhanced aerosol pollution on droplet size distributions and the albedo of clouds (Twomey 1977; Rosenfeld 2000), and would primarily occur in environments where the amount of water in the air (and not the number of CCN) is the limiting factor. Thus, using (1), we would expect that the effective radius \(R_{eff}\) of the cloud droplet distribution resulting from a small change in the cosmic-ray ionization rate \(\Delta q\) in any particular volume of air will be Footnote 1: We have assumed \(\partial N_{\rm CCN}/\partial q>0\) in Figure 1, which need not be valid for all \(q\). \[R_{eff}=\left[\frac{N_{\rm CCN}(q,V)}{N_{\rm CCN}(q+\Delta q,V)}\right]^{1/3}R_ {eff}^{0}\approx\left(1+\frac{\Delta q}{N_{\rm CCN}}\frac{\partial N_{\rm CCN }}{\partial q}\right)^{-1/3}R_{eff}^{0}, \tag{2}\] where \(R_{eff}^{0}\) is the effective radius of the unperturbed droplet distribution, which we will associate with the solar maximum period of the solar cycle. In the second case in Figure 1, we assume that the change in CCN concentration resulting from the change in cosmic-ray ionization causes a proportionate change in the amount of water extracted from the supersaturated air, with the effective radius of the cloud droplet distribution remaining constant.
This is the case where the formation of the cloud is limited by the local availability of CCN and not condensible water. This effect has been seen in the marine boundary layer in ship track clouds (Conover 1966), which have higher reflectivities (Coakley, Bernstein, and Durkee 1987) and liquid water contents (Radke, Coakley, and King 1989) due to the formation of additional ultrafine CN from ship exhaust. The perturbed liquid water content of a cloud in any particular volume of air will then be given by \[{\rm LWC}\approx\left(1+\frac{\Delta q}{N_{\rm CCN}}\frac{\partial N_{\rm CCN}}{\partial q}\right){\rm LWC}_{0}, \tag{3}\] where \({\rm LWC}_{0}\) is the unperturbed cloud liquid water content associated with solar maximum as before. These two scenarios probably represent extremes of the direct cosmic-ray ionization effect on the clouds. As in the ship track clouds, the effect of the GCRs will probably be a combination of both LWC changes and \(R_{eff}\) changes, with the magnitude of the effect being bounded by the changes given in (2) and (3).

### Radiative Properties

Changes in the cloud liquid water content and droplet effective radius, associated with changes in the ionization rate due to cosmic rays, will result in changes in cloud opacities. The optical thickness \(\tau\) of a uniform cloud layer of thickness \(\Delta z\) is given by (van de Hulst 1981): \[\tau=\Delta z\int_{0}^{\infty}Q_{ext}\,n(r)\pi r^{2}dr, \tag{4}\] where \(n(r)dr\) is the concentration of cloud droplets with radii between \(r\) and \(r+dr\), \(Q_{ext}\) is the Mie extinction efficiency, and it is commonly assumed that \[\frac{\int_{0}^{\infty}Q_{ext}n(r)r^{2}dr}{\int_{0}^{\infty}n(r)r^{2}dr}=2, \tag{5}\] which is a good approximation when \(2\pi r/\lambda\gg 1\), where \(\lambda\) is the wavelength (Stephens 1984). The effective radius of the cloud droplet distribution is given by \[R_{eff}=\frac{\int_{0}^{\infty}n(r)r^{3}dr}{\int_{0}^{\infty}n(r)r^{2}dr}, \tag{6}\] and the cloud liquid water content is given by \[{\rm LWC}=\frac{4}{3}\pi\rho\int_{0}^{\infty}n(r)\,r^{3}dr, \tag{7}\] where \(\rho\) is the density of liquid water. Combining these equations, we see that \[\tau\approx\frac{3}{2}\frac{{\rm LWC}\,\Delta z}{\rho\,R_{eff}}. \tag{8}\] Thus from (8) we would expect that an increase (decrease) in the mean \(R_{eff}\) and a decrease (increase) in the mean LWC, resulting from ionization variations due to cosmic rays, would result in a decrease (increase) of the mean opacity of clouds. The change in cloud opacity with cosmic-ray rate can be quantified using the perturbation assumptions discussed in Section 2.1 and equations (2), (3), and (8). The fractional change in cloud opacity is then given by \[\frac{\delta\tau}{\tau}\sim\frac{\Delta q}{fN_{\rm CCN}}\frac{\partial N_{\rm CCN }}{\partial q}, \tag{9}\] where \(f=1\) (\(f=3\)) for CCN-limited (water-limited) cloud formation, and the fractional change in the perturbed opacity \(\tau\) (relative to the unperturbed opacity \(\tau_{0}\)) is defined by \(\delta\tau/\tau=(\tau-\tau_{0})/\tau_{0}\). As mentioned previously, this derivation assumes that the right hand side of (9) is much less than one, which may not be the case for large changes in \(q\) and \(N_{\rm CCN}\). As before we will assume that the unperturbed (perturbed) values of \(q\) and \(N_{\rm CCN}\) refer to the values at solar maximum (minimum).
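To make Eqs. (8) and (9) concrete, a minimal Python sketch follows; all numerical values in the example call are illustrative, not drawn from the ISCCP analysis.

```python
def tau(lwc, dz, r_eff, rho_w=1.0e6):
    """Eq. (8): optical thickness from liquid water content lwc [g m^-3],
    geometric thickness dz [m] and droplet effective radius r_eff [m];
    rho_w = 1e6 g m^-3 is the density of liquid water."""
    return 1.5 * lwc * dz / (rho_w * r_eff)

def dtau_over_tau(ccn_perturbation, water_limited=False):
    """Eq. (9): fractional opacity change, where ccn_perturbation is the
    relative CCN change (Delta q / N_CCN) * dN_CCN/dq.  f = 3 when cloud
    formation is water limited (R_eff responds, Eq. 2) and f = 1 when it
    is CCN limited (LWC responds, Eq. 3)."""
    return ccn_perturbation / (3.0 if water_limited else 1.0)

# illustrative low cloud: LWC = 0.1 g m^-3, 500 m thick, 10 micron droplets
print(tau(0.1, 500.0, 10e-6))                            # tau ~ 7.5
# a 10% CCN change: 10% opacity change (CCN limited) vs ~3.3% (water limited)
print(dtau_over_tau(0.10), dtau_over_tau(0.10, water_limited=True))
```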
At visible wavelengths from space, the primary consequence of the change in cloud opacity associated with cosmic rays will be an increase in cloud reflectivity, or albedo. To investigate this, we use the radiative transfer code SBDART (Ricchiazzi, Yang, and Gautier 1998) to calculate the top of the atmosphere broadband (\(0.25-4.00\)\(\mu\)m) upward flux for three uniform low cloud models: 1) a 1 km thick cloud layer extending to a height of 2 km, 2) a 2 km thick cloud extending to a height of 3 km, and 3) a 0.5 km cloud layer extending to 1.5 km. These simulations were done with a tropical atmosphere profile (McClatchey et al. 1972) and an ocean surface albedo. The fractional increases in albedo, resulting from a 10% increase in the number of cloud droplets due to cosmic-ray ionization variations, are shown in Figure 2 for the 1 km thick cloud case, for a wide range of LWC and \(R_{eff}\) in the variable LWC case (top panel) and the variable \(R_{eff}\) case (bottom panel). In both cases the contours of changing albedo approximately parallel the change in optical thickness calculated assuming \(Q_{ext}=2.0\). Figure 3 shows the fractional change in albedo directly as a function of opacity for all three cloud models. This figure clearly shows that the change in albedo is largest for clouds with opacities \(\tau\) between 1 and 10, but is roughly independent of cloud geometrical thickness. Figures 2 and 3 indicate that the change in cloud optical thickness can be used to quantify the effects of the cosmic rays on cloud optical properties. Although the fractional change in albedo due to the cosmic rays is only \(\sim 2-5\%\) for a 10% variation in the number of cloud droplets, this can produce a significant forcing per cloud of \(\sim 7-16\) W m\({}^{-2}\) at the top of the atmosphere for a solar zenith angle of \(40^{\circ}\). The modulation of cloud opacity due to cosmic rays could therefore produce a similar modulation of the Earth's energy budget over the 11 year solar cycle, although the exact amount of forcing due to cosmic rays will depend sensitively on cloud amount variations, cloud opacity variations, and the efficiency with which changes in the cosmic-ray rate are reflected in the number of cloud condensation nuclei. Because of the relationship between cloud opacity and emissivity, the cosmic rays should also produce an observable effect on cloud emission at infrared (IR) wavelengths. The effective IR emissivity \(\epsilon\) can be parameterized by a relation of the form (Stephens 1978): \[\epsilon=1-\exp(-a_{0}{\rm LWC}\,\Delta{\rm z}), \tag{10}\] where \(a_{0}\) is the mass absorption coefficient. Empirical fits to IR emission from water clouds yield \(a_{0}=0.130\) (Stephens 1978). The exponent in (10) is proportional to the cloud optical thickness for a given droplet effective radius, so the infrared emissivity increases with cloud opacity, with the change being most noticeable for optically thin clouds. Therefore one would expect a change in IR emission, along with the primary effect of changes in visible albedo, from clouds at solar minimum relative to clouds at solar maximum if the cosmic rays change the cloud liquid water contents. Interestingly, a correlation between cosmic-ray rate and cloud top temperature for low clouds has been reported by Marsh and Svensmark (2000), supporting this hypothesis.
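A short sketch of Eq. (10) illustrates why the emissivity response is largest for thin clouds. Treating \({\rm LWC}\,\Delta z\) as a liquid water path in g m\({}^{-2}\) with \(a_{0}\) in m\({}^{2}\) g\({}^{-1}\) is an assumption made here for illustration, as are the example path values.

```python
import numpy as np

def emissivity(lwp, a0=0.130):
    """Eq. (10): effective IR emissivity, with LWC * dz written as the
    liquid water path lwp; a0 = 0.130 from Stephens (1978).  Units of
    lwp [g m^-2] and a0 [m^2 g^-1] are assumed for illustration."""
    return 1.0 - np.exp(-a0 * lwp)

for lwp in (2.0, 50.0):                       # optically thin vs. thick cloud
    # a 10% LWC perturbation changes the thin cloud's emissivity noticeably,
    # while the thick cloud is already saturated near epsilon = 1
    print(emissivity(lwp), emissivity(1.1 * lwp))
```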
## 3 Cloud Opacity Variations

### ISCCP Data

To search for systematic temporal changes in synoptic scale cloud optical properties, we used the International Cloud Climatology Project (ISCCP) monthly gridded cloud products ("D2") datasets, a compilation of cloud properties derived from satellite observations during the period 1983-1999 (Rossow and Schiffer 1999). The ISCCP D2 data used here consist of mean daytime cloud amount fractions and visible optical depths, as a function of time, for 6596 "boxes" with equal area covering the entire surface of the Earth. For a given time, the cloud amount fraction in each box is defined as the number of cloudy satellite image pixels, as determined by a cloud detection algorithm, divided by the total number of pixels in the box. The cloud optical thicknesses are derived from the visible satellite cloud albedos by using a radiative transfer model and assuming spherical droplets with droplet sizes characterized by a gamma distribution with variance 0.15 and \(R_{eff}=10\)\(\mu\)m. ISCCP cloud top temperatures are simultaneously determined from the 3.7\(\mu\)m IR radiances, allowing for determination of cloud altitude and pressure, and the low, mid-level, and high clouds are defined as having cloud top pressures \(P>680\) mb, \(440<P<680\) mb, and \(P<440\) mb, respectively. Because we require the simultaneous visible and infrared radiances to determine the opacity and cloud height for our analysis, we only use the ISCCP daytime data. This is a different dataset from the diurnal 1983-1993 IR data used for the cloud amount analyses of Marsh and Svensmark (2000) and Palle Bago and Butler (2000). Detailed information on the distribution of cloud optical thicknesses is not preserved in the ISCCP D2 database, and instead the mean optical thickness \(\bar{\tau}_{i}\) is recorded for three broad opacity bands \(i\): \(0.0-3.6\), \(3.6-23.0\), and \(23.0-379.0\). Thus a detailed analysis of the change in \(\tau\) over the solar cycle is not possible using the D2 data, but a value of the _weighted_ mean cloud optical thickness \(\bar{\tau}\) can be calculated using \[\bar{\tau}=\frac{\sum_{i=1}^{3}\bar{A}_{i}\bar{\tau}_{i}}{\sum_{i=1}^{3}\bar{A }_{i}}, \tag{11}\] where the \(\bar{A}_{i}\) are the total mean cloud amount fractions within each of the broad ISCCP optical thickness bins mentioned above. We calculated \(\bar{\tau}\) separately for the three cloud altitude levels and for two latitude bands with \(|\phi|\leq 40.0^{\circ}\) (low latitude) and \(|\phi|>40^{\circ}\) (high latitude). The error associated with each \(\bar{\tau}_{i}\) was estimated by calculating the standard deviation of each ISCCP data point, from the scatter about the mean, and scaling by the square root of the number of data points. The mean optical thicknesses \(\bar{\tau}\) as a function of time for the low latitude clouds are shown in Figure 4, and the corresponding result for global high latitude clouds is shown in Figure 5. Shaded is the two year period in which the effects of the Mt. Pinatubo eruption appear to be most significant. Also shown for comparison are the mean counting rates from the Climax, Colorado neutron monitor run by the University of Chicago (obtained from [http://ulysses.uchicago.edu/NeutronMonitor/neutron_mon.html](http://ulysses.uchicago.edu/NeutronMonitor/neutron_mon.html)), which is a good measure of the local cosmic-ray ionization rate.
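As a small illustration of Eq. (11), the weighted mean below combines hypothetical bin amounts and opacities; the real \(\bar{A}_{i}\) and \(\bar{\tau}_{i}\) come from the D2 files.

```python
import numpy as np

def weighted_mean_tau(A_bins, tau_bins):
    """Eq. (11): cloud-amount-weighted mean optical thickness over the
    three broad ISCCP D2 opacity bins."""
    A, t = np.asarray(A_bins), np.asarray(tau_bins)
    return np.sum(A * t) / np.sum(A)

# hypothetical bin amounts and mean opacities for the three D2 bins
print(weighted_mean_tau([0.10, 0.15, 0.05], [1.8, 9.5, 40.0]))
```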
In the low latitude case, the abrupt and large decrease in \\(\\bar{\\tau}\\) during 1991-1993 is due to the eruption of Mt. Pinatubo, and the subsequent plot scaling obscures smaller scale opacity variations. For comparison, we also plot the total mean cloud amount fractions \\(A=\\sum_{i=1}^{3}\\bar{A}_{i}\\) for the same two latitude bands in Figures 6 and 7. These plots show evidence for increases in mean cloud amount due to Mt. Pinatubo, as well as the smaller-scale temporal variations. ### Extracting the Cloud Variations Due to Cosmic Rays To search for subtle variations in the ISCCP cloud opacities and amounts due to cosmic rays only, it is first necessary to eliminate the opacity variations in the data due to the Mt. Pinatubo volcanic eruption of June-September 1991 and strong ENSO events during the period of the ISCCP data. To separate out the various temporal signatures in the ISCCP data, we use a linear temporal model of the form \\[F(t)=\\sum_{k=0}^{3}b_{k}X_{k}(t), \\tag{12}\\] where \\(F(t)\\) is the mean ISCCP quantity of interest for the year \\(t\\), which for our purpose is either the visible cloud opacity \\(\\bar{\\tau}\\) or the mean cloud amount/fractional area \\(A\\). The model consists of four temporal basis vectors \\(X_{k}\\), which are functions of time, each scaled by a linear coefficient \\(b_{k}\\). For our temporal model we choose basis vectors corresponding to a constant level of the given quantity (\\(k=0\\)) and to variations due to ENSO events (e.g. Kuang, Jiang, and Yung 1998), the Mt. Pinatubo eruption of 1991, and cosmic rays (\\(k=\\) 1-3, respectively). Given the functional form of the basis vectors, the best-fit values of the linear coefficients can be determined through least squares minimization, and the fractional change in the time-varying ISCCP quantity over the data stretch is then given by \\(\\delta F/F=b_{k}/b_{0}\\), where \\(k=\\) 1-3 (a small numerical sketch of this procedure is given below). This model assumes a linear correlation between the quantity of interest and the basis vectors and assumes no time delays; more complicated models are possible but will not be considered here. The normalized basis vectors used in the temporal analysis of the ISCCP cloud data are shown in Figure 8. All of the vectors are scaled to values between zero and one. For the ENSO term \\(X_{1}\\) we use the scaled yearly-averaged Southern Oscillation Index (SOI) from the Australian Bureau of Meteorology (obtained from [http://www.bom.gov.au/climate/current/soihtm1.shtml](http://www.bom.gov.au/climate/current/soihtm1.shtml)). The SOI is a measure of the size of fluctuations in the sea level pressure difference between Tahiti and Darwin, Australia; small values of the scaled SOI denote El Nino conditions and large values La Nina conditions, both of which affect global weather (Rasmusson and Carpenter 1982). To parameterize the effect of the Mt. Pinatubo eruption of 1991, we adopt a simple step function for \\(X_{2}\\), with identical non-zero intensities only for the years 1991 and 1992. For the final term in the temporal model, \\(X_{3}\\), we use the scaled cosmic-ray rate from the Climax neutron monitor. Neutron monitor rates are directly proportional to the ionization rates due to cosmic rays because the neutrons are produced by the same cosmic ray cascade particles that produce the ionization, and the neutrons subsequently diffuse through less than 100 m of air before they are thermalized and captured by \\({}^{14}\\)N to form \\({}^{14}\\)C (e.g. Lingenfelter, 1963).
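Returning to the temporal model (12), its mechanics reduce to ordinary linear least squares. The sketch below uses synthetic stand-ins for the yearly series and for the SOI, Pinatubo, and Climax basis vectors (only the model form and the normalization to values between zero and one are taken from the text) and extracts \\(\\delta F/F=b_{k}/b_{0}\\).

```python
import numpy as np

rng = np.random.default_rng(1)
n_years = 17  # 1983-1999

# Temporal model (12): F(t) = sum_k b_k X_k(t), normalized basis vectors
X = np.column_stack([
    np.ones(n_years),                            # X0: constant level
    rng.uniform(0.0, 1.0, n_years),              # X1: ENSO (SOI stand-in)
    np.r_[np.zeros(8), 1.0, 1.0, np.zeros(7)],   # X2: Pinatubo step, 1991-92
    0.5 + 0.5 * np.cos(2 * np.pi * np.arange(n_years) / 11.0),  # X3: ~11 yr GCR
])

b_true = np.array([10.0, 0.3, 1.5, -0.7])        # synthetic coefficients
F = X @ b_true + rng.normal(0.0, 0.2, n_years)   # synthetic ISCCP-like series

b, *_ = np.linalg.lstsq(X, F, rcond=None)        # least-squares fit
for k, name in ((1, "ENSO"), (2, "Pinatubo"), (3, "cosmic rays")):
    print(f"{name:>11}: delta F / F = {b[k] / b[0]:+.3f}")
```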
Because they count neutrons rather than collecting charge, neutron monitors are insensitive to the background ionization due to terrestrial radiation from radioactive decays, which dominates the ionization signal from Galactic cosmic rays only below \\(\\sim 1\\) km in the atmosphere (Reiter 1992). The results of the temporal fitting of both the ISCCP visible cloud opacities and amounts are shown in Table 1. Formally, most of the fits are not good, with reduced chi-squares ranging from \\(\\sim 0.7-7.8\\) for twelve degrees of freedom. There are a number of possible factors that could be contributing to this. For example, the error bars on the data may have been underestimated, leading to artificially large values of chi-squared. Another possibility is that our fitting model is missing other significant temporal drivers, or perhaps a non-linear model or different basis vectors may be required to fit the data. We tried to fit the ISCCP data with linear models composed of different combinations of our four basis vectors, and models including the cosmic-ray term provided a better fit to the data in general. Nevertheless it is possible that un-modeled phenomena mimic the temporal signature of cosmic rays in the data; more robust calculations of ISCCP error bars, inclusion of more ISCCP data, and exploration of more complicated temporal models in future work will help resolve this issue. The fractional variation in visible opacity \\(\\delta\\tau/\\tau\\) associated with the cosmic rays ranges from \\(\\sim+10\\%\\) for high clouds to \\(-7\\%\\) for low clouds. For the mean visible cloud amounts the variation due to cosmic rays is just the opposite - becoming greater in magnitude as the cloud height _decreases_ - qualitatively consistent with the positive correlation seen in the ISCCP IR data between cosmic ray rate and low clouds (Svensmark and Friis-Christensen, 1997; Marsh and Svensmark, 2000; Palle Bago and Butler, 2000). Therefore the high clouds appear to become thicker but smaller in response to increasing cosmic ray flux, while for the low clouds the response is the opposite. ## 4 Discussion The observed variation of cloud optical thicknesses with cosmic-ray rate can be used to constrain microphysical models of the cloud condensation nuclei concentration \\(N_{\\rm CCN}\\) using (9). Of crucial importance is the partial derivative \\(\\partial N_{\\rm CCN}/\\partial q\\), which determines the sign of the change in opacity with cosmic-ray rate. Recently Yu (2002) calculated \\(N_{\\rm CCN}\\) as a function of altitude and ionization rate using an ion-mediated nucleation code. Given this model and the vertical profiles of sulfuric acid vapor concentration, ionization rate, temperature, relative humidity, pressure, and surface area of pre-existing particles assumed therein (Yu, 2002), \\(\\partial N_{\\rm CCN}/\\partial q\\) changes sign at \\(q_{peak}=12\\), 8, and 4 ion pairs cm\\({}^{-3}\\) for low, mid-level, and high clouds, respectively, such that \\(\\partial N_{\\rm CCN}/\\partial q>0\\) for \\(q<q_{peak}\\) and \\(\\partial N_{\\rm CCN}/\\partial q<0\\) for \\(q>q_{peak}\\). Using the cosmic-ray ionization rates found by Neher (1961, 1967) interpolated to geomagnetic latitude 40\\({}^{\\circ}\\), we find typical ionization rates of, respectively, \\(q\\sim\\) 3, 8, and 23 ion pairs cm\\({}^{-3}\\) for the low, mid-level, and high ISCCP clouds. Therefore from (9) we would expect a positive or zero correlation between opacity and cosmic-ray rate only for low clouds, and negative correlations for higher clouds in this model; the short sketch below simply tabulates these predicted signs.
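The following few lines encode the quoted \\(q_{peak}\\) values from Yu (2002) and the typical Neher ionization rates given above, and print the predicted sign of the opacity-cosmic-ray correlation for each cloud level.

```python
# Sign of the predicted opacity response following Eq. (9): the correlation
# tracks the sign of dN_CCN/dq, positive for q < q_peak, negative for q > q_peak.
q_peak = {"low": 12, "mid": 8, "high": 4}  # ion pairs cm^-3 (Yu 2002 profiles)
q_typ = {"low": 3, "mid": 8, "high": 23}   # Neher rates near 40 deg latitude

for level in ("low", "mid", "high"):
    if q_typ[level] < q_peak[level]:
        sign = "positive"
    elif q_typ[level] == q_peak[level]:
        sign = "approximately zero"
    else:
        sign = "negative"
    print(f"{level:>4} clouds: predicted opacity-GCR correlation is {sign}")
```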
We observe just the opposite of this prediction, but the precision of the temporal model fits to the ISCCP data is not sufficient for us to rule out the Yu (2002) model on the basis of the data. All three of the time-varying parameters in our temporal model show inverse correlations between mean visible cloud opacity and amount, suggesting a common origin for this behavior. These inverse correlations are illustrated in Figure 9. They are probably not artifacts of the averaging process, because the quantities in Figure 9 have been normalized by their constant model terms in their temporal fits. One possible explanation for the inverse opacity-amount correlation is a feedback mechanism. For the case of positive opacity variations, an increase in mean cloud opacity and albedo would result in increased energy loss to space and eventually less surface heating and subsequent water evaporation. Hence clouds would tend to be smaller in area than they would otherwise be. Conversely, for negative opacity variations clouds would tend to be larger. Global climate simulations (Chen and Ramaswamy 1996) indicate that global cloud albedo-increasing perturbations - similar to the changes induced by cosmic rays - decrease the global transport of moisture from the tropics, which could then conceivably produce fewer or smaller global clouds on average by this mechanism. Dynamical simulations of the response of global cloudiness to synoptic changes in the opacity are needed to investigate this. In summary, we have considered a model in which Galactic cosmic rays alter the optical properties of clouds by changing the number of available cloud condensation nuclei. The main observational consequence of our model is a change in mean cloud opacity, with a secondary effect being a change in infrared emittance for optically thin clouds due to the relationship between cloud emissivity and opacity. We use the global ISCCP cloud database to search for variations in cloud properties due to cosmic rays, and after subtracting the background signals in the data due to Mt. Pinatubo and ENSOs, we find systematic variations in both opacity and cloud amount associated with changes in the cosmic-ray rate. The fractional variation in opacity attains a maximum positive value for high clouds and decreases with decreasing height, becoming negative or zero for low clouds. The fractional variation of the cloud amounts with cosmic-ray rate, however, shows the opposite trend - increasing from a negative correlation at high altitudes to a positive correlation at low altitudes - which is consistent with the positive correlation between global low clouds and cosmic-ray rate seen in the infrared (Svensmark and Friis-Christensen, 1997; Marsh and Svensmark, 2000; Palle Bago and Butler, 2000). Clearly more work is needed to model the opacity and cloud amount variations seen in the ISCCP data. Using our simple temporal model and perturbative approach, we have outlined a framework on which the variations in the data due to cosmic rays can be isolated and compared to model predictions. As the time span of the ISCCP data increases in length, more complicated models with additional components and nonlinear dependencies can be used, and the analysis can then be more robust. The ISCCP data require the collating and normalizing of many disparate satellite datasets (Rossow and Schiffer, 1999), and although this approach is necessary at the present time it is not ideal.
One complement to the ISCCP global cloud data would be provided by the NASA deep space mission _Triana_, which would be able to retrieve cloud optical thicknesses simultaneously over the entire sunlit Earth from the L1 Lagrangian point between the Earth and the Sun. Continuous deep space observing of Earth's clouds would be ideal for detecting not only the solar cycle variations seen here but also the shorter duration but possibly more frequent variations in global cloud cover associated with Forbush decreases of Galactic cosmic rays and high energy solar proton events from the Sun. We thank the AVANTI article service of the Scripps Institution of Oceanography Library, and acknowledge the use of cosmic-ray data from the University of Chicago (National Science Foundation grant ATM-9912341) and Southern Oscillation Index data from the Australian Bureau of Meteorology. We also would like to thank the anonymous referees for very helpful comments.

## References

* Jorgensen, T. S., and A. W. Hansen 2000: Comment on "Variation of cosmic-ray flux and global cloud coverage -- a missing link in solar-climate relationship" by Henrik Svensmark and Eigil Friis-Christensen [Journal of Atmospheric and Solar-Terrestrial Physics 59 (1997) 1225-1232]. _J. Atmos. Terrest. Phys.,_ **62,** 73-77.
* Kernthaler, S. C., R. Toumi, and J. D. Haigh 1999: Some doubts concerning a link between cosmic-ray fluxes and global cloudiness. _Geophys. Res. Lett.,_ **26,** 863-865.
* Kiehl, J. T., and B. P. Briegleb 1993: The relative roles of sulfate aerosols and greenhouse gases in climate forcing. _Science,_ **260,** 311-314.
* Kirkby, J., and A. Laaksonen 2000: Solar variability and clouds. _Space Sci. Rev.,_ **94,** 397-409.
* Kuang, Z., Jiang, Y., and Y. K. Yung 1998: Cloud optical thickness variations during 1983-1991: Solar cycle or ENSO? _Geophys. Res. Lett.,_ **25,** 1415-1417.
* Labitzke, K. 1987: Sunspots, the QBO, and the stratospheric temperature in the north polar region. _Geophys. Res. Lett.,_ **14,** 535-537.
* Lean, J. 1997: The Sun's variable radiation and its relevance for Earth. _Ann. Rev. Astron. Astrophys.,_ **35,** 33-67.
* Lingenfelter, R. E. 1963: Production of carbon 14 by cosmic-ray neutrons. _Rev. of Geophys.,_ **1,** 35-55.
* Marsh, N. D., and H. Svensmark 2000: Low cloud properties influenced by cosmic rays. _Phys. Rev. Lett.,_ **85,** 5004-5007.
* McClatchey, R. A., R. W. Fenn, J. E. A. Selby, F. E. Volz, and J. S. Garing 1972: Optical properties of the atmosphere. _Tech. Rep. AFCRL-72-0497,_ Air Force Cambridge Research Laboratories.
* Neher, H. V. 1961: Cosmic-ray knee in 1958. _J. Geophys. Res.,_ **66,** 4007-4012.
* ------ 1967: Cosmic-ray particles that changed from 1954 to 1958 to 1965. _J. Geophys. Res.,_ **72,** 1527-1539.
* Norris, J. R. 2000: What can cloud observations tell us about climate variability? _Space Sci. Rev.,_ **94,** 375-380.
* Palle Bago, E., and C. J. Butler 2000: The influence of cosmic rays on terrestrial clouds and global warming. _Astr. Geophys.,_ **41,** 4.18-4.22.
* Pudovkin, M. I., and S. V. Veretenenko 1995: Cloudiness decreases associated with Forbush-decreases of Galactic cosmic rays. _J. Atmos. Sol.-Terr. Phys.,_ **57,** 1349-1355.
* Radke, L. F., Coakley, J. A. Jr., and M. D. King 1989: Direct and remote sensing observations of the effects of ships on clouds. _Science,_ **246,** 1146-1149.
* Rasmusson, E. M., and T. M.
Carpenter 1982: Variations in tropical sea surface temperature and surface wind fields associated with the Southern Oscillation/El Nino. _Mon. Wea. Rev.,_ **110,** 354-384.
* Reid, G. C. 1987: Influence of solar variability on global sea surface temperatures. _Nature,_ **329,** 142-143.
* Reiter, R. 1992: _Phenomena in Atmospheric and Environmental Electricity._ Elsevier, 541 pp.
* Ricchiazzi, P., S. Yang, and C. Gautier 1998: SBDART: a research and teaching software tool for plane-parallel radiative transfer in the Earth's atmosphere. _Bull. Am. Met. Soc.,_ **79,** 2101-2114.
* Rosenfeld, D. 2000: Suppression of rain and snow by urban and industrial air pollution. _Science,_ **287,** 1793-1796.
* Rossow, W. B., and R. A. Schiffer 1999: Advances in understanding clouds from ISCCP. _Bull. Am. Met. Soc.,_ **80,** 2261-2287.
* Stephens, G. L. 1978: Radiation profiles in extended water clouds II: parameterization schemes. _J. Atmos. Sci.,_ **35,** 2123-2132.
* ------ 1984: The parameterization of radiation for numerical weather prediction and climate models. _Mon. Wea. Rev.,_ **112,** 826-867.
* Svensmark, H., and E. Friis-Christensen 1997: Variation of cosmic ray flux and global cloud coverage -- a missing link in solar-climate relationships. _J. Atmos. Sol.-Terr. Phys.,_ **59,** 1225-1232.
* Tinsley, B. A. 2000: Influence of solar wind on the global electric circuit, and inferred effects on cloud microphysics, temperature, and dynamics in the troposphere. _Space Sci. Rev.,_ **94,** 231-258.
* Tric, E., et al. 1992: Paleointensity of the geomagnetic field during the last 80,000 years. _J. Geophys. Res.,_ **97,** 9337-9351.
* Turco, R. P., Yu, F., and J.-X. Zhao 2000: Tropospheric sulfate aerosol formation via ion-ion recombination. _J. Air & Waste Manage. Assoc.,_ **50,** 902-907.
* Twomey, S. 1977: The influence of pollution on the shortwave albedo of clouds. _J. Atmos. Sci.,_ **34,** 1149-1152.
* van den Hulst, H. C. 1981: _Light Scattering by Small Particles._ Dover Publications, 470 pp.
* Wigley, T. M. L., and S. C. B. Raper 1992: Implications for climate and sea level of revised IPCC emission scenarios. _Nature,_ **357,** 293-300.
* Wilson, C. T. R. 1901: On the ionization of atmospheric air. _Proc. Roy. Soc. Lon.,_ **68,** 151-161.
* Yu, F. 2002: Altitude variations of cosmic-ray induced production of aerosols: Implications for global cloudiness and climate. _J. Geophys. Res.,_ in press.
* Yu, F., and R. P. Turco 2000: Ultrafine aerosol formation via ion-mediated nucleation. _Geophys. Res. Lett.,_ **27,** 883-886.
* ------, and ------ 2001: From molecular clusters to nanoparticles: role of ambient ionization in tropospheric aerosol formation. _J. Geophys. Res.,_ **106,** 4797-4814.
* Zhou, K., and C. J. Butler 1998: A statistical study of the relationship between the solar cycle length and tree-ring index values. _J. Atmos. Sol.-Terr. Phys.,_ **60,** 1711-1718.

Figure 1: Cartoon illustrating two limiting scenarios for the effect of the Galactic cosmic rays (GCRs) on cloud optical properties, assuming that the varying ionizing cosmic-ray flux causes changes in the number of cloud condensation nuclei (CCN) through ion-mediated nucleation.
In the first case we assume that the nucleation of cloud droplets is limited by the available amount of water in the supersaturated air. Therefore, as illustrated in the top panel, if an increase in the GCR ionization flux results in more cloud condensation nuclei (CCN) but no additional water condensation, the amount of water per droplet will be less and the effective radius \\(R_{eff}\\) of the droplet distribution will be smaller. Alternatively, as illustrated in the bottom panel, if the formation of cloud droplets is limited by the local availability of CCN and not by condensible water, \\(R_{eff}\\) can remain unchanged and additional CCN resulting from changes in cosmic-ray ionization would cause an increase in the amount of water extracted from the supersaturated air, so the cloud liquid water content would increase. The opposite trends hold for cases where the number of CCN is decreased by variations in the cosmic-ray flux.

Figure 2: The fractional change in the albedo of a 1 km thick cloud expected from a 10% increase in the number of cloud droplets due to changes in the cosmic-ray flux, shown for the case of variable cloud water content LWC (top) and for variable droplet radius \\(R_{eff}\\) (bottom). The solid contours denote the change in albedo, and the dotted contours are for the optical thickness.

Figure 3: The fractional change in the albedo, expected from a 10% increase in the number of cloud droplets from variations in the cosmic-ray rate, plotted as a function of cloud optical thickness for three different cloud geometrical thicknesses. The open symbols denote changes in cloud LWC and the filled symbols changes in \\(R_{eff}\\).

Figure 4: The mean cloud 0.6 \\(\\mu\\)m optical thickness from the ISCCP database for all clouds in the low latitude band \\(|\\phi|<40^{\\circ}\\), with the cosmic-ray rate from the Climax neutron monitor. The high, mid-level, and low clouds refer to cloud top pressures of \\(P<440\\) mb, \\(440<P<680\\) mb, and \\(P>680\\) mb, respectively, and the shaded interval refers to cloud data affected significantly by the eruption of Mt. Pinatubo in June 1991. The \\(1\\sigma\\) error bars on the ISCCP data were calculated from the sample variance of the data.

Figure 5: Same as Figure 4, but for all high latitude clouds with \\(|\\phi|>40^{\\circ}\\).

Figure 6: The mean visible (0.6 \\(\\mu\\)m) cloud amount fractions from the ISCCP database for all clouds in the low latitude band \\(|\\phi|<40^{\\circ}\\), with the cosmic-ray rate from the Climax neutron monitor. The \\(1\\sigma\\) error bars on the ISCCP data were calculated from the sample variance of the data.

Figure 7: Same as Figure 6, but for all high latitude clouds with \\(|\\phi|>40^{\\circ}\\).

Figure 8: Basis vectors used in the temporal model of ISCCP visible opacity and amount fraction variations. The vectors \\(X_{0}\\), \\(X_{1}\\), \\(X_{2}\\), and \\(X_{3}\\) represent the constant level term and variations due to ENSO, the eruption of Mt. Pinatubo, and cosmic rays, respectively.

Figure 9: Fractional change in visible cloud amount versus fractional change in visible opacity, from the fit of the temporal model to the ISCCP data. The points have been normalized by their respective constant term values in the temporal model, and the error bars have been omitted. In all cases there is an inverse correlation between the two parameters.
The observed correlation between global low cloud amount and the flux of high energy cosmic rays supports the idea that ionization plays a crucial role in tropospheric cloud formation. We explore this idea quantitatively with a simple model linking the concentration of cloud condensation nuclei to the varying ionization rate due to cosmic rays. Among the predictions of the model is a variation in global cloud optical thickness, or opacity, with cosmic-ray rate. Using the International Satellite Cloud Climatology Project database (1983-1999), we search for variations in the yearly mean visible cloud opacity and visible cloud amount due to cosmic rays. After separating out temporal variations in the data due to the Mt. Pinatubo eruption and El Nino/Southern Oscillation, we identify systematic variations in opacity and cloud amount due to cosmic rays. We find that the fractional amplitude of the opacity variations due to cosmic rays increases with cloud altitude, becoming approximately zero or negative (inverse correlation) for low clouds. Conversely, the fractional changes in visible cloud amount due to cosmic rays are positively correlated only for low clouds and become negative or zero for the higher clouds. The opacity trends suggest behavior contrary to the current predictions of ion-mediated nucleation (IMN) models, but more accurate temporal modeling of the ISCCP data is needed before definitive conclusions can be drawn.
# Potts model on infinite graphs and the limit of chromatic polynomials

**Aldo Procacci\\({}^{*}\\)1, Benedetto Scoppola\\({}^{\\dagger}\\)2 and Victor Gerasimov\\({}^{*}\\)** Footnote 1: Partially supported by CNPq (Brazil) Footnote 2: Partially supported by CNR, G.N.F.M. (Italy) \\({}^{*}\\)Departamento de Matematica - Universidade Federal de Minas Gerais Av. Antonio Carlos, 6627 - Caixa Postal 702 - 30161-970 - Belo Horizonte - MG Brazil e-mails: [email protected] (Aldo Procacci); [email protected] (Victor Gerasimov) \\({}^{\\dagger}\\)Dipartimento di Matematica - Universita "La Sapienza" di Roma Piazzale A. Moro 2, 00185 Roma, Italy e-mail: [email protected]

# Introduction

The Potts model with \\(q\\) states (or \\(q\\) "colors") is a system of random variables (spins) \\(\\sigma_{x}\\) sitting in the vertices \\(x\\in\\mathbb{V}\\) of a locally finite graph \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) with vertex set \\(\\mathbb{V}\\) and edge set \\(\\mathbb{E}\\), and taking values in the set of integers \\(\\{1,2,\\ldots,q\\}\\). Usually the graph \\(\\mathbb{G}\\) is a regular lattice, such as \\(\\mathbb{Z}^{d}\\) with the set of edges \\(\\mathbb{E}\\) being the set of nearest neighbor pairs, but of course more general situations can be considered. A _configuration_ \\(\\sigma_{\\mathbb{V}}\\) of the system is a function \\(\\sigma_{\\mathbb{V}}:\\mathbb{V}\\to\\{1,2,\\ldots,q\\}\\) with \\(\\sigma_{x}\\) representing the value of the _spin_ at the site \\(x\\). We denote by \\(\\Gamma_{\\mathbb{V}}\\) the set of all spin configurations in \\(\\mathbb{V}\\). If \\(V\\subset\\mathbb{V}\\) we denote by \\(\\sigma_{V}\\) the restriction of \\(\\sigma_{\\mathbb{V}}\\) to \\(V\\) and by \\(\\Gamma_{V}\\) the set of all spin configurations in \\(V\\). Let \\(V\\subset\\mathbb{V}\\) and let \\(\\mathbb{G}|_{V}=(V,\\mathbb{E}|_{V})\\), where \\(\\mathbb{E}|_{V}=\\{\\{x,y\\}\\in\\mathbb{E}:x\\in V,y\\in V\\}\\). Then for \\(V\\subset\\mathbb{V}\\) _finite_, the _energy of the spin configuration \\(\\sigma_{V}\\) in \\(\\mathbb{G}|_{V}\\)_ is defined as \\[H_{\\mathbb{G}|_{V}}(\\sigma_{V})=-J\\sum_{\\{x,y\\}\\in\\mathbb{E}|_{V}}\\delta_{\\sigma_{x}\\sigma_{y}} \\tag{1.1}\\] where \\(\\delta_{\\sigma_{x}\\sigma_{y}}\\) is the Kronecker symbol, which is equal to one when \\(\\sigma_{x}=\\sigma_{y}\\) and zero otherwise. The _coupling_ \\(J\\) is called _ferromagnetic_ if \\(J>0\\) and _anti-ferromagnetic_ if \\(J<0\\). The _statistical mechanics_ of the system is obtained by introducing the _Boltzmann weight_ of a configuration \\(\\sigma_{V}\\), defined as \\(\\exp\\{-\\beta H_{\\mathbb{G}|_{V}}(\\sigma_{V})\\}\\), where \\(\\beta\\geq 0\\) is the inverse temperature. Then the _probability_ to find the system in the configuration \\(\\sigma_{V}\\) is given by \\[\\mathrm{Prob}(\\sigma_{V})=\\frac{e^{-\\beta H_{\\mathbb{G}|_{V}}(\\sigma_{V})}}{Z_{\\mathbb{G}|_{V}}(q,\\beta)} \\tag{1.2}\\] The normalization constant in the denominator is called the _partition function_ and is given by \\[Z_{\\mathbb{G}|_{V}}(q,\\beta)=\\sum_{\\sigma_{V}\\in\\Gamma_{V}}e^{-\\beta H_{\\mathbb{G}|_{V}}(\\sigma_{V})} \\tag{1.3}\\] The case \\(\\beta J=-\\infty\\) is the _anti-ferromagnetic_, _zero temperature_ Potts model with \\(q\\) states. In this case the only configurations with non-zero probability are those in which adjacent spins have different values (or colors), and \\(Z_{\\mathbb{G}|_{V}}(q)\\) becomes simply the number of all allowed configurations.
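Before moving on, the definitions above can be made concrete with a brute-force computation on a small graph. The enumeration below is exponential in \\(|V|\\) and is meant only as an illustration, not as part of the method of this paper: in the limit \\(\\beta J\\to-\\infty\\) the Boltzmann weight kills every configuration with a monochromatic edge, and \\(Z\\) reduces to the number of proper \\(q\\)-colorings.

```python
import math
from itertools import product

def potts_Z(n_vertices, edges, q, beta_J):
    """Partition function (1.3): sum over configurations of
    exp(beta*J * #{edges with equal spins}), since H = -J sum delta."""
    Z = 0.0
    for sigma in product(range(q), repeat=n_vertices):
        same = sum(sigma[x] == sigma[y] for x, y in edges)
        Z += math.exp(beta_J * same)
    return Z

def proper_colorings(n_vertices, edges, q):
    """Chromatic polynomial of the graph evaluated at q, by enumeration."""
    return sum(
        all(sigma[x] != sigma[y] for x, y in edges)
        for sigma in product(range(q), repeat=n_vertices)
    )

# Triangle: chromatic polynomial q(q-1)(q-2), hence 24 proper colorings at q = 4
edges = [(0, 1), (1, 2), (0, 2)]
print(potts_Z(3, edges, q=4, beta_J=-50.0))  # ~ 24.0 (beta*J -> -infinity)
print(proper_colorings(3, edges, q=4))       # 24
```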
The _thermodynamics_ of the system at inverse temperature \\(\\beta\\) and "volume" \\(V\\) is recovered through the _free energy per unit volume_ given by \\[f_{\\mathbb{G}|_{V}}(q,\\beta)=\\frac{1}{|V|}\\ln Z_{\\mathbb{G}|_{V}}(q,\\beta) \\tag{1.4}\\] where \\(|V|\\) denotes the cardinality of \\(V\\). All thermodynamic functions of the system can be obtained via derivatives of the free energy. In the zero temperature anti-ferromagnetic case the function \\(f_{\\mathbb{G}|_{V}}(q,\\beta)\\) is usually called _the ground state entropy_ of the system. The Potts model, despite its simple formulation, is an intensely investigated subject. Besides its own interest as a statistical mechanics model, it has deep connections with several areas in theoretical physics, probability and combinatorics. In particular, Potts models on general graphs are strictly related to a typical combinatorial problem. As a matter of fact, the partition function of the Potts model with \\(q\\) states on a finite graph \\(G\\) is equal, in the zero temperature antiferromagnetic case, to the number of proper colorings with \\(q\\) colors of the graph \\(G\\), where a proper coloring is one in which adjacent vertices of the graph receive different colors. This number, viewed as a function of the number of colors \\(q\\), is actually a polynomial in the variable \\(q\\), known as the _chromatic polynomial_. On the other hand, the same partition function in the general case can be related to more general chromatic-type polynomials, known as _Tutte polynomials_ [19]. This beautiful connection between statistical mechanics and graph coloring problems, first discussed by Fortuin and Kasteleyn [7], has been extensively studied and continues to attract many researchers (see e.g. [1], [6], [13], [16], [17], [18], [20] and references therein). One of the main interests in statistical physics is to establish whether or not a given system exhibits _phase transitions_. This means searching for points in the interval \\(\\beta\\in[0,\\infty]\\) where some thermodynamic function (like e.g. the free energy defined above) is non-analytic. Now, functions such as (1.3) and (1.4) are manifestly analytic as long as \\(V\\) is a finite set. Hence phase transitions (i.e. non-analyticity) can arise only in the so called _infinite volume limit_ or _thermodynamic limit_. That is, the graph \\(\\mathbb{G}\\) is some countably infinite graph, usually a regular lattice, and the infinite volume limit \\[f_{\\mathbb{G}}(q,\\beta)=\\lim_{N\\to\\infty}\\frac{1}{|V_{N}|}\\ln Z_{\\mathbb{G}|_{V_{N}}}(q,\\beta) \\tag{1.5}\\] is taken along a sequence \\(V_{N}\\) of finite subsets of \\(\\mathbb{V}\\) such that, roughly speaking, \\(\\mathbb{G}|_{V_{N}}\\) increases in size equally in all directions. Typically, when \\(\\mathbb{V}\\) is \\(\\mathbb{Z}^{d}\\), the \\(V_{N}\\) are cubes of increasing side \\(L_{N}\\). There is a considerable amount of rigorous results about the thermodynamic limit and phase transitions for the Potts model on \\(\\mathbb{Z}^{d}\\) and other regular lattices, see e.g. the reviews [21] and, more recently, [18]. On the other hand, the study of thermodynamic limits of spin systems on infinite graphs which are not usual lattices has recently drawn the attention of many researchers (e.g. [2], [8], [9], [11] and references therein).
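A toy case where the limit (1.5) can be followed explicitly is the zero temperature antiferromagnetic model on a path with \\(n\\) vertices, whose chromatic polynomial is the well-known \\(Z_{n}(q)=q(q-1)^{n-1}\\); the entropy per site then converges to \\(\\ln(q-1)\\). The sketch below simply evaluates this convergence numerically.

```python
import math

q = 3
for n in (10, 100, 1000, 10000):
    # (1/n) ln Z_n(q), with Z_n(q) = q (q-1)^(n-1) for the n-vertex path
    f_n = (math.log(q) + (n - 1) * math.log(q - 1)) / n
    print(f"n = {n:>6}: (1/n) ln Z = {f_n:.6f}")
print(f"limit ln(q-1) = {math.log(q - 1):.6f}")
```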
Concerning specifically the antiferromagnetic Potts model and/or the chromatic polynomial on infinite graphs, the problem of the thermodynamic limit was first considered by Biggs [3], with further discussions in [4] and in [10]. Very recently Sokal [17] has shown that for any _finite graph_ \\(G\\) with maximum degree \\(\\Delta\\), the zeros of the chromatic polynomial lie in a disk \\(|q|\\leq C\\Delta\\), where \\(C\\) is a constant. An important extension of this result would be to prove the existence and analyticity of the limiting free energy per unit volume (1.5) for a suitable class, as wide as possible, of infinite graphs. Such a generalization would be relevant from the statistical mechanics point of view, since it would imply that the anti-ferromagnetic Potts model on such a class of infinite graphs, if \\(q\\) is sufficiently large, does not present a phase transition at zero temperature (and hence at any temperature). In this respect, Shrock and Tsai have explicitly formulated a conjecture [14] (see also [10]), according to which the ground state entropy per unit volume of the antiferromagnetic Potts model at zero temperature on an infinite graph \\(\\mathbb{G}\\) should be analytic in a neighborhood of \\(1/q=0\\) whenever \\(\\mathbb{G}\\) is a regular lattice. In this paper we actually prove that this conjecture is true not only for regular lattices, but for a much wider class of graphs. In particular we prove that the zero temperature ground state entropy is analytic near \\(1/q=0\\) for all infinite graphs which are quasi-transitive and amenable, and the limit may be evaluated along _any_ Folner sequence in \\(\\mathbb{V}\\). We stress that this result proves the Shrock conjecture in a considerably stronger formulation, since _all regular lattices_, either with the elementary cell made of one single vertex or of more than one vertex, are indeed quasi-transitive amenable graphs, but the class of quasi-transitive amenable graphs is actually much wider than that of regular lattices. The paper is organized as follows. In section 2 we introduce the notations used along the paper, and we state the main result (theorem 2). In section 3 we rephrase the problem in terms of a polymer expansion and prove a main technical result (lemma 4). In section 4 we prove a graph theory property (lemma 6) concerning quasi-transitive amenable graphs. Finally in section 5 we give the proof of the main result of the paper, i.e. theorem 2.

§2. **Some further notations and statement of the main result**

In general, if \\(V\\) is any finite set, we denote by \\(|V|\\) the number of elements of \\(V\\). The set \\(\\{1,2,\\ldots,n\\}\\) will be denoted shortly \\({\\rm I}_{n}\\). We denote by \\({\\rm P}_{2}(V)\\) the set of all subsets \\(U\\subset V\\) such that \\(|U|=2\\) and by \\({\\rm P}_{\\geq 2}(V)\\) the set of all _finite_ subsets \\(U\\subset V\\) such that \\(|U|\\geq 2\\). Given a countable set \\(V\\), and given \\(E\\subset{\\rm P}_{2}(V)\\), the pair \\(G=(V,E)\\) is called a _graph_ in \\(V\\). The elements of \\(V\\) are called _vertices_ of \\(G\\) and the elements of \\(E\\) are called _edges_ of \\(G\\). Given two graphs \\(G=(V,E)\\) and \\(G^{\\prime}=(V^{\\prime},E^{\\prime})\\) in \\(V\\), we say that \\(G^{\\prime}\\subset G\\) if \\(E^{\\prime}\\subset E\\) and \\(V^{\\prime}\\subset V\\). Given a graph \\(G=(V,E)\\), two vertices \\(x\\) and \\(y\\) in \\(V\\) are said to be _adjacent_ if \\(\\{x,y\\}\\in E\\).
The _degree_ \\(d_{x}\\) of a vertex \\(x\\in V\\) in \\(G\\) is the number of vertices \\(y\\) adjacent to \\(x\\). A graph \\(G=(V,E)\\) is said to be _locally finite_ if \\(d_{x}<+\\infty\\) for all \\(x\\in V\\), and it is said to be of _bounded degree_ if \\(\\max_{x\\in V}\\{d_{x}\\}\\leq\\Delta<\\infty\\). A graph \\(G=(V,E)\\) is said to be _connected_ if for any pair \\(B,C\\) of non-empty subsets of \\(V\\) such that \\(B\\cup C=V\\) and \\(B\\cap C=\\emptyset\\), there is an edge \\(e\\in E\\) such that \\(e\\cap B\\neq\\emptyset\\) and \\(e\\cap C\\neq\\emptyset\\). We denote by \\({\\cal G}_{V}\\) the set of all connected graphs with vertex set \\(V\\). If \\(V={\\rm I}_{n}\\) we use the notation \\({\\cal G}_{n}\\) in place of \\({\\cal G}_{{\\rm I}_{n}}\\). A _tree_ graph \\(\\tau\\) on \\(V\\) is a connected graph \\(\\tau\\in{\\cal G}_{V}\\) such that \\(|\\tau|=|V|-1\\), where \\(|\\tau|\\) denotes the number of edges of \\(\\tau\\). We denote by \\({\\cal T}_{V}\\) the set of all tree graphs on \\(V\\), and write shortly \\({\\cal T}_{n}\\) in place of \\({\\cal T}_{{\\rm I}_{n}}\\). Let \\(\\mathbf{R}_{n}\\equiv(R_{1},\\ldots,R_{n})\\) be an ordered \\(n\\)-tuple of non-empty sets; then we denote by \\(E(\\mathbf{R}_{n})\\) the subset of \\(\\mathrm{P}_{2}({\\rm I}_{n})\\) defined as \\(E(\\mathbf{R}_{n})=\\{\\{i,j\\}\\in\\mathrm{P}_{2}({\\rm I}_{n}):\\,R_{i}\\cap R_{j}\\neq\\emptyset\\}\\), and we denote by \\(G(\\mathbf{R}_{n})\\) the graph \\(({\\rm I}_{n},E(\\mathbf{R}_{n}))\\). Given two distinct vertices \\(x\\) and \\(y\\) of \\(G=(V,E)\\), a _path_ \\(\\tau(x,y)\\) joining \\(x\\) to \\(y\\) is a _tree_ subgraph of \\(G\\) with \\(d_{x}=d_{y}=1\\) and \\(d_{z}=2\\) for any vertex \\(z\\) in \\(\\tau(x,y)\\) distinct from \\(x\\) and \\(y\\). We define the _distance_ between \\(x\\) and \\(y\\) as \\(|x-y|=\\min\\{|\\tau(x,y)|:\\tau(x,y)\\text{ path in }G\\}\\). Remark that \\(|x-y|=1\\Leftrightarrow\\{x,y\\}\\in E\\). Given \\(G=(V,E)\\) connected and \\(R\\subset V\\), let \\(E|_{R}=\\{\\{x,y\\}\\in E:x\\in R,y\\in R\\}\\) and define the graph \\(G|_{R}=(R,E|_{R})\\). Note that \\(G|_{R}\\) is a sub-graph of \\(G\\). We call \\(G|_{R}\\) _the restriction of \\(G\\) to \\(R\\)_. We say that \\(R\\subset V\\) _is connected_ if \\(G|_{R}\\) is connected. For any non-empty \\(R\\subset V\\), we further denote by \\(\\partial R\\) the _external boundary_ of \\(R\\), which is the subset of \\(V\\backslash R\\) given by \\[\\partial R=\\{y\\in V\\backslash R:\\exists x\\in R:|x-y|=1\\} \\tag{2.1}\\] An _automorphism_ of a graph \\(G=(V,E)\\) is a bijective map \\(\\gamma:V\\to V\\) such that \\(\\{x,y\\}\\in E\\Rightarrow\\{\\gamma x,\\gamma y\\}\\in E\\). A graph \\(G=(V,E)\\) is called _transitive_ if, for any \\(x,y\\) in \\(V\\), there exists an automorphism \\(\\gamma\\) of \\(G\\) which maps \\(x\\) to \\(y\\). The graph \\(G\\) is called _quasi-transitive_ if \\(V\\) can be partitioned into finitely many sets \\(O_{1},\\ldots O_{s}\\) (vertex orbits) such that, for any \\(x,y\\in O_{i}\\), there exists an automorphism \\(\\gamma\\) of \\(G\\) which maps \\(x\\) to \\(y\\), and this holds for all \\(i=1,\\ldots,s\\). If \\(x\\in O_{i}\\) and \\(y\\in O_{i}\\) we say that \\(x\\) and \\(y\\) are equivalent. Remark that a locally finite quasi-transitive graph is necessarily of bounded degree. Roughly speaking, in a transitive graph all vertices are equivalent; in other words \\(G\\) "looks the same" to observers sitting in different vertices.
In a quasi-transitive graph there is a finite number of different types of vertices, and \\(G\\) "looks the same" to observers sitting in vertices of the same type. As an immediate example, all periodic lattices with the elementary cell made of one site (e.g. the square lattice, triangular lattice, hexagonal lattice, etc.) are transitive infinite graphs, while periodic lattices with the elementary cell made of more than one site are quasi-transitive infinite graphs. Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) be a connected infinite graph. \\(\\mathbb{G}\\) is said to be _amenable_ if \\[\\inf\\left\\{\\frac{|\\partial W|}{|W|}:W\\subset\\mathbb{V},\\ 0<|W|<+\\infty\\right\\}=0\\] A sequence \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\) of finite subsets of \\(\\mathbb{V}\\) is called a _Folner sequence_ if \\[\\lim_{N\\to\\infty}\\frac{|\\partial V_{N}|}{|V_{N}|}=0 \\tag{2.2}\\] From now on \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) will denote a connected locally finite infinite graph and \\(V_{N}\\subset\\mathbb{V}\\) a _finite_ subset. The partition function of the _antiferromagnetic_ Potts model with \\(q\\) colours on \\(\\mathbb{G}|_{V_{N}}\\) _at zero temperature_ can be rewritten (in slightly different notation with respect to (1.1)) as \\[Z_{\\mathbb{G}|_{V_{N}}}(q)=\\sum_{\\sigma_{V_{N}}}\\exp\\left\\{-\\sum_{\\{x,y\\}\\in \\mathrm{P}_{2}(V_{N})}J_{xy}\\delta_{\\sigma_{x}\\sigma_{y}}\\right\\} \\tag{2.3}\\] where \\[J_{xy}=\\cases{+\\infty&if $|x-y|=1$\\cr 0&otherwise\\cr} \\tag{2.4}\\] We stress again that, due to assumption (2.4) (i.e. antiferromagnetic interaction \\(+\\) zero temperature), the function \\(Z_{\\mathbb{G}|_{V_{N}}}(q)\\) represents the number of ways that the vertices \\(x\\in V_{N}\\) of \\(\\mathbb{G}|_{V_{N}}\\) can be assigned "colors" from the set \\(\\{1,2,\\ldots,q\\}\\) in such a way that adjacent vertices always receive different colors. We also recall that the function \\(Z_{\\mathbb{G}|_{V_{N}}}(q)\\) is called, in the graph theory language, the _chromatic polynomial_ of \\(\\mathbb{G}|_{V_{N}}\\). **Definition 1**. _Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) be a connected and locally finite infinite graph and let \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\) be a Folner sequence of subsets of \\(\\mathbb{V}\\). Then we define, if it exists, the ground state specific entropy of the antiferromagnetic Potts model at zero temperature on \\(\\mathbb{G}\\) as_ \\[S_{\\mathbb{G}}(q)=\\lim_{N\\to\\infty}\\frac{1}{|V_{N}|}\\ln Z_{\\mathbb{G}|_{V_{N}}}(q) \\tag{2.5}\\] _We also define the reduced ground state degeneracy per site as_ \\[W_{r}(\\mathbb{G},q)=\\frac{1}{q}\\lim_{N\\to\\infty}\\left[Z_{\\mathbb{G}|_{V_{N}}}(q)\\right]^{\\frac{1}{|V_{N}|}} \\tag{2.6}\\] The ground state specific entropy \\(S_{\\mathbb{G}}(q)\\) and the reduced ground state degeneracy \\(W_{r}(\\mathbb{G},q)\\) are directly related by the identity \\[S_{\\mathbb{G}}(q)=\\ln W_{r}(\\mathbb{G},q)+\\ln q \\tag{2.7}\\] We can now state our main result. **Theorem 2**. _Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) be a locally finite connected quasi-transitive amenable infinite graph with maximum degree \\(\\Delta\\), and let \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\) be a Folner sequence in \\(\\mathbb{G}\\).
Then, \\(W_{r}(\\mathbb{G},q)\\) exists, is finite, is independent of the choice of the sequence \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\), and is analytic in the variable \\(1/q\\) whenever \\(|1/q|<1/(2e^{3}\\Delta)\\) (\\(e\\) being the basis of the natural logarithm)._ Again we stress that this result proves the Shrock conjecture in a considerably stronger formulation, since any regular lattice is a quasi-transitive amenable graph but the class of quasi-transitive amenable graphs is actually much wider than that of regular lattices. We remark also that the proof of the analyticity of \\(W_{r}(\\mathbb{G},q)\\) requires proving the analyticity and boundedness of the function \\(|V_{N}|^{-1}\\ln Z_{\\mathbb{G}|_{V_{N}}}(q)\\) for any finite graph \\(\\mathbb{G}|_{V_{N}}\\) in a disk \\(|1/q|<1/(C\\Delta)\\) _uniformly in the volume \\(V_{N}\\)_, which is a stronger statement than theorem 5.1 in [17], claiming that the zeros of the function \\(Z_{\\mathbb{G}|_{V_{N}}}(q)\\) lie in the disk \\(|q|<C\\Delta\\) for any finite \\(\\mathbb{G}|_{V_{N}}\\) with maximum degree \\(\\Delta\\).

## 3. Polymer expansion and analyticity

We first rewrite the partition function of the Potts model on a generic _finite_ graph \\(G=(V,E)\\) as the grand canonical partition function of a hard core polymer gas. Without loss of generality, we will assume in this section that \\(G\\) is a sub-graph of a bounded degree infinite graph \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) with maximum degree \\(\\Delta\\). Denote by \\(\\pi(V)\\) the set of all unordered partitions of \\(V\\), i.e. an element of \\(\\pi(V)\\) is an unordered \\(n\\)-tuple \\(\\{R_{1},R_{2},\\ldots,R_{n}\\}\\), with \\(1\\leq n\\leq|V|\\), such that, for all \\(i,j\\in{\\rm I}_{n}\\) with \\(i\\neq j\\): \\(R_{i}\\subset V\\), \\(R_{i}\\neq\\emptyset\\), \\(R_{i}\\cap R_{j}=\\emptyset\\), and \\(\\cup_{i=1}^{n}R_{i}=V\\). Then, by writing the factor \\(\\exp\\{-\\sum_{\\{x,y\\}\\in{\\rm P}_{2}(V)}\\delta_{\\sigma_{x}\\sigma_{y}}J_{xy}\\}\\) in (2.3) as \\(\\prod_{\\{x,y\\}\\in{\\rm P}_{2}(V)}[(\\exp\\{-\\delta_{\\sigma_{x}\\sigma_{y}}J_{xy}\\}-1)+1]\\) and developing the product (a standard Mayer expansion procedure, see e.g. [5]), we can rewrite the partition function (2.3) on \\(G\\) as \\[Z_{G}(q)=q^{|V|}\\Xi_{G}(q) \\tag{3.1}\\] where \\[\\Xi_{G}(q)=\\sum_{n\\geq 1}\\sum_{\\{R_{1},\\ldots,R_{n}\\}\\in\\pi(V)}\\rho(R_{1})\\ldots \\rho(R_{n}) \\tag{3.2}\\] with \\[\\rho(R)=\\left\\{\\begin{aligned} & 1&\\text{if }|R|=1\\\\ & q^{-|R|}\\sum\\limits_{\\sigma_{R}\\in\\Gamma_{R}}\\sum\\limits_{ \\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(R)\\\\ (R,E^{\\prime})\\in\\mathcal{G}_{R}\\end{subarray}}\\prod\\limits_{\\{x,y\\}\\in E^{ \\prime}}[e^{-\\delta_{\\sigma_{x}\\sigma_{y}}J_{xy}}-1]&\\text{if }|R|\\geq 2\\text{ and }\\mathbb{G}|_{R}\\in \\mathcal{G}_{R}\\\\ & 0&\\text{if }|R|\\geq 2\\text{ and }\\mathbb{G}|_{R}\\notin \\mathcal{G}_{R}\\end{aligned}\\right. \\tag{3.3}\\] Observe that the sum in the r.h.s. of (3.3) runs over all possible connected graphs with vertex set \\(R\\). The r.h.s.
of (3.2) can be written in a more compact way, by using the short notations \\[\\mathbf{R}_{n}\\equiv(R_{1},\\ldots,R_{n})\\hskip 28.452756pt;\\hskip 28.452756pt \\rho(\\mathbf{R}_{n})\\equiv\\rho(R_{1})\\cdots\\rho(R_{n})\\] as \\[\\Xi_{G}(q)=1+\\sum_{n\\geq 1}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n} \\in[\\mathrm{P}_{\\geq 2}(V)]^{n}\\\\ R_{i}\\cap R_{j}=\\emptyset\\;\\forall\\;\\{i,j\\}\\subset\\mathrm{I}_{n}\\end{subarray}}\\rho( \\mathbf{R}_{n}) \\tag{3.4}\\] where \\([\\mathrm{P}_{\\geq 2}(V)]^{n}\\) denotes the \\(n\\)-times Cartesian product of \\(\\mathrm{P}_{\\geq 2}(V)\\) (which, we recall, denotes the set of all finite subsets of \\(V\\) with cardinality at least \\(2\\)). It is also convenient to simplify the expression for the activity (3.3) by performing the sum over \\(\\sigma_{R}\\). As a matter of fact \\[q^{-|R|}\\sum_{\\sigma_{R}\\in\\Gamma_{R}}\\sum_{\\begin{subarray}{c}E^{\\prime} \\subset\\mathrm{P}_{2}(R)\\\\ (R,E^{\\prime})\\in\\mathcal{G}_{R}\\end{subarray}}\\prod_{\\{x,y\\}\\in E^{\\prime}}[ e^{-\\delta_{\\sigma_{x}\\sigma_{y}}J_{xy}}-1]=q^{-|R|}\\sum_{\\sigma_{R}\\in\\Gamma_{R}} \\sum_{\\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(R)\\\\ (R,E^{\\prime})\\in\\mathcal{G}_{R}\\end{subarray}}\\prod_{\\{x,y\\}\\in E^{\\prime}} \\delta_{\\sigma_{x}\\sigma_{y}}[e^{-J_{xy}}-1]=\\] \\[=q^{-|R|}\\sum_{\\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(R)\\\\ (R,E^{\\prime})\\in\\mathcal{G}_{R}\\end{subarray}}\\left[\\sum_{\\sigma_{R}\\in\\Gamma _{R}}\\prod_{\\{x,y\\}\\in E^{\\prime}}\\delta_{\\sigma_{x}\\sigma_{y}}\\right]\\prod_{ \\{x,y\\}\\in E^{\\prime}}[e^{-J_{xy}}-1]\\] But now, for any connected graph \\((R,E^{\\prime})\\in\\mathcal{G}_{R}\\), \\[\\sum_{\\sigma_{R}\\in\\Gamma_{R}}\\prod_{\\{x,y\\}\\in E^{\\prime}}\\delta_{\\sigma_{x} \\sigma_{y}}=q\\] Hence we get, for \\(|R|>1\\), \\[\\rho(R)=\\left\\{\\begin{aligned} & q^{-(|R|-1)}\\sum\\limits_{ \\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(R)\\\\ (R,E^{\\prime})\\in\\mathcal{G}_{R}\\end{subarray}}\\prod\\limits_{\\{x,y\\}\\in E^{ \\prime}}[e^{-J_{xy}}-1]&\\text{if }\\mathbb{G}|_{R}\\in\\mathcal{G}_{R}\\\\ & 0&\\text{otherwise}\\end{aligned}\\right. \\tag{3.5}\\] By definition (3.5) (or (3.3)), the polymer activity \\(\\rho(R)\\) can be viewed as a real-valued function defined on any finite subset \\(R\\) of \\(\\mathbb{V}\\). Of course this function depends on the "topological structure" of \\(\\mathbb{G}\\). We remark that if \\(\\gamma\\) is an automorphism of \\(\\mathbb{G}\\), then (3.5) clearly implies that \\(\\rho(\\gamma R)=\\rho(R)\\). In other words the activity \\(\\rho(R)\\) is invariant under automorphisms of \\(\\mathbb{G}\\). The function \\(\\Xi_{G}(q)\\) is the standard grand canonical partition function of a _hard core polymer gas_ in which the polymers are finite subsets \\(R\\subset V\\) with cardinality at least \\(2\\), with _activity_ \\(\\rho(R)\\), subject to a _hard core_ condition (\\(R_{i}\\cap R_{j}=\\emptyset\\) for any pair \\(\\{i,j\\}\\subset\\mathrm{I}_{n}\\)). Note that by (3.1) and definitions (2.6)-(2.7) we have \\[W_{r}(\\mathbb{G},q)=\\exp\\left\\{\\lim_{N\\to\\infty}\\frac{1}{|V_{N}|}\\ln\\Xi_{ \\mathbb{G}|_{V_{N}}}(q)\\right\\} \\tag{3.6}\\] It is a well known fact in statistical mechanics that the natural logarithm of \\(\\Xi_{G}\\) can be rewritten as a formal series, called the _Mayer series_ (see e.g.
[5]) as \\[\\ln\\Xi_{G}(q)=\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\mathbf{R}_{n}\\in[\\mathrm{P}_{\\geq 2}(V)]^{n}}\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n}) \\tag{3.7}\\] where \\[\\phi^{T}(\\mathbf{R}_{n})=\\left\\{\\begin{array}{ll}\\sum_{E^{\\prime}\\subset E( \\mathbf{R}_{n})\\atop(\\mathrm{I}_{n},E^{\\prime})\\in\\mathcal{G}_{n}}(-1)^{|E^{\\prime}|}&\\mbox{if }G(\\mathbf{R}_{n})\\in\\mathcal{G}_{n}\\\\ 0&\\mbox{otherwise}\\end{array}\\right. \\tag{3.8}\\] and \\(G(\\mathbf{R}_{n})\\equiv G(R_{1},\\ldots,R_{n})\\) is the graph defined at the beginning of section 2. The reader should note that the summation in the r.h.s. of (3.4) is actually a _finite sum_. On the contrary, the summation in the r.h.s. of (3.7) is an _infinite series_. We conclude this section by proving two important technical lemmas concerning precisely the convergence of the series (3.7). In the proof of both lemmas we will use a well known combinatorial inequality due to Rota [12], which states that if \\(G=(V,E)\\) is a connected graph, i.e. \\(G\\in\\mathcal{G}_{V}\\), then \\[\\left|\\sum_{E^{\\prime}\\subset E:\\atop(V,E^{\\prime})\\in\\mathcal{G}_{V}}(-1)^{ |E^{\\prime}|}\\right|\\leq\\sum_{E^{\\prime}\\subset E:\\atop(V,E^{\\prime})\\in \\mathcal{T}_{V}}1=N_{\\mathcal{T}_{V}}[G] \\tag{3.9}\\] where \\(N_{\\mathcal{T}_{V}}[G]\\) is the number of tree graphs with vertex set \\(V\\) which are sub-graphs of \\(G\\). **Lemma 3**. _Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) be a bounded degree infinite graph with maximum degree \\(\\Delta\\), and let, for any \\(R\\subset\\mathbb{V}\\) such that \\(|R|\\geq 2\\), the activity \\(\\rho(R)\\) be given as in (3.5). Then, for any \\(n\\geq 2\\)_ \\[\\sup_{x\\in\\mathbb{V}}\\sum_{R\\subset\\mathbb{V}:\\ x\\in R\\atop|R|=n}|\\rho(R)|\\leq \\left[\\frac{e\\Delta}{|q|}\\right]^{n-1} \\tag{3.10}\\] **Proof.** By definition \\[\\sup_{x\\in\\mathbb{V}}\\sum_{R\\subset\\mathbb{V}:\\ x\\in R\\atop|R|=n}|\\rho(R)|=|q|^{-(n-1)}\\sup_{x\\in\\mathbb{V}}\\sum_{R\\subset\\mathbb{V}:\\ x\\in R\\atop|R|=n,\\ \\mathbb{G}|_{R}\\in\\mathcal{G}_{R}}\\left|\\sum_{E^{\\prime}\\subset\\mathcal{P}_{2}(R)\\atop(R,E^{\\prime})\\in\\mathcal{G}_{R}}\\prod_{\\{x,y\\}\\in E^{\\prime}}[e^{-J_{xy}}-1]\\right| \\tag{3.11}\\]
Hence \\[\\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}R\\subset\\mathbb{V}:\\ x\\in R\\\\ |R|=n,\\ \\delta_{1}\\in\\mathcal{G}_{R}\\end{subarray}}|\\rho(R)|\\leq|q|^{-(n-1)} \\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}R\\subset\\mathbb{V}:\\ x\\in R\\\\ |R|=n\\end{subarray}}\\sum_{\\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(R )\\\\ (R,E^{\\prime})\\in\\mathcal{T}_{R}\\end{subarray}}\\prod_{\\{x,y\\}\\in E^{\\prime}} \\delta_{|x-y|1}\\leq\\] \\[\\leq\\frac{|q|^{-(n-1)}}{(n-1)!}\\sum_{\\begin{subarray}{c}E^{\\prime}\\subset \\mathrm{P}_{2}(\\mathrm{I}_{n})\\\\ (\\mathrm{I}_{n},E^{\\prime})\\in\\mathcal{T}_{R}\\end{subarray}}\\left[\\sup_{x\\in \\mathbb{V}}\\sum_{\\begin{subarray}{c}x_{1}=x,\\ (x_{2},\\ldots,x_{n})\\in\\mathbb{V}^{n-1}\\\\ x_{i}\ eq x_{j}\\ \\forall\\{i,j\\}\\in\\mathrm{I}_{n}\\end{subarray}}\\prod_{\\{i,j\\}\\in E^{ \\prime}}\\delta_{|x_{i}-x_{j}|1}\\right]\\] It is now easy to check that, for any \\(E^{\\prime}\\subset\\mathrm{P}_{2}(\\mathrm{I}_{n})\\) such that \\((I_{n},E^{\\prime})\\) is a tree, it holds \\[\\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}x_{1}=x,\\ (x_{2},\\ldots,x_{n})\\in\\mathbb{V}^{n-1}\\\\ x_{i}\ eq x_{j}\\ \\forall\\{i,j\\}\\in\\mathrm{I}_{n}\\end{subarray}}\\prod_{\\{i,j\\}\\in E^{ \\prime}}\\delta_{|x_{i}-x_{j}|1}\\leq\\frac{\\Delta^{n-1}}{(n-1)!}\\] and since, by Cayley formula, \\(\\sum_{\\begin{subarray}{c}E^{\\prime}\\subset\\mathrm{P}_{2}(\\mathrm{I}_{n})\\\\ (R,E^{\\prime})\\in\\mathcal{T}_{R}\\end{subarray}}1=n^{n-2}\\), we get \\[\\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}R\\subset\\mathbb{V}:\\ x\\in R\\\\ |R|=n\\end{subarray}}|\\rho(R)|\\leq\\left(\\frac{\\Delta}{|q|}\\right)^{n-1}\\frac{n^{ n-2}}{(n-1)!}\\leq\\left[\\frac{e\\Delta}{|q|}\\right]^{n-1}\\] \\(\\Box\\) To enunciate the second lemma we need to introduce a formal series more general that l.h.s. of (3.7). Let thus \\(U\\subset\\mathbb{V}\\) finite and let \\(m\\) a positive integer. We define \\[\\mathcal{S}_{U}^{m}(\\mathbb{G},q)=\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{ \\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathbb{P}_{\\geq 2}(\\mathbb{V})]^{n}\\\\ |\\mathbf{R}_{n}|\\geq m,\\ R_{1}\\cap U\ eq 0\\end{subarray}}\\phi^{T}(\\mathbf{R}_{n}) \\rho(\\mathbf{R}_{n}) \\tag{3.12}\\] where \\(|\\mathbf{R}_{n}|=\\sum_{i=1}^{n}|R_{i}|\\) and recall that \\(\\mathrm{P}_{\\geq 2}(\\mathbb{V})\\) denotes the set of all finite subsets of \\(\\mathbb{V}\\) with cardinality greater or equal than \\(2\\) and \\([\\mathrm{P}_{\\geq 2}(\\mathbb{V})]^{n}\\) denote the \\(n\\)-times Cartesian product. We will now prove the following: **Lemma 4**. _Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) a locally finite infinite graph with maximum degree \\(\\Delta\\). Let \\(U\\subset\\mathbb{V}\\) finite and let \\(m\\) a positive integer. Then \\(\\mathcal{S}_{U}^{m}(\\mathbb{G},q)\\) defined in (3.12) exists and is analytic as a function of \\(1/q\\) in the disk \\(\\ |2\\Delta e^{3}/q|<1\\). Moreover it satisfies the following bound_ \\[|\\mathcal{S}_{U}^{m}(\\mathbb{G})|\\leq|U|\\frac{1}{1-\\sqrt{2e^{3}|\\Delta/q|}} \\left|2e^{3}\\frac{\\Delta}{q}\\right|^{m/2}\\] **Proof**. We will prove the theorem by showing directly that the r.h.s. of (3.12) converge absolutely when \\(|1/q|\\) is sufficiently small. 
Let us define \\[|\\mathcal{S}|_{U}^{m}(\\mathbb{G})=\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{ \\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathrm{P}_{\\geq 2}(\\mathbb{V})]^{n}\\\\ |\\mathbf{R}_{n}|\\geq m,\\ R_{1}\\cap U\\neq\\emptyset\\end{subarray}}|\\phi^{T}( \\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})| \\tag{3.13}\\] then \\(|\\mathcal{S}_{U}^{m}(\\mathbb{G})|\\leq|\\mathcal{S}|_{U}^{m}(\\mathbb{G})\\). We now bound \\(|\\mathcal{S}|_{U}^{m}(\\mathbb{G})\\). We have: \\[|\\mathcal{S}|_{U}^{m}(\\mathbb{G})\\leq\\sum_{s=m}^{\\infty}\\sum_{n=1}^{[s/2]} \\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathrm{P}_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap U\\neq\\emptyset,\\ |\\mathbf{R}_{n}|=s\\end{subarray}}|\\phi^{T}(\\mathbf{R}_{n}) \\rho(\\mathbf{R}_{n})|=\\sum_{s=m}^{\\infty}\\sum_{n=1}^{[s/2]}\\frac{1}{n!}\\sum_{ \\begin{subarray}{c}\\mathbf{k}_{n}\\in\\mathbb{N}^{n};\\ k_{i}\\geq 2\\\\ k_{1}+\\dots+k_{n}=s\\end{subarray}}B_{n}(\\mathbf{k}_{n})\\] where \\(\\mathbf{k}_{n}\\equiv(k_{1},\\dots,k_{n})\\), \\(\\mathbb{N}^{n}\\) denotes the \\(n\\)-times Cartesian product of \\(\\mathbb{N}\\), \\([s/2]=\\max\\{\\ell\\in\\mathbb{N}:\\ell\\leq s/2\\}\\), and \\[B_{n}(\\mathbf{k}_{n})=\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathrm{P}_{ \\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap U\\neq\\emptyset\\\\ |R_{1}|=k_{1},\\dots,|R_{n}|=k_{n}\\end{subarray}}|\\phi^{T}(\\mathbf{R}_{n})\\rho (\\mathbf{R}_{n})|\\] Recalling now (3.8) and using again the Rota bound (3.9), we get \\[|\\phi^{T}(\\mathbf{R}_{n})|\\ \\begin{cases}\\leq N_{\\mathcal{T}_{n}}[G(\\mathbf{R}_{n}) ]&\\text{if }G(\\mathbf{R}_{n})\\in\\mathcal{G}_{n}\\\\ =0&\\text{otherwise}\\end{cases}\\] Hence \\[B_{n}(\\mathbf{k}_{n})\\leq\\sum_{G\\in\\mathcal{G}_{n}}N_{\\mathcal{T}_{n}}[G]\\sum _{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathrm{P}_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap U\\neq\\emptyset,\\ G(\\mathbf{R}_{n})=G\\\\ |R_{1}|=k_{1},\\dots,|R_{n}|=k_{n}\\end{subarray}}|\\rho(\\mathbf{R}_{n})| \\tag{3.14}\\] Observing now that \\[\\sum_{G\\in\\mathcal{G}_{n}}N_{\\mathcal{T}_{n}}[G](\\cdots)=\\sum_{\\tau\\in \\mathcal{T}_{n}}\\sum_{G\\in\\mathcal{G}_{n}:\\ G\\supset\\tau}(\\cdots)\\] we can rewrite \\[B_{n}(\\mathbf{k}_{n})\\leq\\sum_{\\tau\\in\\mathcal{T}_{n}}B_{n}(\\tau,\\mathbf{k}_{ n}) \\tag{3.15}\\] where \\[B_{n}(\\tau,\\mathbf{k}_{n})=\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[\\mathrm{P}_{ \\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap U\\neq\\emptyset,\\ G(\\mathbf{R}_{n})\\supset\\tau\\\\ |R_{1}|=k_{1},\\dots,|R_{n}|=k_{n}\\end{subarray}}|\\rho(\\mathbf{R}_{n})|\\] Note now that for any non-negative function \\(F(R)\\) it holds \\[\\sum_{\\begin{subarray}{c}R\\subset\\mathbb{V}:R\\cap R^{\\prime}\\neq\\emptyset\\\\ |R|=k\\end{subarray}}F(R)\\leq|R^{\\prime}|\\sup_{x\\in\\mathbb{V}}\\sum_{ \\begin{subarray}{c}R\\subset\\mathbb{V}:\\ x\\in R\\\\ |R|=k\\end{subarray}}F(R) \\tag{3.16}\\] Hence we can now estimate \\(B_{n}(\\tau,\\mathbf{k}_{n})\\) for any fixed \\(\\tau\\) by explicitly performing the sum over polymers \\(\\mathbf{R}_{n}\\) subject to the constraint \\(G(\\mathbf{R}_{n})\\supset\\tau\\), summing first over the "outermost polymers", i.e. those polymers \\(R_{i}\\) such that \\(i\\) is a vertex of degree 1 in \\(\\tau\\), and using repeatedly the bound (3.16).
Then one can easily check that \\[B_{n}(\\tau,\\mathbf{k}_{n})\\leq|U|\\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}R_{1}\\subset\\mathbb{V}\\\\ x\\in R_{1},\\ |R_{1}|=k_{1}\\end{subarray}}\\cdots\\sup_{x\\in\\mathbb{V}}\\sum_{\\begin{subarray}{c}R_{n}\\subset\\mathbb{V}\\\\ x\\in R_{n},\\ |R_{n}|=k_{n}\\end{subarray}}|\\rho(R_{1})||R_{1}|^{d_{1}}\\prod_{i=2}^{n}\\Big{[}|R_{i}|^{d_{i}-1}|\\rho(R_{i})|\\Big{]} \\tag{3.17}\\] where \\(d_{i}\\) is the degree of the vertex \\(i\\) in \\(\\tau\\). Recall that, for any tree \\(\\tau\\in\\mathcal{T}_{n}\\), it holds \\(1\\leq d_{i}\\leq n-1\\) and \\(d_{1}+\\ldots+d_{n}=2n-2\\). Now, by lemma 3, i.e. (3.10), we can bound \\[B_{n}(\\tau,\\mathbf{k}_{n})\\leq|U|\\varepsilon^{k_{1}-1}k_{1}^{d_{1}}\\prod_{i=2}^{n}\\Big{[}k_{i}^{d_{i}-1}\\varepsilon^{k_{i}-1}\\Big{]} \\tag{3.18}\\] where we have put for simplicity \\(\\varepsilon=e\\Delta/|q|\\). Noting that the bound (3.18) depends only on the degrees \\(d_{1},\\ldots,d_{n}\\) of the vertices in \\(\\tau\\), we can now easily sum over all tree graphs in \\(\\mathcal{T}_{n}\\) and obtain \\[B_{n}(\\mathbf{k}_{n})\\leq\\sum_{\\tau\\in\\mathcal{T}_{n}}B_{n}(\\tau,\\mathbf{k}_{n})=\\sum_{\\begin{subarray}{c}r_{1},\\ldots,r_{n}\\\\ r_{1}+\\ldots+r_{n}=2n-2\\\\ 1\\leq r_{i}\\leq n-1\\end{subarray}}\\ \\sum_{\\begin{subarray}{c}\\tau\\in\\mathcal{T}_{n}\\\\ d_{i}=r_{i}\\ \\forall i\\end{subarray}}B_{n}(\\tau,\\mathbf{k}_{n})\\leq|U|\\sum_{\\begin{subarray}{c}r_{1},\\ldots,r_{n}\\\\ r_{1}+\\ldots+r_{n}=2n-2\\\\ 1\\leq r_{i}\\leq n-1\\end{subarray}}(n-2)!\\,k_{1}\\prod_{i=1}^{n}\\Bigg{[}\\frac{k_{i}^{r_{i}-1}}{(r_{i}-1)!}\\varepsilon^{k_{i}-1}\\Bigg{]}\\] where in the last inequality we used the bound (3.18) and the Cayley formula \\[\\sum_{\\begin{subarray}{c}\\tau\\in\\mathcal{T}_{n}\\\\ d_{1},\\ldots,d_{n}\\ \\mathrm{fixed}\\end{subarray}}1=\\frac{(n-2)!}{\\prod_{i=1}^{n}(d_{i}-1)!} \\tag{3.19}\\] Now, recalling that \\(k_{1}+\\ldots+k_{n}=s\\) and using the Newton multinomial formula, we get \\[B_{n}(\\mathbf{k}_{n})\\leq|U|k_{1}s^{n-2}\\varepsilon^{s-n}\\leq|U|s^{n}\\varepsilon^{s-n}\\] Thus, since the number of \\(n\\)-tuples \\(\\mathbf{k}_{n}\\) with \\(k_{i}\\geq 2\\) and \\(k_{1}+\\ldots+k_{n}=s\\) is at most \\(2^{s-n}\\), and since \\(\\varepsilon^{s-n}\\leq\\varepsilon^{s/2}\\) for \\(n\\leq[s/2]\\) (assuming \\(\\varepsilon<1\\)), we obtain \\[|\\mathcal{S}|_{U}^{m}(\\mathbb{G})\\leq|U|\\sum_{s=m}^{\\infty}\\varepsilon^{s/2}\\,2^{s}\\sum_{n=1}^{[s/2]}\\frac{s^{n}}{2^{n}n!}\\leq|U|\\sum_{s=m}^{\\infty}\\left[4e\\varepsilon\\right]^{s/2}\\leq|U|\\frac{\\left[2e^{2}\\varepsilon\\right]^{m/2}}{1-e\\sqrt{2\\varepsilon}}\\] provided \\[2e^{2}\\varepsilon<1\\] Hence, recalling that \\(\\varepsilon=e\\Delta/|q|\\), the lemma is proved. \\(\\Box\\) The following corollary is now a trivial consequence of the two lemmas above. **Corollary 5**. _Let \\(G=(V,E)\\) be any finite connected sub-graph of an infinite connected bounded degree graph \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) with maximum degree \\(\\Delta\\). Then the function \\(|V|^{-1}\\ln\\Xi_{G}(q)\\) is analytic in the variable \\(1/q\\) for \\(|1/q|<1/(2e^{3}\\Delta)\\) and it admits the following bound uniformly in \\(|V|\\):_ \\[\\left|\\frac{1}{|V|}\\ln\\Xi_{G}(q)\\right|\\leq\\frac{1}{1-\\sqrt{2e^{3}|\\Delta/q|}}\\left|2e^{3}\\frac{\\Delta}{q}\\right|\\] **Proof**. For any \\(G=(V,E)\\subset\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) with \\(V\\) finite, by definitions (3.7) and (3.12), it holds that \\(|\\ln\\Xi_{G}(q)|\\leq|\\mathcal{S}|_{V}^{2}(\\mathbb{G})\\), and one can thus apply lemma 4. \\(\\Box\\)

§4. **A graph theory lemma**

**Lemma 6**. _Let \\(\\mathbb{G}=(\\mathbb{V},\\mathbb{E})\\) be a locally finite quasi-transitive infinite graph and let \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\) be a Folner sequence of finite subsets of \\(\\mathbb{V}\\). Then, for every vertex orbit \\(O\\subset\\mathbb{V}\\) of \\(\\mathrm{Aut}(\\mathbb{G})\\), there exists a non-zero finite limit_ \\[\\lim_{N\\to\\infty}\\frac{|O\\cap V_{N}|}{|V_{N}|}\\] _and it is independent of the choice of the sequence \\(\\{V_{N}\\}_{N\\in\\mathbb{N}}\\)._ **Proof**.
§4. **A graph theory lemma**

**Lemma 6**. _Let \\({\\mathbb{G}}=({\\mathbb{V}},{\\mathbb{E}})\\) be a locally finite quasi-transitive infinite graph and let \\(\\{V_{N}\\}_{N\\in{\\mathbb{N}}}\\) be a Følner sequence of finite subsets of \\({\\mathbb{V}}\\). Then, for every vertex orbit \\(O\\subset{\\mathbb{V}}\\) of \\({\\rm Aut}({\\mathbb{G}})\\), there exists a non-zero finite limit_ \\[\\lim_{N\\to\\infty}\\frac{|O\\cap V_{N}|}{|V_{N}|}\\] _and it is independent of the choice of the sequence \\(\\{V_{N}\\}_{N\\in{\\mathbb{N}}}\\)._ **Proof**. For a natural number \\(r\\) and a finite set \\(F\\subset{\\mathbb{V}}\\), denote by \\({\\rm B}_{r}(F)\\) the set \\[{\\rm B}_{r}(F)=\\{x\\in{\\mathbb{V}}:\\exists y\\in F\\ |x-y|\\leqslant r\\}\\] Thus, for a single-point set \\(\\{y\\}\\), \\({\\rm B}_{r}(\\{y\\})\\) is the ball of radius \\(r\\) centered at \\(y\\). Moreover, we have the bound \\[|{\\rm B}_{r}(F)|\\leqslant|F|(1+\\Delta+\\ldots+\\Delta^{r})\\leqslant\\Delta^{r+1}|F|\\] Let \\(O_{1},\\ldots,O_{s}\\) be the complete list of vertex orbits of \\({\\rm Aut}({\\mathbb{G}})\\) in the set \\({\\mathbb{V}}\\) and let \\(A_{0}\\subset{\\mathbb{V}}\\) be a set with exactly one element in common with every orbit. Denote by \\(d\\) the diameter of \\(A_{0}\\). Consider the orbit \\({\\cal A}=\\{gA_{0}:g\\in{\\rm Aut}({\\mathbb{G}})\\}\\) of \\(A_{0}\\). A set \\(A\\subset{\\mathbb{V}}\\) is therefore an element of \\({\\cal A}\\) if there exists a \\(g\\in{\\rm Aut}({\\mathbb{G}})\\) such that \\(A=gA_{0}\\). For any set \\(U\\subset{\\mathbb{V}}\\) we denote \\({\\cal A}_{U}=\\{A\\in{\\cal A}:A\\subset U\\}\\). Note that for any set \\(A\\in{\\cal A}\\) and any vertex orbit \\(O\\), we have that \\(|A\\cap O|=1\\), hence for a fixed vertex orbit \\(O_{i}\\) we can define the function \\(\\varphi_{i}\\) as follows: \\[\\varphi_{i}:{\\cal A}\\to O_{i}:A\\mapsto A\\cap O_{i}\\] The function \\(\\varphi_{i}\\) is a surjection and for \\(x\\in O_{i}\\) the number \\(k_{i}=|\\varphi_{i}^{-1}(x)|\\) is finite and does not depend on the choice of \\(x\\in O_{i}\\). For the sets \\[V_{N}^{-}=V_{N}\\backslash{\\rm B}_{d}(\\partial V_{N}),\\ \\ \\ \\ V_{N}^{+}=V_{N}\\cup{\\rm B}_{d}(\\partial V_{N})\\] we have \\(V_{N}^{-}\\subset V_{N}\\subset V_{N}^{+}\\) and \\[\\varphi_{i}^{-1}(V_{N}^{-}\\cap O_{i})\\subset{\\cal A}_{V_{N}}\\subset\\varphi_{i}^{-1}(V_{N}\\cap O_{i})\\subset{\\cal A}_{V_{N}^{+}} \\tag{4.5}\\] Indeed, suppose that \\(A\\in\\varphi_{i}^{-1}(V_{N}^{-}\\cap O_{i})\\) and \\(A\\not\\subset V_{N}\\). Let \\(a_{1}=\\varphi_{i}(A)\\in V_{N}^{-}\\subset V_{N}\\) and let \\(a_{2}\\in A\\backslash V_{N}\\). There exists a path \\(\\tau(a_{1},a_{2})\\) in \\({\\mathbb{G}}\\) of length \\(\\leqslant d\\). This path must have at least one point in \\(\\partial V_{N}\\). This implies \\(a_{1}\\in{\\rm B}_{d}(\\partial V_{N})\\), contradicting the assumption \\(a_{1}\\in V_{N}^{-}\\). The second inclusion of (4.5) is obvious and the third one is true for the same reason as the first one. Relation (4.5) implies \\[k_{i}|V_{N}^{-}\\cap O_{i}|\\leqslant|{\\cal A}_{V_{N}}|\\leqslant k_{i}|V_{N}\\cap O_{i}|\\leqslant|{\\cal A}_{V_{N}^{+}}| \\tag{4.6}\\] and \\[|V_{N}^{-}\\cap O_{i}|\\leqslant\\frac{1}{k_{i}}|{\\cal A}_{V_{N}}|\\leqslant|V_{N}\\cap O_{i}|\\leqslant\\frac{1}{k_{i}}|{\\cal A}_{V_{N}^{+}}| \\tag{4.7}\\] By taking the sum over \\(i\\) we get \\[|V_{N}^{-}|\\leqslant\\alpha|{\\cal A}_{V_{N}}|\\leqslant|V_{N}|\\leqslant\\alpha|{\\cal A}_{V_{N}^{+}}| \\tag{4.8}\\] where \\(\\alpha=\\sum_{i=1}^{s}\\frac{1}{k_{i}}\\). On the other hand, \\({\\cal A}_{V_{N}^{+}}\\setminus{\\cal A}_{V_{N}}\\subset{\\cal A}_{{\\rm B}_{d}(\\partial V_{N})}\\) and, by (4.3), \\[|{\\cal A}_{V_{N}^{+}}|\\leqslant|{\\cal A}_{V_{N}}|+|{\\cal A}_{{\\rm B}_{d}(\\partial V_{N})}|\\leqslant|{\\cal A}_{V_{N}}|+k\\Delta^{d+1}|\\partial V_{N}| \\tag{4.9}\\] where \\(k=\\max\\{k_{i}:i\\in\\{1,\\ldots,s\\}\\}\\). 
From the first inequality of (4.8) we have \\[|{\\cal A}_{V_{N}}|\\geqslant\\frac{1}{\\alpha}|V_{N}^{-}|\\geqslant\\frac{1}{\\alpha}(|V_{N}|-|{\\rm B}_{d}(\\partial V_{N})|)\\geqslant\\frac{1}{\\alpha}(|V_{N}|-\\Delta^{d+1}|\\partial V_{N}|) \\tag{4.10}\\] If \\(\\frac{|\\partial V_{N}|}{|V_{N}|}\\leqslant\\varepsilon\\) then, by (4.9) and (4.10), \\[1\\leqslant\\frac{|{\\cal A}_{V_{N}^{+}}|}{|{\\cal A}_{V_{N}}|}\\leqslant 1+\\frac{\\Delta^{d+1}|\\partial V_{N}|}{\\frac{1}{\\alpha}(|V_{N}|-\\Delta^{d+1}|\\partial V_{N}|)}\\leqslant 1+\\frac{\\Delta^{d+1}\\varepsilon}{\\frac{1}{\\alpha}(1-\\Delta^{d+1}\\varepsilon)}\\] This proves that \\[\\lim_{N\\to\\infty}\\frac{|{\\cal A}_{V_{N}^{+}}|}{|{\\cal A}_{V_{N}}|}=1 \\tag{4.11}\\] By (4.8) and (4.10) we also have \\[\\lim_{N\\to\\infty}\\frac{|{\\cal A}_{V_{N}}|}{|V_{N}|}=\\frac{1}{\\alpha} \\tag{4.12}\\] Dividing (4.7) by \\(|V_{N}|\\) and using (4.12) we obtain \\[\\lim_{N\\to\\infty}\\frac{|O_{i}\\cap V_{N}|}{|V_{N}|}=\\frac{1}{k_{i}\\alpha}\\] and the lemma is proved. \\(\\square\\)

§5. **Potts model on infinite graphs: proof of theorem 2.**

Let \\({\\mathbb{G}}=({\\mathbb{V}},{\\mathbb{E}})\\) be an infinite bounded-degree graph and let \\(x\\in{\\mathbb{V}}\\). Then we define \\[f_{\\mathbb{G}}(x)=\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}{\\bf R}_{n}\\in[P_{\\geq 2}({\\mathbb{V}})]^{n}\\\\ x\\in R_{1}\\end{subarray}}\\phi^{T}({\\bf R}_{n})\\frac{\\rho({\\bf R}_{n})}{|R_{1}|} \\tag{5.1}\\] We stress that, by construction, \\(f_{\\mathbb{G}}(x)\\) is invariant under automorphisms, i.e. if \\(x\\in{\\mathbb{V}}\\) and \\(y\\in{\\mathbb{V}}\\) are equivalent (i.e. there exists an automorphism \\(\\gamma\\) of \\({\\mathbb{G}}\\) such that \\(y=\\gamma x\\)), then \\(f_{\\mathbb{G}}(x)=f_{\\mathbb{G}}(y)\\). Given now a _finite_ set \\(V_{N}\\subset{\\mathbb{V}}\\), we define \\[F(V_{N})=\\frac{1}{|V_{N}|}\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x) \\tag{5.2}\\] The numbers \\(F(V_{N})\\) are actually functions of \\(q\\). As a trivial corollary of lemma 4 we can state the following

**Lemma 7**. _Let \\({\\mathbb{G}}=({\\mathbb{V}},{\\mathbb{E}})\\) be an infinite bounded-degree graph. Then for any finite \\(V_{N}\\subset{\\mathbb{V}}\\), the functions \\(f_{\\mathbb{G}}(x)\\) and \\(F(V_{N})\\) defined in (5.1) and (5.2) are analytic in the variable \\(1/q\\) for \\(|1/q|<1/(2e^{3}\\Delta)\\) and bounded by \\(\\left|2e^{3}\\Delta/q\\right|/(1-\\sqrt{2e^{3}|\\Delta/q|})\\) uniformly in \\(N\\)._ **Proof**. Comparing the l.h.s. of (3.12) with the l.h.s. of (5.1), we have that \\(|f_{\\mathbb{G}}(x)|\\leq|{\\cal S}|_{\\{x\\}}^{2}({\\mathbb{G}})\\), hence one can again use lemma 4 and the proof follows immediately. \\(\\square\\)

From lemma 6 and lemma 7 it follows: **Proposition 8**. _Let \\({\\mathbb{G}}=({\\mathbb{V}},{\\mathbb{E}})\\) be a locally finite quasi-transitive infinite graph and let \\(\\{V_{N}\\}_{N\\in{\\mathbb{N}}}\\) be a sequence of finite subsets of \\({\\mathbb{V}}\\) such that \\(|\\partial V_{N}|/|V_{N}|\\to 0\\) as \\(N\\to\\infty\\). Let \\(\\Delta\\) be the maximum degree of \\({\\mathbb{G}}\\). Then the limit_ \\[\\lim_{N\\to\\infty}F(V_{N})\\doteq F_{\\mathbb{G}}(q) \\tag{5.3}\\] _exists, is finite, is independent of the sequence \\(\\{V_{N}\\}_{N\\in{\\mathbb{N}}}\\), and is analytic as a function of \\(1/q\\) for \\(|1/q|<1/(2e^{3}\\Delta)\\)._ **Proof**. 
If the limit (5.3) exists, then by lemma 7 it is clearly bounded by \\(|2e^{3}\\Delta/q|/(1-\\sqrt{2e^{3}|\\Delta/q|})\\) and it is analytic in \\(1/q\\) for \\(|1/q|<1/(2e^{3}\\Delta)\\). To prove the existence of the limit (5.3) we proceed as follows. Since \\({\\mathbb{G}}\\) is quasi-transitive, \\({\\mathbb{V}}\\) can be partitioned into orbits \\(O_{1},\\ldots,O_{s}\\) of \\({\\rm Aut}({\\mathbb{G}})\\) such that for any two vertices \\(x,y\\) in the same orbit \\(O_{i}\\) there is an automorphism of \\({\\mathbb{G}}\\) which maps \\(x\\) to \\(y\\). Hence for such a pair we have \\(f_{\\mathbb{G}}(x)=f_{\\mathbb{G}}(y)\\) and we can conclude that \\(f_{\\mathbb{G}}(x)\\) takes values in a finite set \\(\\{f_{1},\\ldots,f_{s}\\}\\) with \\(f_{i}=f_{\\mathbb{G}}(x)\\) where \\(x\\) is any vertex in \\(O_{i}\\). Thus for any finite connected \\(V_{N}\\) we have \\[\\frac{1}{|V_{N}|}\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x)=\\frac{|V_{N}\\cap O_{1}|}{|V_{N}|}f_{1}+\\ldots+\\frac{|V_{N}\\cap O_{s}|}{|V_{N}|}f_{s}\\] hence \\[\\lim_{N\\to\\infty}F(V_{N})=f_{1}\\lim_{N\\to\\infty}\\frac{|V_{N}\\cap O_{1}|}{|V_{N}|}+\\ldots+f_{s}\\lim_{N\\to\\infty}\\frac{|V_{N}\\cap O_{s}|}{|V_{N}|}\\] and by lemma 6 the limit above exists. \\(\\Box\\)

We are at last in the position to prove the main result of the paper, namely theorem 2 stated at the end of section 2. **Proof of theorem 2**. We will prove that \\(\\lim_{N\\to\\infty}|V_{N}|^{-1}\\log\\Xi_{\\mathbb{G}|_{V_{N}}}(q)=F_{\\mathbb{G}}(q)\\), where \\(F_{\\mathbb{G}}(q)\\) is the function defined in (5.3), and then use definition (3.6). We have \\[\\log\\Xi_{\\mathbb{G}|_{V_{N}}}-\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x)=\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\left[\\sum_{\\mathbf{R}_{n}\\in[P_{\\geq 2}(V_{N})]^{n}}\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})-\\sum_{x\\in V_{N}}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ x\\in R_{1}\\end{subarray}}\\phi^{T}(\\mathbf{R}_{n})\\frac{\\rho(\\mathbf{R}_{n})}{|R_{1}|}\\right]\\] Now note that \\[\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ x\\in R_{1}\\end{subarray}}(\\cdots)=\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(V_{N})]^{n}\\\\ x\\in R_{1}\\end{subarray}}(\\cdots)+\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ x\\in R_{1}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}(\\cdots)\\] moreover \\[\\sum_{x\\in V_{N}}\\sum_{\\begin{subarray}{c}R_{1}\\subset V_{N}\\\\ x\\in R_{1}\\end{subarray}}(\\cdots)=\\sum_{R_{1}\\subset V_{N}}|R_{1}|(\\cdots)\\quad,\\quad\\sum_{x\\in V_{N}}\\sum_{\\begin{subarray}{c}R_{1}\\subset\\mathbb{V}\\\\ x\\in R_{1}\\end{subarray}}(\\cdots)=\\sum_{R_{1}\\subset\\mathbb{V}}|R_{1}\\cap V_{N}|(\\cdots)\\] hence, using also that \\(|R_{1}\\cap V_{N}|/|R_{1}|\\leq 1\\), we get \\[\\Big{|}\\log\\Xi_{\\mathbb{G}|_{V_{N}}}-\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x)\\Big{|}\\leq\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}\\big{|}\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})\\big{|}\\] Let us now choose \\(p>\\ln\\Delta\\) and define \\[m_{N}^{p}=\\frac{1}{p}\\ln\\left[\\frac{|V_{N}|}{|\\partial V_{N}|}\\right] \\tag{5.4}\\] Remark that, since by hypothesis the sequence \\(V_{N}\\) is Følner 
and hence (2.2) holds, we have \\(\\lim_{N\\to\\infty}m_{N}^{p}=\\infty\\) for any fixed \\(p\\). We can now rewrite \\[\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}(\\cdots)=\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|\\geq m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}(\\cdots)+\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|<m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}(\\cdots)\\] Hence \\[\\left|\\log\\Xi_{\\mathbb{G}|_{V_{N}}}-\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x)\\right|\\leq\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|\\geq m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}\\big{|}\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})\\big{|}+\\]\\[+\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|<m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}\\left|\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})\\right| \\tag{5.5}\\] Concerning the first sum, recalling definition (3.13), we have \\[\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|\\geq m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}\\left|\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})\\right|\\leq|\\mathcal{S}|_{V_{N}}^{m_{N}^{p}}(\\mathbb{G},q)\\leq\\mathrm{Const.}\\,|V_{N}|\\,\\varepsilon^{m_{N}^{p}/2}\\] where \\(\\varepsilon=2e^{3}\\Delta/|q|<1\\) by hypothesis; divided by \\(|V_{N}|\\), this converges to zero as \\(N\\to\\infty\\) because \\(m_{N}^{p}\\to\\infty\\) as \\(N\\to\\infty\\). On the other hand, recalling that due to the factor \\(\\phi^{T}(\\mathbf{R}_{n})\\) the sets \\(R_{i}\\) must be pair-wise connected, we have that \\(|\\cup_{i}R_{i}|\\leq\\sum_{i}|R_{i}|\\). So, since \\(|\\cup_{i}R_{i}|<m_{N}^{p}\\) and at least one among the \\(R_{i}\\) intersects \\(\\partial V_{N}\\), all polymers \\(R_{i}\\) must lie in the set \\[\\mathrm{B}_{m_{N}^{p}}(\\partial V_{N})=\\{x\\in\\mathbb{V}:\\exists v\\in\\partial V_{N}:|x-v|\\leq m_{N}^{p}\\}\\] Recalling (4.2) we have \\[|\\mathrm{B}_{m_{N}^{p}}(\\partial V_{N})|\\leq|\\partial V_{N}|\\Delta^{m_{N}^{p}+1}\\] Hence, again recalling (3.13), the second sum on the r.h.s. 
of (5.5) is bounded by \\[\\sum_{n=1}^{\\infty}\\frac{1}{n!}\\sum_{\\begin{subarray}{c}\\mathbf{R}_{n}\\in[P_{\\geq 2}(\\mathbb{V})]^{n}\\\\ R_{1}\\cap V_{N}\\neq\\emptyset,\\ |\\mathbf{R}_{n}|<m_{N}^{p}\\\\ \\exists R_{i}:\\ R_{i}\\not\\subset V_{N}\\end{subarray}}\\left|\\phi^{T}(\\mathbf{R}_{n})\\rho(\\mathbf{R}_{n})\\right|\\leq|\\mathcal{S}|_{\\mathrm{B}_{m_{N}^{p}}(\\partial V_{N})}^{2}(\\mathbb{G},q)\\leq\\] \\[\\leq\\mathrm{Const.}\\,|\\mathrm{B}_{m_{N}^{p}}(\\partial V_{N})|\\,\\varepsilon\\leq\\mathrm{Const.}\\,\\Delta|\\partial V_{N}|\\Delta^{m_{N}^{p}}\\varepsilon\\] Thus, recalling definition (5.4), we have \\[\\left|\\frac{1}{|V_{N}|}\\log\\Xi_{\\mathbb{G}|_{V_{N}}}-\\frac{1}{|V_{N}|}\\sum_{x\\in V_{N}}f_{\\mathbb{G}}(x)\\right|=\\left|\\frac{1}{|V_{N}|}\\log\\Xi_{\\mathbb{G}|_{V_{N}}}-F(V_{N})\\right|\\leq\\] \\[\\leq\\mathrm{Const.}\\left[\\frac{|\\partial V_{N}|}{|V_{N}|}\\right]^{\\frac{|\\ln\\varepsilon|}{2p}}+\\mathrm{Const.}\\,\\varepsilon\\left[\\frac{|\\partial V_{N}|}{|V_{N}|}\\right]^{1-\\frac{\\ln\\Delta}{p}}\\] Since by hypothesis \\(|\\partial V_{N}|/|V_{N}|\\to 0\\) as \\(N\\to\\infty\\), the quantity above can be made arbitrarily small for \\(N\\) large enough. This ends the proof of the theorem. \\(\\square\\)

**Acknowledgements**

This work was supported by Ministero dell'Università e della Ricerca Scientifica e Tecnologica (Italy) and Conselho Nacional de Desenvolvimento Cientifico e Tecnologico - CNPq, a Brazilian governmental agency promoting scientific and technological development (grant n. 460102/00-1). We thank Prof. Mario Jorge Dias Carneiro for useful discussions and Prof. Bojan Mohar for a valuable suggestion via e-mail.

## References

* [1] Baxter, R. J.: _Dichromatic Polynomials and Potts Models Summed Over Rooted Maps_. Annals of Combinatorics, **5** (2001), 17-36.
* [2] Benjamini, I.; Schramm, O.: _Percolation beyond \\(Z^{d}\\), many questions and a few answers_. Electr. Comm. Probab., **1** (1996), 71-82.
* [3] Biggs, N. L.: _Chromatic and thermodynamic limits_. J. Phys. A, **8** (1975), no. 10, L110-L112.
* [4] Biggs, N. L.; Meredith, G. H. J.: _Approximations for chromatic polynomials_. J. Combinatorial Theory Ser. B, **20** (1976), no. 1, 5-19.
* [5] Cammarota, C.: _Decay of Correlations for Infinite range Interactions in unbounded Spin Systems_. Comm. Math. Phys., **85** (1982), 517-528.
* [6] Chang, Shu-Chiuan; Shrock, Robert: _Structural properties of Potts model partition functions and chromatic polynomials for lattice strips_. Phys. A, **296** (2001), no. 1-2, 131-182.
* [7] Fortuin, C. M.; Kasteleyn, P. W.: _On the random-cluster model. I. Introduction and relation to other models_. Physica **57** (1972), 536-564.
* [8] Haggstrom, O.; Schonmann, R. H.; Steif, J. E.: _The Ising model on diluted graphs and strong amenability_. Ann. Probab. **28** (2000), no. 3, 1111-1137.
* [9] Jonasson, J.: _The random cluster model on a general graph and a phase transition characterization of nonamenability_. Stochastic Process Appl., **79** (1999), 335-534.
* [10] Kim, D.; Enting, I. G.: _The limit of chromatic polynomials_. J. Combin. Theory Ser. B, **26** (1979), no. 3, 327-336.
* [11] Lyons, R.: _Phase transitions on nonamenable graphs. Probabilistic techniques in equilibrium and nonequilibrium statistical physics_. J. Math. Phys., **41** (2000), no. 3, 1099-1126.
* [12] Rota, G.: _On the foundations of combinatorial theory. I. Theory of Mobius functions_. Z. Wahrscheinlichkeitstheorie und Verw. Gebiete, **2** (1964), 340-368. 
* [13] Salas, J.; Sokal, A. D.: _Transfer matrices and partition-function zeros for antiferromagnetic Potts models. I. General theory and square-lattice chromatic polynomial_. J. Statist. Phys., **104** (2001), no. 3-4, 609-699.
* [14] Shrock, R.; Tsai, S. H.: _Families of graphs with \\(W_{r}(G,q)\\) functions that are nonanalytic at 1/q=0_. Phys. Rev. E, **56** (1997), no. 4, 3935-3943.
* [15] Shrock, R.; Tsai, S. H.: _Ground-state degeneracy of Potts antiferromagnets: homeomorphic classes with noncompact W boundaries_. Physica A, **265** (1999), no. 1-2, 186-223.
* [16] Shrock, R.: _Chromatic polynomials and their zeros and asymptotic limits for families of graphs_. 17th British Combinatorial Conference (Canterbury, 1999). Discrete Math., **231** (2001), no. 1-3, 421-446.
* [17] Sokal, A. D.: _Bounds on the complex zeros of (di)chromatic polynomials and Potts-model partition functions_. Combin. Probab. Comput., **10** (2001), no. 1, 41-77.
* [18] Sokal, A. D.: _A personal list of unsolved problems concerning lattice gases and antiferromagnetic Potts models_. Inhomogeneous random systems (Cergy-Pontoise, 2000). Markov Process. Related Fields, **7** (2001), no. 1, 21-38.
* [19] Tutte, W. T.: _A contribution to the theory of chromatic polynomials_. Canadian J. Math., **6** (1954), 80-91.
* [20] Welsh, D. J. A.; Merino, C.: _The Potts model and the Tutte polynomial_. J. Math. Phys., **41** (2000), no. 3, 1127-1152.
* [21] Wu, F. Y.: _The Potts model_. Rev. Modern Phys., **54** (1982), no. 1, 235-268.
_Given a quasi-transitive, amenable infinite graph \\(\\mathbb{G}\\) with maximum degree \\(\\Delta\\), we show that the reduced ground-state degeneracy per site \\(W_{r}(\\mathbb{G},q)\\) of the \\(q\\)-state antiferromagnetic Potts model at zero temperature on \\(\\mathbb{G}\\) is analytic in the variable \\(1/q\\) whenever \\(|2\\Delta e^{3}/q|<1\\). This result proves, in an even stronger formulation, a conjecture originally sketched in [10] and explicitly formulated in [14], according to which a sufficient condition for \\(W_{r}(\\mathbb{G},q)\\) to be analytic at \\(1/q=0\\) is that \\(\\mathbb{G}\\) is a regular lattice._ **Keywords**: _Potts model, chromatic polynomials, cluster expansion_
# Scanning Lidar Based Atmospheric Monitoring for Fluorescence Detectors of Cosmic Showers

A. Filipcic, M. Horvat, D. Veberic (corresponding author, [email protected]), D. Zavrtanik, M. Zavrtanik

Jozef Stefan Institute, Jamova 39, POB 3000, SI-1001 Ljubljana, Slovenia

###### keywords: backscatter lidar, inversion methods, two- and multi-angle reconstruction, atmospheric optical depth, cosmic showers, fluorescence detectors

PACS: 42.68.Ay, 42.68.Jg, 42.68.Wt, 98.70.Sa

The number of emitting particles in a high-energy shower makes this source of radiation highly significant. Ultimately, this electromagnetic (EM) cascade dissipates much of the primary particle's energy. Fluorescence light is emitted isotropically with an intensity proportional to the number of charged particles in the shower. The EM component, and hence the total number of low-energy EM particles, is in turn fairly accurately proportional to the energy of the primary particle. Thus, the calorimetric measure of the total EM shower energy [4] is proportional to the integral of the EM particle density \\(N_{\\text{em}}\\) along the shower direction \\(x\\), \\[E_{\\text{em}}=K\\int N_{\\text{em}}(x)\\ \\mathrm{d}x \\tag{1}\\] with \\(K\\approx 2.2\\) MeV cm\\({}^{2}\\)/g, where \\(x\\) is measured in units of longitudinal air density (g/cm\\({}^{2}\\)). \\(E_{\\text{em}}\\) is a lower bound for the energy of the primary cosmic ray. The lower portion of the shower development is usually obscured by the ground, so the EM cascade reaching below ground is included by fitting a functional form to the observed longitudinal profile and integrating the function past the surface depth. The number of photons \\(N_{\\text{ph}}\\) reaching the fluorescence detector (FD) is proportional to the EM particle density \\(N_{\\text{em}}(x)\\) at the point of production \\(x\\), so that in turn \\[N_{\\text{em}}(x)\\propto\\frac{N_{\\text{ph}}R^{2}(x)}{T(x)}, \\tag{2}\\] with \\(R(x)\\) being the distance between the shower point \\(x\\) and the FD. Light originating within the shower is certainly affected by absorption and scattering on molecules and aerosols in the atmosphere. The number of detected photons is thus reduced due to the non-ideal atmospheric transmission \\(T(x)<1\\), where \\[T(x)=\\exp\\left[-\\int_{0}^{x}\\!\\!\\alpha(r)\\ \\mathrm{d}r\\right]=\\mathrm{e}^{-\\tau(x)}, \\tag{3}\\] with \\(\\alpha(r)\\) the volume extinction coefficient along the line-of-sight, and \\(\\tau(x)\\) the resulting atmospheric optical depth (OD) to the shower point \\(x\\). In this sense, the atmosphere can be treated as an elementary-particle detector. However, weather conditions change the atmospheric transmission properties dramatically, resulting in a strongly time-dependent detection efficiency. Therefore, an absolute calibration system for fluorescence light absorption is an essential part of the FD [5, 6]. Eq. (3) is the basis for the fluorescence-detector energy calibration. Apart from the Pierre Auger Observatory, all existing fluorescence-based experiments have suffered from the lack of a sufficient atmosphere-monitoring system. Weather conditions in a desert-like atmosphere were expected to be stable enough that the standard attenuation-length profile should suffice to reconstruct the total EM shower energy, Eq. (1), with a controllable precision. However, this has turned out not to be the case, especially for rare events with energies above \\(10^{19}\\) eV. 
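As a rough numerical illustration of Eqs. (1)-(3) — a minimal sketch, not taken from the paper; the Gaisser-Hillas-like profile shape and all parameter values below are invented for the example — one can integrate an assumed longitudinal profile to obtain the calorimetric energy:

```python
import numpy as np

# Hypothetical longitudinal profile (Gaisser-Hillas-like); N_max, X_max,
# and lam are placeholder values, not fitted shower parameters.
def n_em(x, n_max=6e10, x_max=750.0, lam=70.0):
    """EM particle number vs slant depth x [g/cm^2]."""
    t, tm = x / lam, x_max / lam
    return n_max * (t / tm) ** tm * np.exp(tm - t)

K = 2.2                               # MeV cm^2/g, Eq. (1)
x = np.linspace(1.0, 2500.0, 5000)    # depth grid [g/cm^2]
e_em = K * np.trapz(n_em(x), x)       # Eq. (1), in MeV

# Eq. (2): the detected photon count must be corrected by 1/T(x); any
# relative error in the transmission T propagates directly into E_em.
print(f"E_em ~ {e_em:.2e} MeV  (~{e_em * 1e6:.2e} eV)")
```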
In addition, the energy reconstruction is obscured by Cerenkov radiation in the lower part of the air shower, which cannot be separated from the fluorescence. For more than half of the highest-energy events measured so far, the atmospheric properties are not known well enough to accurately reconstruct the primary energy. To be able to provide adequate calibration, one has to measure the attenuation at the time of the event in the whole region of the air shower. There is also a systematic discrepancy when comparing the cosmic-ray spectra of fluorescence experiments with those of ground arrays. Their compatibility can be established only with a shift in energy and flux of one or the other. As in the case of fluorescence detectors, ground arrays have their own problems with the energy determination and are much more dependent on air-shower simulations. At present, it is not known whether the discrepancy is due to the fluorescence-detector or the ground-array method, or both. Therefore, it is of utmost importance to have an _in situ_ atmosphere-monitoring system working coherently with the fluorescence detector. To lower the primary cosmic-ray energy uncertainties, the volume extinction coefficient \\(\\alpha(r)\\) thus has to be well estimated over almost the whole detection volume of the FD. In the case of the Pierre Auger Observatory, the detection volume corresponds to a ground area of 3000 km\\({}^{2}\\) and a height of \\(\\sim 15\\) km.

The paper is organized as follows. The first two sections are devoted to introductory material on the lidar measurement technique and a description of our specific experimental setup. Then the atmospheric model for the simulation of lidar signals is presented and the signals are evaluated by two well-established inversion methods. The results of the inversions are compared to the input model and conclusions on their applicability are drawn. Next, improved approaches to FD calibration based on a scanning lidar system are proposed and evaluated on real data obtained with our experimental setup.

## 2 Lidar system

One of the most suitable calibration setups for the FD is the backscattering lidar system, where a short laser light pulse is transmitted from the FD position in the direction of interest. With a mirror and a photomultiplier tube, the backscattered light is collected and recorded as a function of time, i.e. as a function of the backscatter distance. Note that light from the lidar source traverses both directions, so that in the case of matching laser and fluorescence light wavelengths, the OD for the lidar light sums to twice the OD for the fluorescence. The lidar equation [11] describes the received laser power \\(P(r)\\) from distance \\(r\\) as a function of the volume extinction coefficient \\(\\alpha(r)\\) and the backscattering coefficient \\(\\beta(r)\\), \\[P(r)=P_{0}\\frac{ct_{0}}{2}\\beta(r)\\frac{A}{r^{2}}\\;\\mathrm{e}^{-2\\tau(r)}\\,. \\tag{4}\\] \\(P_{0}\\) is the transmitted laser power and \\(A\\) is the effective receiving area of the detector, proportional to the area of the mirror and to the overlap of its field of view with the laser beam. \\(t_{0}\\) is the laser pulse duration. As seen from Eq. (2), the measurement precision of \\(\\alpha\\) and of the corresponding \\(\\tau\\) directly influences the precision of the primary-particle energy estimation. Simple as it may look, the lidar equation (4) is nevertheless difficult to solve for the two unknown variables, \\(\\alpha(r)\\) and \\(\\beta(r)\\). 
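For a concrete sense of Eq. (4), the following sketch generates a noiseless power return from given extinction and backscatter profiles. It is an illustration of mine, not code from the experiment; all instrument factors are lumped into one arbitrary constant:

```python
import numpy as np

def cumulative_od(r, alpha):
    """Optical depth tau(r) by trapezoidal integration of alpha along the beam."""
    seg = 0.5 * (alpha[1:] + alpha[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(seg)))

def lidar_return(r, alpha, beta, instrument_const=1.0):
    """Eq. (4): P(r) = P0*(c*t0/2)*beta(r)*A/r^2 * exp(-2*tau(r)).
    P0, c*t0/2, and A are folded into instrument_const (a placeholder)."""
    tau = cumulative_od(r, alpha)
    return instrument_const * beta / r**2 * np.exp(-2.0 * tau)
```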
All existing analysis algorithms (Klett [7], Fernald [8], and their respective variations), reviewed in one of the following sections, are based on an experimental setup with a static beam direction. This leads to an ambiguity in the determination of \\(\\alpha(r)\\) and \\(\\beta(r)\\) which cannot be resolved without additional assumptions about the atmospheric properties. At the FD experimental sites the atmosphere can be assumed to be almost horizontally invariant. In this case, there is an additional constraint when comparing signals coming from different directions, which solves the lidar equation for \\(\\alpha(r)\\) and \\(\\beta(r)\\) unambiguously. Even at this point, it can be stated that a steerable (scanning) lidar setup is unavoidable for a proper solution of the lidar equation.

## 3 Experimental setup

The lidar system used for verification of the analysis method is based on the Continuum MiniLite-1 frequency-tripled Nd:YAG laser, which is able to transmit up to 15 shots per second, each with an energy of 6 mJ and 4 ns duration (1.2 m). The emitted wavelength of 355 nm is in the \\(300-400\\) nm range of the fluorescence spectrum. The receiver was constructed using an \\(80\\) cm diameter parabolic mirror with a focal length of 41 cm. The mirror is made of aluminum-coated pyrex and protected with SiO\\({}_{2}\\). The backscattered light is detected by a Hamamatsu R7400 photomultiplier with an operating voltage of up to 1000 V and a gain of \\(10^{5}\\) to \\(10^{6}\\). To suppress the background, a broadband UG-1 filter with 60% transmittance at 353 nm and FWHM of 50 nm is used. The distance between the laser beam and the mirror center is fixed to 1 m, and the system is fully steerable with \\(0.1^{\\circ}\\) angular resolution. The signal is digitized using a three-channel LICEL transient recorder TR40-160 with 12-bit resolution at 40 MHz sampling rate and 16k trace length, combined with a 250 MHz photon-counting system. The maximum detection distance of the hardware is thus, with this sampling rate and trace length, set to 60 km. However, in real measurements, atmospheric features only up to 30 km are observed. LICEL is operated using a PC-Linux system through a National Instruments digital input-output card (PCI-DIO-32HS) with Comedi drivers [9] and a ROOT interface [10].

## 4 Lidar simulation with specific atmospheric model

In a low-opacity atmosphere the attenuation and the backscattering coefficients can be written as sums of contributions from two independent components, \\[\\alpha(h) =\\alpha_{\\text{m}}(h)+\\alpha_{\\text{a}}(h), \\tag{5a}\\] \\[\\beta(h) =P_{\\text{m}}(180^{\\circ})\\alpha_{\\text{m}}(h)+P_{\\text{a}}(180^{\\circ})\\alpha_{\\text{a}}(h), \\tag{5b}\\] where \\(\\alpha_{\\text{m}}\\) and \\(\\alpha_{\\text{a}}\\) correspond to molecular and aerosol attenuation, respectively. The aerosol phase function for backscattering, \\(P_{\\text{a}}(180^{\\circ})\\), has, apart from the wavelength, also a strong dependence on the optical and geometrical properties of the aerosol particles. Nevertheless, at the wavelength of 355 nm, values in the range from 0.025 up to 0.05 sr\\({}^{-1}\\) can be assumed [11] for the aerosol phase function \\(P_{\\text{a}}(180^{\\circ})\\). The angular dependence of the molecular phase function is defined by the Rayleigh scattering theory, where \\(P_{\\text{m}}(180^{\\circ})=3/8\\pi\\) sr\\({}^{-1}\\). 
For simulation purposes, the elevation dependence of the extinction coefficients is modelled as follows: \\[\\alpha_{\\text{m}}(h) =\\frac{1}{L_{\\text{m}}}\\,\\text{e}^{-h/h_{\\text{m}}^{0}}, \\tag{6a}\\] \\[\\alpha_{\\text{a}}(h) =\\frac{1}{L_{\\text{a}}}\\begin{cases}1,&h<h_{\\text{x}}\\\\ \\text{e}^{-(h-h_{\\text{x}})/h_{\\text{a}}^{0}},&h\\geq h_{\\text{x}},\\end{cases} \\tag{6b}\\] where \\(L_{\\text{m}}\\) and \\(L_{\\text{a}}\\) are the molecular and aerosol attenuation lengths at ground level, and \\(h_{\\text{m}}^{0}\\) and \\(h_{\\text{a}}^{0}\\) are the molecular and aerosol scale heights, respectively. An additional mixing height \\(h_{\\text{x}}\\) is set up for aerosols, assuming uniform concentration near the ground level and a continuous transition into exponential vanishing for \\(h>h_{\\rm x}\\). The following values of the parameters are used: \\(L_{\\rm m}=15\\,\\)km, \\(h_{\\rm m}^{0}=17.5\\,\\)km, \\(L_{\\rm a}=2\\,\\)km, \\(h_{\\rm x}=0.8\\,\\)km, and \\(h_{\\rm a}^{0}=1.4\\,\\)km.

Figure 1: Schematic view of the lidar system. A mirror of 80 cm diameter and a UV-laser head are mounted on the steerable mechanism. The LICEL TR40-160 receives the trigger from the laser and the signal from the Hamamatsu R7400 phototube. The Linux-PC controls the LICEL digitizer through a PCI-DIO-32HS Digital Input/Output card. The steering motors are controlled through the RS-232 port. Zenith angle is denoted by \\(\\phi\\).

The atmospheric model, Eq. (6), serves as a testing ground for the two widely used reconstruction methods presented in the next section. A comparison with the reconstruction of the real atmosphere yields insight into the common problems of the lidar field. The Poissonian statistics of photon counting and multiplying, background noise, and effects of digitization have been taken into account in the generation of the simulated lidar signals, and under inspection they match those observed in the real lidar power returns. The model in Eq. (6) is a valid approximation to the atmospheric conditions found in real experiments. Although the vertical variation of aerosol and molecular densities is quite simple, the model still produces a non-trivial relation between the total attenuation \\(\\alpha\\) and the total backscattering coefficient \\(\\beta\\). Therefore, the dependence of \\(\\beta\\) as a function of \\(\\alpha\\), shown in Fig. 2, cannot be well approximated by some simple functional form.

Figure 2: Extinction–backscatter plot (\\(\\alpha\\beta\\) diagram) for the model atmosphere in Eq. (6).

## 5 Reconstruction of a 1D atmosphere

Concentrating on a single-shot lidar measurement, the optical properties obviously have to be reconstructed in a 1D subspace of the atmosphere. Rewriting the lidar equation (4), \\[P(r)=B\\frac{\\beta(r)}{r^{2}}\\,{\\rm e}^{-2\\tau(r)} \\tag{7}\\] where the effective aperture of the system is gathered in the constant \\(B\\), an auxiliary \\(S\\)-function can be introduced, \\[S(r)=\\ln\\frac{P(r)r^{2}}{P(r_{0})r_{0}^{2}}=\\ln\\left[\\beta(r)/\\beta_{0}\\right]-2\\tau(r;r_{0}). \\tag{8}\\] Note that \\(\\tau(r;r_{0})=\\int_{r_{0}}^{r}\\alpha(r^{\\prime})\\ \\mathrm{d}r^{\\prime}\\) corresponds to the atmospheric OD between \\(r_{0}\\) and \\(r\\). 
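The model atmosphere of Eqs. (5)-(6) and the \\(S\\)-function of Eq. (8) transcribe directly into code. The sketch below is mine, using the parameter values quoted above; it can be paired with the `lidar_return` helper shown earlier to reproduce simulated signals or an \\(\\alpha\\beta\\) diagram like Fig. 2:

```python
import numpy as np

# Parameter values quoted in section 4 (heights and lengths in km)
L_M, H0_M = 15.0, 17.5            # molecular attenuation length, scale height
L_A, H_X, H0_A = 2.0, 0.8, 1.4    # aerosol attenuation length, mixing height, scale height
P_M, P_A = 3.0 / (8.0 * np.pi), 0.025  # backscattering phase functions [1/sr]

def alpha_m(h):
    """Molecular extinction, Eq. (6a)."""
    return np.exp(-h / H0_M) / L_M

def alpha_a(h):
    """Aerosol extinction, Eq. (6b): constant below the mixing height h_x."""
    return np.where(h < H_X, 1.0, np.exp(-(h - H_X) / H0_A)) / L_A

def alpha(h):   # total extinction, Eq. (5a)
    return alpha_m(h) + alpha_a(h)

def beta(h):    # total backscatter, Eq. (5b)
    return P_M * alpha_m(h) + P_A * alpha_a(h)

def s_function(r, p):
    """Auxiliary S-function, Eq. (8), relative to the first usable bin r[0] = r_0."""
    return np.log(p * r**2 / (p[0] * r[0] ** 2))
```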
### Klett inversion

Apart from the experimentally measured lidar power return \\(P(r)\\), in Eq. (7) there are two unknown quantities, \\(\\beta\\) and \\(\\alpha\\) (or equivalently \\(\\tau\\)), preventing a unique solution of the lidar equation. Nevertheless, a simple, and sometimes physically meaningful, assumption of proportionality between backscattering and extinction, \\[\\beta(r)\\propto\\alpha^{k}(r), \\tag{9}\\] allows for the transformation of the integral Eq. (8) to the corresponding Bernoulli differential equation with an existing analytical solution. Direct application of the solution (forward inversion) is numerically unstable, in some cases singular, and highly sensitive to the signal noise [7, 12]. Klett's reformulation [7] of the solution (backward inversion) avoids these problems. The lidar backward inversion algorithm proceeds from the far point of the measured signal \\(r_{\\mathrm{f}}\\) to the near end, \\[\\alpha(r;\\alpha_{\\mathrm{f}})=\\frac{\\mathrm{e}^{S(r)/k}}{\\mathrm{e}^{S_{\\mathrm{f}}/k}/\\alpha_{\\mathrm{f}}+\\frac{2}{k}\\int_{r}^{r_{\\mathrm{f}}}\\mathrm{e}^{S(r^{\\prime})/k}\\ \\mathrm{d}r^{\\prime}} \\tag{10}\\] with \\(S_{\\mathrm{f}}=S(r_{\\mathrm{f}})\\) and the boundary extinction \\(\\alpha_{\\mathrm{f}}=\\alpha(r_{\\mathrm{f}})\\). For the model atmosphere of section 4, however, the effective exponent \\(k\\) in Eq. (9) is shown (Fig. 3) to possess substantial range dependence. The main reason for the failure of the power-law proportionality stems from the inequality of the molecular and aerosol phase functions, \\(P_{\\text{m}}(180^{\\circ})\\) and \\(P_{\\text{a}}(180^{\\circ})\\), rendering the \\(\\alpha\\) and \\(\\beta\\) relationship dependent on the particular magnitude of both quantities, and consequently range dependent (see Fig. 2). Therefore, the best value of \\(k\\) must be chosen using some _ad hoc_ criterion. Analyzing the results in Fig. 4, presenting Klett inversion of simulated lidar signals, it seems that the closest reconstruction of the model profile is achieved with \\(k\\approx 0.5\\). From Fig. 3, showing the local exponent \\(k\\) obtained with use of Eq. (12), it can be seen that \\(k\\approx 0.5\\) is observed only in a small interval around \\(4\\,\\)km, whereas at other places it is substantially larger. For \\(r>8\\,\\)km, dominated by molecular scattering, it slowly approaches the value of 1, most commonly adopted in the literature. Nevertheless, as can be seen in Fig. 5, reconstruction of the OD with \\(k=1\\) totally fails to reproduce the correct answer.

Figure 4: Reconstructed attenuation \\(\\alpha(h)\\) from Klett’s inversion of the simulated vertical-shot data as obtained by different boundary values \\(\\alpha_{\\text{f}}\\). Solutions with 0.5, 1, and 2 times the correct \\(\\alpha_{\\text{f}}\\) are plotted with dots. The actual model \\(\\alpha\\) profile is drawn with a solid line. Assuming a range-independent (constant) Klett \\(k\\), the best agreement between the reconstructed and actual profile is achieved for \\(k\\approx 0.5\\); therefore this value is used for all three plots.

Figure 3: Effective power \\(k\\) in Eq. (9) as obtained from the model atmosphere in Eqs. (5) and (6). Note that the discontinuity at \\(h_{\\text{x}}\\) arises due to the aerosol part of the model, up to where the aerosol concentration is kept constant.

Another drawback of Klett's method is the estimation of the extinction \\(\\alpha_{\\rm f}\\) at the far end of the lidar return. In the case that \\(r_{\\rm f}\\) corresponds to a highly elevated point, the approximation \\(\\alpha_{\\rm f}\\equiv\\alpha_{\\rm m}(r_{\\rm f})\\), i.e. that the extinction at that point is dominated by molecular scattering, yields quite reasonable results [13] with qualitative convergence to the correct \\(\\alpha\\)-profile. 
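A compact numerical transcription of the backward solution (10) reads as follows. This is a sketch of mine under the stated assumptions, with simple trapezoidal quadrature; the array ordering convention and the placeholder boundary value are not from the paper:

```python
import numpy as np

def klett_backward(r, p, k=1.0, alpha_f=1e-2):
    """Eq. (10): stable backward (Klett) inversion. r, p are ordered from the
    near point r_0 to the far point r_f; alpha_f is the extinction guessed at
    r_f (placeholder value, units of 1/length matching r)."""
    s = np.log(p * r**2 / (p[0] * r[0] ** 2))      # S-function, Eq. (8)
    e = np.exp(s / k)
    seg = 0.5 * (e[1:] + e[:-1]) * np.diff(r)      # trapezoid segments
    tail = np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))  # int_r^{r_f}
    return e / (e[-1] / alpha_f + (2.0 / k) * tail)
```

At \\(r=r_{\\mathrm{f}}\\) the tail integral vanishes and the formula returns `alpha_f` exactly, which is the boundary condition that makes the backward form numerically stable.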
In general, for an optically dense atmosphere (e.g. in the presence of moderate haze), convergence of Klett's method is far more rapid than in the clear, optically thin case. However, sites for FDs are usually chosen at locations with a clear and cloudless atmosphere. For horizontal lidar measurements (zenith angle \\(\\phi=90^{\\circ}\\)) in a horizontally invariant atmosphere, \\(\\alpha_{\\rm f}\\) can be estimated as the value that minimizes the extinction deviations from a constant [13, 14], i.e. minimizes the functional \\(\\int_{r_{0}}^{r_{\\rm f}}[\\alpha(r^{\\prime})-\\alpha_{\\rm f}]^{2}\\,{\\rm d}r^{\\prime}\\).

### Fernald inversion

Since the concentration of the molecules depends solely on the thermodynamic parameters of the atmosphere, the Rayleigh scattering on molecules is modeled separately on the basis of the meteorological data. \\(\\alpha_{\\rm m}(r)\\) acquired in that way is inserted in Eq. (5). With an estimate for the molecular-to-aerosol backscattering phase fraction, \\(F=P_{\\rm m}(180^{\\circ})/P_{\\rm a}(180^{\\circ})\\), and the modified \\(S\\)-function \\[\\tilde{S}(r)=S(r)+2(F-1)\\int_{r}^{r_{\\rm f}}\\alpha_{\\rm m}(r^{\\prime})\\ {\\rm d}r^{\\prime} \\tag{13}\\] the lidar equation can be solved for the aerosol part \\(\\alpha_{\\rm a}(r)\\) following the same steps as in Klett's version, \\[\\alpha_{\\rm a}(r)=-F\\alpha_{\\rm m}(r)+\\frac{{\\rm e}^{\\tilde{S}(r)}}{{\\rm e}^{\\tilde{S}_{\\rm f}}/\\tilde{\\alpha}_{\\rm f}+2\\int_{r}^{r_{\\rm f}}{\\rm e}^{\\tilde{S}(r^{\\prime})}\\ {\\rm d}r^{\\prime}}, \\tag{14}\\] with \\(\\tilde{\\alpha}_{\\rm f}=F\\alpha_{\\rm m}(r_{\\rm f})+\\alpha_{\\rm a}(r_{\\rm f})\\) and \\(\\tilde{S}_{\\rm f}=\\tilde{S}(r_{\\rm f})=S(r_{\\rm f})\\). In the same way the OD is expressed as \\[\\tau(r;r_{0},\\tilde{\\alpha}_{\\rm f}) = \\frac{1}{2}\\ln\\left[\\frac{{\\rm e}^{\\tilde{S}_{\\rm f}}+2\\tilde{\\alpha}_{\\rm f}\\int_{r_{0}}^{r_{\\rm f}}{\\rm e}^{\\tilde{S}(r^{\\prime})}\\ {\\rm d}r^{\\prime}}{{\\rm e}^{\\tilde{S}_{\\rm f}}+2\\tilde{\\alpha}_{\\rm f}\\int_{r}^{r_{\\rm f}}{\\rm e}^{\\tilde{S}(r^{\\prime})}\\ {\\rm d}r^{\\prime}}\\right]+(1-F)\\int_{r_{0}}^{r}\\alpha_{\\rm m}(r^{\\prime})\\ {\\rm d}r^{\\prime}. \\tag{15}\\] Note that the Fernald procedure relies on three independently supplied parameters: (i) an accurate estimate of the molecular part of the scattering \\(\\alpha_{\\rm m}(r)\\) along the whole range of interest, (ii) the total extinction at the far end \\(\\tilde{\\alpha}_{\\rm f}\\), and (iii) a proper approximation for the phase fraction \\(F\\). As predicted by the Mie theory, it is quite difficult to obtain reasonable values for the latter. As for \\(\\tilde{\\alpha}_{\\rm f}\\), the conclusions are similar to those for Klett's \\(\\alpha_{\\rm f}\\). In Fig. 6, Fernald's inversion of a simulated lidar return is shown for different input values of \\(\\alpha_{\\rm a}(r_{\\rm f})\\) that enter the total extinction \\(\\tilde{\\alpha}_{\\rm f}\\). For upward-pointing lidar measurements a vanishing aerosol concentration can be assumed at the far end of the atmosphere, i.e. \\(\\alpha_{\\rm a}(r_{\\rm f})=0\\). To test the sensitivity of the reconstructed OD to this assumption, data sets with \\(\\alpha_{\\rm a}(r_{\\rm f})=\\pm\\alpha_{\\rm m}(r_{\\rm f})/2\\), and therefore \\(\\tilde{\\alpha}_{\\rm f}=(F\\pm 1/2)\\alpha_{\\rm m}(r_{\\rm f})\\), are also plotted. \\(P_{\\rm a}(180^{\\circ})=0.025\\,{\\rm sr}^{-1}\\) is used in the expression for the phase fraction \\(F\\). 
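Eqs. (13)-(14) differ from the Klett scheme only in the molecular correction, so the implementation is nearly identical; the sketch below (again mine, with the same array conventions and quadrature as the Klett sketch above) returns the aerosol extinction profile:

```python
import numpy as np

def backward_int(r, f):
    """Trapezoidal integral of f from each grid point r out to r_f = r[-1]."""
    seg = 0.5 * (f[1:] + f[:-1]) * np.diff(r)
    return np.concatenate((np.cumsum(seg[::-1])[::-1], [0.0]))

def fernald(r, p, alpha_m, F, alpha_a_f=0.0):
    """Eqs. (13)-(14): aerosol extinction alpha_a(r), with the molecular
    profile alpha_m(r) supplied externally and alpha_a_f assumed at r_f."""
    s = np.log(p * r**2 / (p[0] * r[0] ** 2))             # Eq. (8)
    s_t = s + 2.0 * (F - 1.0) * backward_int(r, alpha_m)  # Eq. (13)
    e = np.exp(s_t)
    a_f = F * alpha_m[-1] + alpha_a_f                     # alpha-tilde_f
    return -F * alpha_m + e / (e[-1] / a_f + 2.0 * backward_int(r, e))
```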
Compared to Klett's method, which does not separate aerosol and molecular scattering, it is not surprising that the variation of Fernald's results with the boundary parameters is somewhat weaker. Pinning the molecular part of the scattering undoubtedly stabilizes the OD profiles obtained. Nevertheless, Fernald's inversion still relies heavily on additional external parameters that are usually difficult, if not impossible, to measure.

Figure 6: Fernald inversion of a simulated lidar signal. The correct result is drawn with a solid line. The three data sets are inversions with \\(\\alpha_{\\rm a}(r_{\\rm f})=0\\) and \\(\\pm\\alpha_{\\rm m}(r_{\\rm f})/2\\). The phase fraction \\(F\\) is kept equal to the value used for the generation of the simulated lidar returns.

## 6 Horizontally invariant atmosphere

Fluorescence detectors for cosmic showers are usually placed at locations with specific atmospheric conditions. In the case of the Pierre Auger Observatory, the FD cameras cover the lower part of the atmosphere over an almost perfect \\(3000\\,\\)km\\({}^{2}\\) plane \\(1500\\,\\)m above sea level, with a remarkable fraction of cloudless days. Due to the high elevation and dry inland climate, an optically thin atmosphere is expected. But, as noted before, in this case the convergence of Klett's method is slower and can lead to erroneous estimates of the OD. Based on that, and on other peculiar problems of the well-established lidar inversion methods, a new approach with fewer _a priori_ or hard-to-estimate input parameters is needed. Since the lidar equation is not uniquely solvable, a minimal set of assumptions needed for inversion has to be reconsidered. For a typical FD site it is quite reasonable to assume weak horizontal variation of the atmospheric optical properties. That is even more true for the huge plane mentioned above, with hardly any changes in elevation and vegetation coverage. Since the FD operates exclusively at night, only atmospheric conditions at that time have to be considered. The mean night wind speeds do not exceed \\(12\\,\\)km/h [15], so that a particularly thin layer of aerosols close to the ground is expected. At night, a low probability for the formation of convective types of atmospheric instabilities is also expected.

### Two-angle reconstruction

Under the moderate assumptions presented above, the optical parameters of the atmosphere that enter the lidar equation (7) can be assumed to possess only vertical variations, while being uniform and invariant in the horizontal plane. Thus, it makes sense to rewrite the range-dependent \\(S\\)-function in Eq. (8) in terms of the height \\(h\\) and the geometric factor \\(\\xi=1/\\cos\\phi=\\sec\\phi\\), when lidar shots with zenith angle \\(\\phi\\) are considered. The \\(S\\)-function becomes \\[S(h,\\xi)=\\ln\\left[\\beta(h)/\\beta_{0}\\right]-2\\xi\\,\\tau(h;h_{0}) \\tag{16}\\] with the "vertical" OD \\(\\tau(h;h_{0})=\\int_{h_{0}}^{h}\\alpha(h^{\\prime})\\,\\mathrm{d}h^{\\prime}\\) and \\(\\beta_{0}=\\beta(h_{0})\\). After measuring two \\(S\\)-functions at different zenith angles \\(\\xi_{1}=1/\\cos\\phi_{1}\\) and \\(\\xi_{2}=1/\\cos\\phi_{2}\\) and height \\(h\\), Eq. (16) can be solved for the vertical OD, \\[\\tau(h)=-\\frac{1}{2}\\frac{S(h,\\xi_{1})-S(h,\\xi_{2})}{\\xi_{1}-\\xi_{2}}, \\tag{17}\\] and the backscatter coefficient ratio, \\[\\frac{\\beta(h)}{\\beta_{0}}=\\exp\\left[-\\frac{\\xi_{2}S(h,\\xi_{1})-\\xi_{1}S(h,\\xi_{2})}{\\xi_{1}-\\xi_{2}}\\right]. \\tag{18}\\]
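In code, Eqs. (17) and (18) amount to a two-point solve at each height bin; a minimal sketch, assuming the two \\(S\\)-functions have already been resampled onto a common height grid:

```python
import numpy as np

def two_angle(s1, s2, xi1, xi2):
    """Eqs. (17)-(18): vertical OD tau(h) and beta(h)/beta_0 from two
    S-functions measured at sec(zenith) factors xi1 and xi2."""
    tau = -0.5 * (s1 - s2) / (xi1 - xi2)
    beta_ratio = np.exp(-(xi2 * s1 - xi1 * s2) / (xi1 - xi2))
    return tau, beta_ratio
```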
\\tag{18}\\] Both quantities are directly proportional to the difference of two \\(S\\)-functions at the same height and different angles. Therefore, choosing a small separation between zenith angles, \\(\\xi_{1}=\\xi\\) and \\(\\xi_{2}=\\xi+\\,\\mathrm{d}\\xi\\), a differential form of Eq. (17) can be written, \\[\\tau(h)=-\\frac{1}{2}\\frac{\\partial S}{\\partial\\xi}\\bigg{|}_{h}. \\tag{19}\\] Equivalently, the differential form of Eq. (18) can be obtained, \\[\\frac{\\beta(h)}{\\beta_{0}}=\\exp\\left[S(h,\\phi)-\\xi\\frac{\\partial S}{\\partial \\xi}\\bigg{|}_{h}\\right]. \\tag{20}\\] Note that the OD is in that way determined up to the additive constant, and the backscatter coefficient up to the multiplicative factor. Nevertheless, both values should satisfy \\(S(h_{0})=0\\) and \\(\\tau(h_{0})=0\\). Taking into account the Poissonian statistics of collected photons, and neglecting all other sources of measurement uncertainties, a relative error of the obtained OD at some height depends on the lidar system parameters, \\[\\frac{\\sigma_{\\tau}}{\\tau}=\\frac{h/h_{0}}{2\\tau\\sqrt{N_{0}\\tilde{\\beta}}} \\cdot\\frac{1}{|\\xi_{1}-\\xi_{2}|}\\sqrt{\\mathrm{e}^{2\\xi_{1}\\tau}+\\mathrm{e}^{2 \\xi_{2}\\tau}}, \\tag{21}\\] as well as relative error of backscatter coefficient \\[\\frac{\\sigma_{\\beta}}{\\beta}=\\frac{h/h_{0}}{\\sqrt{N_{0}\\tilde{\\beta}}}\\cdot \\frac{\\sqrt{\\xi_{2}^{2}\\,\\mathrm{e}^{2\\xi_{1}\\tau}+\\xi_{1}^{2}\\,\\mathrm{e}^{2 \\xi_{2}\\tau}}}{|\\xi_{1}-\\xi_{2}|}, \\tag{22}\\] where \\(N_{0}\\) is number of detected photons in the time interval corresponding to the power return from height \\(h_{0}\\), and \\(\\tilde{\\beta}=\\beta/\\beta_{0}\\). In Fig. 7, an example of \\(S\\)-functions and their zenith angle variation is presented. All results are obtained from real lidar measurements performed during few November nights in a typical urban atmosphere (GPS location: 46\\({}^{\\circ}\\)04'35\" N, 014\\({}^{\\circ}\\)29'05\" E, 312 m above sea level). For fixed primary azimuth angle \\(\\phi_{1}=0^{\\circ}\\) and three selected secondary angles \\(\\phi_{2}=38^{\\circ}\\), \\(42^{\\circ}\\), and \\(47^{\\circ}\\) results for OD (Fig. 8), backscatter coefficient (Fig. 9), and \\(\\alpha\\beta\\) diagram (Fig. 10) are obtained from corresponding \\(S\\)-functions in Fig. 7. Due to presence of a thin layer of optically thick haze at \\(h\\approx 3\\) km, a drastic change in both OD and backscattering at that height is observed. Since OD is well determined only up to an additive constant, note that the variation of results for different \\(\\phi_{2}\\) is easily produced by the inadequate determination of \\(S_{0}\\), in other terms, by variation of atmospheric optical properties at \\(h_{0}\\) Compatible with a scale height of \\(\\sim 18\\) km, the variation of backscattering in Fig. 9 is slower as found in our model, generating a gradual but still comparable \\(\\alpha\\beta\\) diagram in Fig. 10. Figure 8: Reconstructed optical depth (OD) \\(\\tau\\) from three pairs of \\(S\\)-functions. In all pairs, \\(S_{1}\\) corresponds to the \\(S\\)-function with \\(\\phi=0^{\\circ}\\) (\\(\\xi=1\\)) and \\(S_{2}\\) to the \\(S\\)-functions with \\(\\phi=38^{\\circ}\\), \\(42^{\\circ}\\), and \\(47^{\\circ}\\) (\\(\\xi=1.27\\), \\(1.35\\), and \\(1.47\\)), respectively from bottom to top. Figure 9: Reconstructed backscatter coefficient \\(\\beta(h)/\\beta(h_{0})\\) from three pairs of \\(S\\)-functions. 
In Fig. 11, a logarithmic plot of the relative error in the OD is presented for typical lidar system parameters. First, the angle is fixed to \\(\\phi_{1}=0^{\\circ}\\) while the second one, \\(\\phi_{2}\\), is varied from a vertical to an almost horizontal shot. It is hard to avoid the fact that the minimum error is produced by the evaluation of two quite considerably separated lidar shots, \\(\\phi_{2}\\approx 70^{\\circ}\\).

Figure 11: Logarithmic plot of the relative deviation in OD, \\(\\sigma_{\\tau}/\\tau\\) (upper panel), and backscattering coefficient, \\(\\sigma_{\\beta}/\\beta\\) (lower panel), vs. the second shot angle \\(\\phi_{2}\\), in the case that the first shot zenith angle is set to \\(\\phi_{1}=0^{\\circ}\\). Values \\(h/h_{0}=8\\), \\(N_{0}=4\\cdot 10^{6}\\), \\(\\tau=1\\), and \\(\\beta/\\beta_{0}=0.6\\), corresponding to the far point (\\(h\\approx 8\\) km) in Fig. 8, have been assumed for the parameters in Eqs. (21) and (22) (solid line). Values \\(h/h_{0}=3\\), \\(N_{0}=4\\cdot 10^{6}\\), \\(\\tau=0.4\\), and \\(\\beta/\\beta_{0}=0.8\\), corresponding to the near point (\\(h\\approx 2\\) km), are assumed for the dashed curves. Note that \\(\\phi=60^{\\circ}\\) corresponds to \\(\\xi=2\\).

Even at moderate elevations \\(h\\) this can amount to large spatial separations of the two points of the lidar return, and thus the requirement of horizontal invariance is easily broken. In the case of an atmosphere that is slowly horizontally modulated, a more "local" approach to the OD problem is needed.

### Multi-angle reconstruction

For the ideal atmosphere, with true horizontal invariance, the \\(\\xi\\) dependence of the \\(S\\)-function is particularly simple, \\[S(h,\\xi)=\\ln[\\beta(h)/\\beta_{0}]-2\\xi\\,\\tau(h;h_{0}), \\tag{23}\\] with the backscatter coefficient \\(\\ln[\\beta/\\beta_{0}]\\) as the offset and the OD \\(\\tau\\) as the slope of the resulting linear function in \\(\\xi\\). Therefore, the optical properties of the atmosphere can alternatively be obtained from an analysis of the \\(S\\)-function behavior for scanning lidar measurements. Furthermore, disagreement of the measured \\(S(\\xi)\\) profiles with the linear form is a suitable criterion for the detection of deviations from the assumed horizontal invariance of the atmosphere. A generalization of the two-angle equations (19) and (20) to their differential counterparts strongly suggests this way of reconstructing the optical properties, the two-angle method being a mere two-point approximation of the linear function in Eq. (23). Taking into account the quite substantial uncertainties in \\(S(\\xi)\\) for a single angle, a linear fit through many data points yields superior results, and the reconstruction is no longer limited to two lidar shots well separated in angle (a sketch of such a fit is given below). The preferred horizontal invariance is not required to take place across huge atmospheric volumes (as in the case of \\(\\phi_{1}=0^{\\circ}\\) and \\(\\phi_{2}=60^{\\circ}\\) shots), but has to be met only in the relatively small arc of interest where the continuous lidar scan is performed. 
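Per Eq. (23), the multi-angle reconstruction at each fixed height is just a least-squares line in \\(\\xi\\); a minimal sketch of mine:

```python
import numpy as np

def multi_angle_fit(xi, s):
    """Eq. (23): at a fixed height, S(xi) = ln[beta/beta_0] - 2*tau*xi.
    A least-squares line gives tau (from the slope) and beta/beta_0 (offset)."""
    slope, offset = np.polyfit(xi, s, 1)
    return -0.5 * slope, np.exp(offset)

# e.g. a scan over zenith angles 0..48 deg at one height bin:
# xi = 1.0 / np.cos(np.radians(np.linspace(0.0, 48.0, 25)))
# tau, beta_ratio = multi_angle_fit(xi, s_at_this_height)
```

The residuals of the same fit double as the linearity check mentioned above: a systematic departure from the straight line flags a breakdown of horizontal invariance within the scanned arc.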
In the opposite case, when a slow variation of the atmospheric properties in the horizontal plane is allowed, Eq. (19) is similar enough to the renowned 1D "slope method", based on the assumption of small variation of \\(\\beta(r)\\), or equivalently \\(\\,\\mathrm{d}\\beta/\\,\\mathrm{d}r\\approx 0\\). Bear in mind that in the method presented here the variation of \\(\\beta\\) with height can be of any magnitude, as long as there are only modest variations in the horizontal direction. In contrast to Fig. 7, in Fig. 12 the \\(S\\)-function profiles with respect to zenith \\(\\xi\\) are drawn for fixed heights, starting with \\(h=3.2\\,\\)km and up to \\(7\\,\\)km in \\(633\\,\\)m steps. Approximately linear behavior is observed in a few arc intervals, with narrow bands of minute atmospheric shifts at \\(\\xi=1.15\\) and 1.38. Since these shifts in the profiles disappear when lifting \\(h_{0}\\) from \\(3\\,\\)km to \\(3.5\\,\\)km, they are obviously due to distortions of the atmosphere in the latter interval, a feature already observed in Fig. 8. In Fig. 13 the results of the fitting and extraction of the OD are similar to the ones in Fig. 8. Note that in both cases the OD is obtained relative to the \\(h_{0}\\) point, so that the results may differ up to some additive constant. Therefore, comparing both figures, it is more accurate to concentrate on the same span of OD within the \\(3.5\\,\\)km to \\(9\\,\\)km interval. Nevertheless, the range of OD results with acceptable error bars is, with the multi-angle method, increased up to \\(12\\,\\)km. The relative error of the OD in Fig. 14 is needed for a correct estimation of the shower energy uncertainty. It is kept below 6% even for the OD from the far points of the range, and below 3% for modest values of OD. Fig. 15, with values for \\(\\beta/\\beta_{0}\\), should be compared to Fig. 9.

## 7 Conclusions

Inversion attempts of simulated lidar returns for the atmosphere modeled by Eqs. (6) show numerous drawbacks of the established numerical methods. For instance, Klett's and Fernald's methods of sections 5.1 and 5.2 do not satisfy the specific requirements of FD calibration. While they may be useful for qualitative reconstruction of atmospheric properties (spatial haze/cloud distribution, cloud base etc.), they are not applicable for an absolute assessment of atmospheric transmission properties.

Figure 12: Dependence of the \\(S\\)-function on zenith angle at various heights \\(h=3.2\\) (black), 5.6 (gray), and 8 km (light gray), while \\(h_{0}=3\\,\\)km. Note that \\(\\xi=1\\) corresponds to \\(\\phi=0^{\\circ}\\), and \\(\\xi=1.5\\) to \\(\\phi=48^{\\circ}\\).

Figure 13: Optical depth \\(\\tau\\) obtained by linear fits of the angle dependence of the \\(S\\)-functions in Fig. 12.

There are many reasons for this failure. One of them is certainly the strong dependence of the obtained inversions on the presumed extinction/backscatter functional relation, Eq. (9), in the case of Klett's method, and on the assumed spatial dependence of the Rayleigh scattering on molecules in Fernald's case. Another issue is the extraordinarily difficult measurement of the far-side extinction rate \\(\\alpha_{\\mathrm{f}}\\), needed in Eq. (10), and of the phase fraction \\(F\\), Eq. (13). We are therefore forced to find better solutions, even at the expense of adding scanning capabilities to an otherwise rigid lidar setup. In contrast to that, based on the sole assumption of a horizontally invariant (or at least horizontally slowly varying) atmosphere, the two-angle, and especially the multi-angle, method presented in section 6, while simple in structure, nevertheless produces reliable quantitative answers with small uncertainties (e.g., see Figs. 8 and 13) to FD calibration questions. 
As found by our investigation of first-run measurements, the relative error of the OD for distances up to 12 km stays well below 6%. This number can be reduced even further by slow angular scanning and fast multiple-shot averaging of the lidar returns. Nevertheless, in that case the increased interaction between the FD and the lidar laser source, especially the FD blind time, has to be taken into account. Furthermore, concerning the specific form of the atmospheric transmission entering Eq. (2), these methods offer a suitable starting ground for the development of methods that can considerably reduce the systematic errors of the shower energy \\(E_{\\rm em}\\) estimation with fluorescence detectors.

Figure 14: Dependence of the relative error in optical depth on the depth itself, \\(\\sigma_{\\tau}/\\tau\\). Data points and uncertainties are from Fig. 13.

Figure 15: Relative backscattering coefficient \\(\\beta(h)/\\beta_{0}\\) from the \\(S\\)-functions in Fig. 12.

In the case of strict horizontal invariance, both methods deliver exact solutions of the lidar Eq. (4), with the accuracy of the results limited only by the quality of the measurement. In that way, they offer a reliable framework for the study of the notorious _lidar ratio problem_ (i.e., the extinction-to-backscatter codependency), widely discussed in the pure lidar community [16]. Since, for example in the case of the Pierre Auger Observatory, the optical properties have to be known over large volumes of the atmosphere, and a scanning lidar is therefore a necessity, both mentioned methods represent a natural first choice of data analysis.

## Acknowledgements

The authors would like to express gratitude to O. Ullaland for the support and encouragement during our work. The authors also wish to thank G. Navarra for assistance with the EAS-TOP telescopes. This work has been supported by the Slovenian Ministry of Education, Science, and Sport under program No. P0-0501-1540.

## References

* [1] R.M. Baltrusaitis et al., Nucl. Instrum. Methods A **240**, 410 (1985); Phys. Rev. Lett. **54**, 1875 (1985).
* [2] T. Abu-Zayyad et al., Proc. 25th ICRC **5**, 321 (1997); ibid. **5**, 325 (1997); ibid. **5**, 329 (1997).
* [3] D. Zavrtanik, J. Phys. G: Nucl. Phys. **27**, 1597 (2001).
* [4] _Pierre Auger Observatory Design Report_, Second Edition, Auger Collaboration (1997).
* [5] I. Arcon, A. Filipcic, and M. Zavrtanik, Pierre Auger Collaboration note GAP-1999-028, Fermilab (1999).
* [6] D.J. Bird et al., _Atmospheric Monitoring for Fluorescence Detector Experiments_ in Proc. 24th ICRC (1995).
* [7] J.D. Klett, Appl. Optics **20**, 211 (1981); ibid. **24**, 1638 (1985).
* [8] F.G. Fernald, Appl. Optics **23**, 652 (1984).
* [9] Comedi, _Linux control and measurement device interface_, http://stm.lbl.gov/comedi (2001).
* [10] ROOT, _An Object-Oriented Data Analysis Framework_, http://root.cern.ch (2001).
* [11] R.T.H. Collis and P.B. Russell, _Lidar Measurement of Particles and Gases by Elastic Backscattering and Differential Absorption_ in _Laser Monitoring of the Atmosphere_, edited by E.D. Hinkley, p. 88 (Springer, 1976).
* [12] F. Rocadenbosch and A. Comeron, Appl. Optics **38**, 4461 (1999).
* [13] M. Horvat, _Measurement of atmospheric optical properties with lidar system_, graduate thesis (Ljubljana, 2001).
* [14] T. Yamamoto et al., _Telescope Array atmospheric monitoring system at Akeno Observatory_ in Proc. 27th ICRC (2001).
* [15] P. Bauleo et al., Pierre Auger Collaboration note GAP-1998-041, Fermilab (1998). 
* [16] see, for example, _EARLINET: A European Aerosol Research Lidar Network to Establish an Aerosol Climatology_, Scientific Report for the period Feb. 2000 to Jan. 2001, compiled by J. Bosenberg (2001), [http://lidarb.dkrz.de/earlinet](http://lidarb.dkrz.de/earlinet); F. Rocadenbosch, C. Soriano, A. Comeron, and J.-M. Baldasano, Appl. Optics **38**, 3175 (1999) and references therein.
* [17] J.A.J. Matthews, Pierre Auger Collaboration note GAP-2001-046, Fermilab (2001); ibid. GAP-2001-051.
* [18] J.A.J. Matthews and R. Clay, _Atmospheric Monitoring for the Auger Fluorescence Detector_ in Proc. 27th ICRC (2001).
* [19] B. Dawson, Pierre Auger Collaboration note GAP-2001-016, Fermilab (2001).
Measurements of the cosmic-ray air-shower fluorescence at extreme energies require precise knowledge of atmospheric conditions. The absolute calibration of the cosmic-ray energy depends on the absorption of the fluorescence light between its origin and the point of its detection. To reconstruct the basic atmospheric parameters, we review a novel analysis method based on two- and multi-angle measurements performed with a scanning backscatter lidar system. The applied inversion methods, the optical depth, the absorption and backscatter coefficients, as well as other parameters that enter the lidar equation are discussed in connection with the attenuation of the light traveling from the shower to the fluorescence detector.
# Complete relativistic equation of state for neutron stars

H. Shen

Electronic address: [email protected]

CCAST (World Laboratory), P.O. Box 8730, Beijing 100080, China
Department of Physics, Nankai University, Tianjin 300071, China
Center of Theoretical Nuclear Physics, National Laboratory of Heavy Ion Accelerator, Lanzhou 730000, China
Institute of Theoretical Physics, Beijing 100080, China

## I Introduction

The properties of neutron stars are mainly determined by the equation of state (EOS) of neutron star matter, which is charge-neutral matter in \\(\\beta\\)-equilibrium. A comprehensive description of neutron stars should include not only the interior region but also the inner and outer crusts; therefore, the EOS for neutron stars is required to cover a wide density range. For the EOS at high densities, there are many efforts based on both non-relativistic and relativistic approaches, which discuss several possible mechanisms to soften the EOS at high densities, e.g., by hyperons, kaon condensates, or even quark phases [1, 2, 3]. When the density falls below \\(10^{14}\\) g/cm\\({}^{3}\\), some heavy nuclei may be formed and matter becomes inhomogeneous. There are a few works based on non-relativistic models describing the EOS at low densities where heavy nuclei exist [4, 5, 6]. Most studies of neutron stars use a composite EOS, which is constructed by connecting the EOS at high densities to the one at low densities [7, 8, 9]. Even though the EOS at high densities is based on various relativistic many-body theories, it has to be combined with some non-relativistic EOS at low densities. The differences between the models used in the different density ranges usually lead to some discontinuity and inconsistency in the composite EOS. Therefore, it is very interesting to construct the EOS in the whole density range within the same framework. In this paper, we provide a complete relativistic EOS for the study of neutron stars, which is based on the relativistic mean field (RMF) theory. The RMF theory has been quite successfully and widely used for the description of nuclear matter and finite nuclei [10, 11, 12]. We study the properties of dense matter with both uniform and non-uniform distributions in the RMF framework adopting the parameter set TM1, which is known to provide excellent properties of the ground states of heavy nuclei including unstable nuclei [13]. The RMF theory with the TM1 parameter set was also shown to reproduce satisfactory agreement with experimental data in studies of nuclei with deformed configurations and of the giant resonances within the RPA formalism [14, 15, 16]. At high densities, hyperons may appear as new degrees of freedom through the weak interaction; the neutron star matter is then composed of neutrons, protons, hyperons, electrons, and muons in \\(\\beta\\)-equilibrium. For the non-uniform matter at low densities, we perform the Thomas-Fermi calculation, in which the RMF results are taken as its input. The non-uniform matter is assumed to be composed of a lattice of spherical nuclei immersed in an electron gas with (or without) free neutrons dripping out of nuclei [17, 18]. The optimal state at each density is determined by minimizing the energy density with respect to the independent parameters in the model. The phase transition from non-uniform matter to uniform matter takes place around \\(10^{14}\\) g/cm\\({}^{3}\\).
The same method (but without the inclusion of hyperons) has been used to work out the equation of state at finite temperature with various proton fractions for the use of supernova simulations [19, 20]. This paper is arranged as follows. In Sec. II, we briefly describe the RMF theory and its parameters. In Sec. III, we explain the Thomas-Fermi approximation used for the description of non-uniform matter. The resulting EOS in the whole density range is shown and discussed in Sec. IV. We apply the relativistic EOS to study the constitution and structure of neutron stars in Sec. V. The conclusion is presented in Sec. VI.

## II Relativistic mean field theory

We briefly explain the RMF theory used to describe the uniform matter. In the RMF theory, baryons interact via the exchange of mesons. The baryons considered in the present calculation include nucleons (\\(n\\) and \\(p\\)) and hyperons (\\(\\Lambda\\), \\(\\Sigma\\), \\(\\Xi\\)). The exchanged mesons consist of isoscalar scalar and vector mesons (\\(\\sigma\\) and \\(\\omega\\)), the isovector vector meson (\\(\\rho\\)), and two strange mesons (\\(\\sigma^{*}\\) and \\(\\phi\\)) which couple only to hyperons. The total Lagrangian density of neutron star matter, in the mean field approximation, can be written as

\\[{\\cal L} = \\sum_{B}\\bar{\\psi}_{B}\\left[i\\gamma_{\\mu}\\partial^{\\mu}-\\left(m_{B}-g_{\\sigma B}\\sigma-g_{\\sigma^{*}B}\\sigma^{*}\\right)-\\left(g_{\\omega B}\\omega+g_{\\phi B}\\phi+g_{\\rho B}\\tau_{3}\\rho\\right)\\gamma^{0}\\right]\\psi_{B} \\tag{1}\\]
\\[-\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}+\\frac{1}{3}g_{2}\\sigma^{3}-\\frac{1}{4}g_{3}\\sigma^{4}+\\frac{1}{2}m_{\\omega}^{2}\\omega^{2}+\\frac{1}{4}c_{3}\\omega^{4}+\\frac{1}{2}m_{\\rho}^{2}\\rho^{2}\\]
\\[-\\frac{1}{2}m_{\\sigma^{*}}^{2}{\\sigma^{*}}^{2}+\\frac{1}{2}m_{\\phi}^{2}\\phi^{2}+\\sum_{l}\\bar{\\psi}_{l}\\left(i\\gamma_{\\mu}\\partial^{\\mu}-m_{l}\\right)\\psi_{l}\\,\\]

where the sum on \\(B\\) is over all the charge states of the baryon octet (\\(p\\), \\(n\\), \\(\\Lambda\\), \\(\\Sigma^{+}\\), \\(\\Sigma^{0}\\), \\(\\Sigma^{-}\\), \\(\\Xi^{0}\\), \\(\\Xi^{-}\\)), and the sum on \\(l\\) is over the electrons and muons (\\(e^{-}\\) and \\(\\mu^{-}\\)). The meson mean fields are denoted as \\(\\sigma\\), \\(\\omega\\), \\(\\rho\\), \\(\\sigma^{*}\\), and \\(\\phi\\). The inclusion of the non-linear \\(\\sigma\\) and \\(\\omega\\) terms is essential to reproduce the features of the relativistic Brueckner-Hartree-Fock theory and satisfactory properties of finite nuclei [13]. We adopt the TM1 parameter set for the meson-nucleon couplings, the self-coupling constants, and the relevant meson masses, which was determined in Ref. [13] by reproducing the properties of finite nuclei over a wide mass range of the periodic table, including neutron-rich nuclei. The RMF theory with the TM1 parameter set was also shown to reproduce satisfactory agreement with experimental data in studies of nuclei with deformed configurations and of the giant resonances within the RPA formalism [14, 15, 16]. The hyperon masses are taken to be \\(m_{\\Lambda}=1116\\) MeV, \\(m_{\\Sigma}=1193\\) MeV, and \\(m_{\\Xi}=1313\\) MeV, while the strange meson masses are \\(m_{\\sigma^{*}}=975\\) MeV and \\(m_{\\phi}=1020\\) MeV [3].
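To illustrate how the mean fields are obtained in practice, the sketch below solves the nonlinear \\(\\sigma\\) equation of motion that follows from the Lagrangian (1) for symmetric nuclear matter by root finding. The coupling values are hypothetical placeholders (deliberately not the TM1 values of Ref. [13]), and the nonlinear self-couplings are switched off for simplicity; the sketch only demonstrates the structure of the self-consistency problem, not the calculation of this paper.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.327  # MeV fm
M_N = 938.0      # nucleon mass, MeV

def scalar_density(k_F, m_eff, degeneracy=4):
    """rho_s = deg/(2 pi^2) * Int_0^kF m* k^2 / sqrt(k^2 + m*^2) dk (fm^-3);
    degeneracy 4 for spin-isospin symmetric nuclear matter."""
    val, _ = quad(lambda k: m_eff * k**2 / np.sqrt(k**2 + m_eff**2), 0.0, k_F)
    return degeneracy / (2.0 * np.pi**2) * val / HBARC**3

def sigma_equation(sigma, k_F, g_s, m_s, g2, g3):
    """Residual of m_s^2 sigma - g2 sigma^2 + g3 sigma^3 = g_s rho_s,
    with fields and masses in MeV (densities converted via hbar*c)."""
    m_eff = M_N - g_s * sigma
    lhs = m_s**2 * sigma - g2 * sigma**2 + g3 * sigma**3
    return lhs - g_s * scalar_density(k_F, m_eff) * HBARC**3

# Hypothetical couplings (NOT the TM1 set), Fermi momentum near saturation:
g_s, m_s, g2, g3, k_F = 10.0, 510.0, 0.0, 0.0, 260.0
sigma = brentq(sigma_equation, 0.0, 90.0, args=(k_F, g_s, m_s, g2, g3))
print(f"sigma = {sigma:.1f} MeV, m*/m = {1.0 - g_s * sigma / M_N:.2f}")
```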
As for the hyperon couplings, we employ the relations derived from the quark model. The meson mean fields are determined by the coupled equations of motion following from the Lagrangian (1); for the \\(\\phi\\) field, for instance,

\\[m_{\\phi}^{2}\\phi=\\sum_{B}g_{\\phi B}\\left(2J_{B}+1\\right)k_{B}^{3}/(6\\pi^{2}), \\tag{7}\\]

where \\(m_{B}^{*}=m_{B}-g_{\\sigma B}\\sigma-g_{\\sigma^{*}B}\\sigma^{*}\\) is the effective mass of the baryon species \\(B\\) entering the remaining field equations, and \\(k_{B}\\) is its Fermi momentum. \\(J_{B}\\) and \\(I_{3B}\\) denote the spin and the isospin projection of baryon \\(B\\). For neutron star matter with uniform distributions, the composition is determined by the requirements of charge neutrality and \\(\\beta\\)-equilibrium. Considering the baryon octet and the leptons included in the present calculation, the \\(\\beta\\)-equilibrium conditions, without trapped neutrinos, can be expressed as

\\[\\mu_{p}=\\mu_{\\Sigma^{+}}=\\mu_{n}-\\mu_{e},\\quad\\mu_{\\Lambda}=\\mu_{\\Sigma^{0}}=\\mu_{\\Xi^{0}}=\\mu_{n},\\quad\\mu_{\\Sigma^{-}}=\\mu_{\\Xi^{-}}=\\mu_{n}+\\mu_{e},\\quad\\mu_{\\mu}=\\mu_{e}.\\]

Together with the charge neutrality condition, these relations fix the composition at each baryon density. The total pressure of neutron star matter is then given by

\\[P = \\frac{1}{3}\\sum_{B}\\frac{2J_{B}+1}{2\\pi^{2}}\\int_{0}^{k_{B}}\\frac{k^{4}\\;dk}{\\sqrt{k^{2}+m_{B}^{*2}}}-\\frac{1}{2}m_{\\sigma}^{2}\\sigma^{2}+\\frac{1}{3}g_{2}\\sigma^{3}-\\frac{1}{4}g_{3}\\sigma^{4} \\tag{13}\\]
\\[+\\frac{1}{2}m_{\\omega}^{2}\\omega^{2}+\\frac{1}{4}c_{3}\\omega^{4}+\\frac{1}{2}m_{\\rho}^{2}\\rho^{2}-\\frac{1}{2}m_{\\sigma^{*}}^{2}\\sigma^{*2}+\\frac{1}{2}m_{\\phi}^{2}\\phi^{2}\\]
\\[+\\frac{1}{3}\\sum_{l}\\frac{1}{\\pi^{2}}\\int_{0}^{k_{l}}\\frac{k^{4}\\;dk}{\\sqrt{k^{2}+m_{l}^{2}}}.\\]

## III Thomas-Fermi approximation

In the low density range, where heavy nuclei exist, we perform the Thomas-Fermi calculation based on the work of Oyamatsu [17]. In this approximation, the non-uniform matter is modeled as a lattice of nuclei immersed in a vapor of neutrons and electrons. At the lowest densities, no neutrons drip out of the nuclei. We assume that each heavy spherical nucleus is located at the center of a charge-neutral cell consisting of a vapor of neutrons and electrons. The nuclei form a body-centered-cubic (BCC) lattice to minimize the Coulomb lattice energy. It is useful to introduce the Wigner-Seitz cell to simplify the energy of the unit cell. The Wigner-Seitz cell is a sphere whose volume is the same as that of the unit cell in the BCC lattice. We assume the nucleon distribution functions \\(n_{i}(r)\\) (\\(i=n\\) for neutrons, \\(i=p\\) for protons) in the Wigner-Seitz cell to be of the form

\\[n_{i}\\left(r\\right)=\\left\\{\\begin{array}{ll}\\left(n_{i}^{in}-n_{i}^{out}\\right)\\left[1-\\left(\\frac{r}{R_{i}}\\right)^{t_{i}}\\right]^{3}+n_{i}^{out},&0\\leq r\\leq R_{i}\\\\ n_{i}^{out},&R_{i}\\leq r\\leq R_{cell}\\end{array}\\right., \\tag{14}\\]

where \\(r\\) represents the distance from the center of the nucleus, and \\(R_{cell}\\) is the radius of the Wigner-Seitz cell defined by the relation \\(V_{cell}=\\frac{4\\pi}{3}R_{cell}^{3}=a^{3}\\) (\\(a\\) is the lattice constant). The parameters \\(R_{i}\\) and \\(t_{i}\\) determine the boundary and the relative surface thickness of the heavy nucleus. \\(R_{n}\\) and \\(t_{n}\\) may differ slightly from \\(R_{p}\\) and \\(t_{p}\\) due to the additional neutrons forming a neutron skin in the surface region. For neutron star matter at a given average density of baryons, \\(n_{B}=\\int_{cell}\\left[n_{n}\\left(r\\right)+n_{p}\\left(r\\right)\\right]d^{3}r\\) / \\(V_{cell}\\), there are only seven independent parameters among the eight variables: \\(a,n_{n}^{in},n_{n}^{out},R_{n},t_{n},n_{p}^{in},R_{p},t_{p}\\). The optimal state is determined by minimizing the average energy density, \\(\\varepsilon=E_{cell}\\) / \\(V_{cell}\\), with respect to those independent parameters.
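The parametrized profile of Eq. (14) is straightforward to implement; the sketch below is a direct transcription, with hypothetical parameter values in place of the optimized ones.

```python
import numpy as np

def nucleon_profile(r, n_in, n_out, R, t):
    """Nucleon distribution n_i(r) of Eq. (14) in the Wigner-Seitz cell:
    (n_in - n_out) * (1 - (r/R)**t)**3 + n_out for r <= R, n_out outside."""
    r = np.asarray(r, dtype=float)
    inside = (n_in - n_out) * (1.0 - (r / R) ** t) ** 3 + n_out
    return np.where(r <= R, inside, n_out)

# Hypothetical parameters (densities in fm^-3, lengths in fm):
r = np.linspace(0.0, 20.0, 5)
print(nucleon_profile(r, n_in=0.08, n_out=1e-4, R=7.0, t=4.0))
```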
The total energy per cell, \\(E_{cell}\\), can be written as

\\[E_{cell}=E_{bulk}+E_{s}+E_{C}+E_{e}. \\tag{15}\\]

Here the bulk energy of the baryons, \\(E_{bulk}\\), is calculated as

\\[E_{bulk}=\\int_{cell}\\varepsilon_{RMF}\\left(\\ n_{n}\\left(r\\right),n_{p}\\left(r\\right)\\ \\right)d^{3}r, \\tag{16}\\]

where \\(\\varepsilon_{RMF}\\) is the energy density in the RMF theory as a functional of the neutron density \\(n_{n}\\) and the proton density \\(n_{p}\\). As the input in the Thomas-Fermi calculation, \\(\\varepsilon_{RMF}\\) at each radius \\(r\\) is calculated in the RMF theory for uniform matter with the corresponding densities \\(n_{n}\\) and \\(n_{p}\\). The surface energy term \\(E_{s}\\), due to the inhomogeneity of the nucleon distribution, is given by

\\[E_{s}=\\int_{cell}F_{0}\\mid\\nabla\\left(\\ n_{n}\\left(r\\right)+n_{p}\\left(r\\right)\\ \\right)\\mid^{2}\\ d^{3}r, \\tag{17}\\]

where the parameter \\(F_{0}=70\\ {\\rm MeV}\\cdot{\\rm fm}^{5}\\) is determined by performing Thomas-Fermi calculations of finite nuclei as described in the appendix of Ref. [17]. The Coulomb energy per cell, \\(E_{C}\\), is calculated using the Wigner-Seitz approximation with an added correction term for the BCC lattice:

\\[E_{C}=\\frac{1}{2}\\int_{cell}e\\ \\left[n_{p}\\left(r\\right)-n_{e}\\right]\\ \\phi(r)\\ d^{3}r\\ +\\ \\triangle E_{C}, \\tag{18}\\]

where \\(\\phi(r)\\) stands for the electrostatic potential calculated in the Wigner-Seitz approximation, and \\(\\triangle E_{C}=C_{BCC}(Ze)^{2}/a\\) is the correction term for the BCC lattice as given in Ref. [17]. \\(n_{e}\\) is the number density of the uniform electron gas, which is determined by the charge neutrality condition as \\(n_{e}=Z/V_{cell}\\) (\\(Z\\) denotes the proton number per cell). The last term in Eq. (15), \\(E_{e}\\), is the kinetic energy of the electrons, which is given by

\\[E_{e}=\\frac{1}{\\pi^{2}}\\int_{0}^{k_{e}}\\sqrt{k^{2}+m_{e}^{2}}\\ k^{2}dk, \\tag{19}\\]

where \\(k_{e}=(3\\pi^{2}n_{e})^{1/3}\\) is the Fermi momentum of the electrons. For each baryon density \\(n_{B}\\), we minimize the average energy density \\(\\varepsilon\\) of the non-uniform matter with respect to the independent parameters of the Thomas-Fermi approximation. At higher densities, the heavy nuclei dissolve and the matter becomes homogeneous. We determine the density at which this phase transition takes place by comparing the energy density of non-uniform matter with that of uniform matter.
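Of the four terms in Eq. (15), the electron contribution (19) is fully specified above and can serve as a small worked example; the cell parameters chosen below (proton number and cell radius) are hypothetical round numbers.

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV fm

def electron_energy_density(n_e, m_e=0.511):
    """Electron kinetic energy density of Eq. (19) in MeV/fm^3;
    n_e in fm^-3, k_e = (3 pi^2 n_e)^(1/3) converted to MeV via hbar*c."""
    k_e = (3.0 * np.pi**2 * n_e) ** (1.0 / 3.0) * HBARC  # MeV
    val, _ = quad(lambda k: np.sqrt(k**2 + m_e**2) * k**2, 0.0, k_e)
    return val / np.pi**2 / HBARC**3

# Hypothetical example: Z = 40 electrons neutralizing a cell of radius 20 fm
n_e = 40.0 / (4.0 * np.pi / 3.0 * 20.0**3)
print(electron_energy_density(n_e))  # a few 1e-2 MeV/fm^3
```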
## IV Properties of neutron star matter

In this section, we present the resulting EOS of neutron star matter in the density range from \\(10^{-7}\\) to \\(1.2\\) fm\\({}^{-3}\\). At low densities where heavy nuclei exist, the non-uniform matter is described by the Thomas-Fermi approximation, in which the optimal state is determined by minimizing the average energy density with respect to its independent parameters. For densities below \\(\\sim 2.4\\times 10^{-4}\\) fm\\({}^{-3}\\), the nucleons form the optimal nuclei, and those nuclei build a BCC lattice with a uniform electron gas. It is found that the neutrons begin to drip out of the nuclei at \\(n_{B}\\sim 2.4\\times 10^{-4}\\) fm\\({}^{-3}\\); beyond this point there is a neutron gas in addition to the electron gas. In Fig. 1 we show the neutron and proton distributions along the straight line joining the centers of the nearest nuclei in the BCC lattice at the average baryon densities \\(n_{B}=0.0001,\\ 0.001,\\ 0.01,\\ 0.05\\) fm\\({}^{-3}\\). As the density increases, the optimal nuclei become closer and more neutron rich. At \\(n_{B}\\sim 0.06\\) fm\\({}^{-3}\\), the nuclei dissolve and the optimal state is uniform matter consisting of neutrons, protons, and electrons in \\(\\beta\\)-equilibrium. When the electron chemical potential exceeds the rest mass of the muon (at \\(n_{B}\\approx 0.11\\) fm\\({}^{-3}\\)), it becomes energetically favorable to convert the electrons at the Fermi surface into muons; the muons then appear with the chemical equilibrium condition \\(\\mu_{e}=\\mu_{\\mu}\\). Hyperons appear at higher densities (\\(n_{B}\\gtrsim 0.27\\) fm\\({}^{-3}\\)). In Fig. 2 we show the fraction of species \\(i\\), \\(Y_{i}=n_{i}/n_{B}\\), as a function of the total baryon density \\(n_{B}\\). The composition of uniform neutron star matter is calculated by solving the coupled equations (3)-(7), (8), and (11). The threshold density for a hyperon species is determined not only by its charge and mass but also by the meson mean fields, which are shown in Fig. 3 as functions of the baryon density. In the present calculation, \\(\\Sigma^{-}\\) is the first hyperon to appear, at \\(n_{B}\\approx 0.27\\) fm\\({}^{-3}\\), while \\(\\Lambda\\) has almost the same threshold density (\\(n_{B}\\approx 0.29\\) fm\\({}^{-3}\\)). This is partly because the negative charge is much more favorable, even though \\(\\Sigma^{-}\\) has a somewhat larger mass than \\(\\Lambda\\). The other hyperons, \\(\\Sigma^{0}\\), \\(\\Sigma^{+}\\), \\(\\Xi^{-}\\), and \\(\\Xi^{0}\\), appear one by one at higher densities (\\(n_{B}\\approx 0.57,\\ 0.72,\\ 0.84,\\ 1.17\\) fm\\({}^{-3}\\)). The appearance of hyperons causes some decrease of the nucleon fractions. At high densities (\\(n_{B}\\gtrsim 0.7\\) fm\\({}^{-3}\\)), the \\(\\Lambda\\) fraction is even larger than the neutron fraction. We note that the hyperon threshold densities and fractions are sensitive to the hyperon couplings, and there are quite large uncertainties in these couplings. In this work, we adopt the hyperon couplings derived from the quark model. We display in Fig. 4 the pressure of neutron star matter as a function of the energy density. The present EOS, shown by the solid curve, is compared with the EOS considering only the uniform matter phase (dotted curve); it is found that the contribution from the non-uniform matter is quite large at low densities. The EOS without hyperons is also shown for comparison by the dashed curve. The inclusion of hyperons considerably softens the EOS at high densities, because the conversion of nucleons to hyperons can relieve the Fermi pressure of the nucleons. In Fig. 5 we show the fraction of species \\(i\\), \\(Y_{i}\\), in neutron star matter as a function of the average baryon density \\(n_{B}\\). It is very interesting to see the phase transitions over the wide density range. At low densities, all nucleons exist inside nuclei; therefore the fraction of nucleons in nuclei (dot-dashed curve) is equal to one. The decrease of the electron fraction (dotted curve), which is equal to the proton fraction due to charge neutrality, implies that the optimal nucleus becomes more and more neutron rich as the density increases. Beyond the neutron drip density (\\(n_{B}\\sim 2.4\\times 10^{-4}\\) fm\\({}^{-3}\\)), there is an increasing fraction of free neutrons outside the nuclei (solid curve), and this causes a rapid decrease of the fraction of nucleons in nuclei (dot-dashed curve). The phase transition from non-uniform matter to uniform matter occurs at \\(\\sim 0.06\\) fm\\({}^{-3}\\), where the heavy nuclei dissolve and the matter consists of neutrons, protons, and electrons in \\(\\beta\\)-equilibrium. We note that the neutron star matter is assumed to be at zero temperature, so there is no free proton gas outside the nuclei in the non-uniform matter phase. The muon fraction appears at \\(n_{B}\\approx 0.11\\) fm\\({}^{-3}\\) with the charge neutrality condition \\(Y_{\\mu}+Y_{e}=Y_{p}\\). At high densities (\\(n_{B}\\gtrsim 0.27\\) fm\\({}^{-3}\\)), the hyperon fractions appear, as shown more clearly in Fig. 2.
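The muon onset quoted above follows from a one-line estimate: muons appear once the electron chemical potential, \\(\\mu_{e}\\approx(3\\pi^{2}Y_{e}n_{B})^{1/3}\\hbar c\\) for nearly massless degenerate electrons, exceeds \\(m_{\\mu}\\approx 105.7\\) MeV. The sketch below inverts this relation for an assumed electron fraction; the value \\(Y_{e}=0.05\\) is a hypothetical round number, not read off Fig. 2.

```python
import numpy as np

HBARC, M_MU = 197.327, 105.66  # MeV fm, MeV

def muon_onset_density(Y_e):
    """Baryon density (fm^-3) at which mu_e = m_mu for massless,
    degenerate electrons with a fixed electron fraction Y_e."""
    k_e = M_MU / HBARC                 # required electron Fermi momentum, fm^-1
    return k_e**3 / (3.0 * np.pi**2) / Y_e

print(muon_onset_density(0.05))        # ~0.1 fm^-3, cf. n_B ~ 0.11 fm^-3 above
```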
## V Neutron star structure

We calculate the neutron star properties by using the relativistic EOS. The neutron star masses as functions of the central baryon density are displayed in Fig. 6. It is shown that the maximum mass of neutron stars including hyperons is around \\(1.6M_{\\odot}\\), while it is around \\(2.2M_{\\odot}\\) without hyperons. The neutron star mass is determined predominantly by the behavior of the EOS at high densities. The inclusion of hyperons considerably softens the EOS at high densities and therefore results in much smaller neutron star masses. The non-uniform matter, which exists in the crusts of neutron stars, has a negligible contribution to the total neutron star mass, but it plays an important role in the description of the neutron star profile in the crustal region. In Figs. 7 and 8, we show the number densities of the constituents in neutron stars with \\(M=1.6M_{\\odot}\\) and \\(M=1.2M_{\\odot}\\), respectively, as functions of the radius. It is clear that uniform matter containing the equilibrium mixture of nucleons, hyperons, and leptons exists in the internal region of the neutron star, while the non-uniform matter phase occurs only in the surface region. The neutron star with \\(M=1.6M_{\\odot}\\) has much thinner crusts compared to the neutron star with \\(M=1.2M_{\\odot}\\). We show in Fig. 9 the mass-radius relations using the EOS with and without hyperons. It is found that the inclusion of hyperons only influences neutron stars with large masses (\\(M\\gtrsim 1.2M_{\\odot}\\)).
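The mass and mass-radius results of Figs. 6-9 follow from integrating the general-relativistic stellar-structure (Tolman-Oppenheimer-Volkoff, TOV) equations with the EOS as input. The paper does not spell out the integrator, so the sketch below is a generic minimal version in geometrized units, with a hypothetical quadratic toy EOS standing in for the tabulated RMF result.

```python
import numpy as np
from scipy.integrate import solve_ivp

MSUN_KM = 1.4766  # G*M_sun/c^2 in km

# Hypothetical toy EOS standing in for the RMF table:
# P = K * eps^2 in geometrized units (km^-2), so eps(P) = sqrt(P/K).
K = 100.0
eps_of_P = lambda P: np.sqrt(max(P, 0.0) / K)

def tov(r, y):
    """TOV structure equations, G = c = 1, lengths and masses in km:
    dP/dr = -(eps+P)(m + 4 pi r^3 P) / (r (r - 2 m)), dm/dr = 4 pi r^2 eps."""
    P, m = y
    eps = eps_of_P(P)
    dP = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
    return [dP, 4.0 * np.pi * r**2 * eps]

surface = lambda r, y: y[0] - 1e-12    # stop where the pressure vanishes
surface.terminal = True

eps_c = 1.0e-3                          # central energy density, km^-2
r0 = 1e-6
y0 = [K * eps_c**2, 4.0 / 3.0 * np.pi * r0**3 * eps_c]
sol = solve_ivp(tov, (r0, 50.0), y0, events=surface, rtol=1e-8)
R, M = sol.t[-1], sol.y[1][-1] / MSUN_KM
print(f"R = {R:.1f} km, M = {M:.2f} M_sun")   # rough toy star, not the TM1 result
```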
## VI Conclusion

We have constructed the relativistic EOS of neutron star matter in the density range from \\(10^{-7}\\) to \\(1.2\\) fm\\({}^{-3}\\). The non-uniform matter at low densities has been described by the Thomas-Fermi approximation, in which the nucleons form the optimal nuclei and those nuclei build a BCC lattice. The uniform matter at high densities has been studied in the RMF theory. We adopted the RMF model with the TM1 parameter set, which was demonstrated to be successful in describing the properties of nuclear matter and finite nuclei including unstable nuclei [13], and its results were taken as the input in the Thomas-Fermi calculations. Hence we have worked out consistent calculations for uniform matter and non-uniform matter. The phase transition from non-uniform matter to uniform matter is found to take place at \\(n_{B}\\sim 0.06\\) fm\\({}^{-3}\\). At high densities (\\(n_{B}\\gtrsim 0.27\\) fm\\({}^{-3}\\)), it is energetically favorable to convert some nucleons into hyperons via weak interactions. The inclusion of hyperons leads to a considerable softening of the EOS at high densities, since the conversion of nucleons to hyperons can relieve the Fermi pressure of the nucleons.

We note that the contributions from hyperons are sensitive to the hyperon couplings; here we have adopted the hyperon couplings derived from the quark model. Presently, there exist large uncertainties in the hyperon couplings. The hyperon couplings should be constrained by the experimental data on hypernuclei, but the experimental information is insufficient to determine them. From the study of single \\(\\Lambda\\) hypernuclei, the quark-model values of the \\(\\Lambda\\) hyperon couplings usually predict overbinding of the \\(\\Lambda\\) single-particle energies. It seems that the quark-model values of the \\(\\Lambda\\) hyperon couplings lead to rather strong attraction. This might cause an earlier appearance of the \\(\\Lambda\\) hyperon. A detailed investigation of the dependence of the results on the hyperon couplings is deferred to future work. We have employed the present EOS to calculate the neutron star properties. With the appearance of hyperons, the maximum mass of neutron stars turned out to be \\(1.6M_{\\odot}\\). It is found that the inclusion of hyperons results in much smaller neutron star masses due to the softening of the EOS. The core of massive neutron stars is then composed of the equilibrium mixture of nucleons, hyperons, and leptons. The non-uniform matter exists only in the surface region, where it forms the quite thin crusts of neutron stars. The consideration of the non-uniform matter phase has a negligible contribution to the neutron star mass, but it is essential for providing a realistic description of the neutron star structure. The present calculations have been performed within the framework of the relativistic mean field approach, which is incapable of including pions explicitly. It will be possible and important to construct a complete EOS based on a more microscopic theory such as the Dirac-Brueckner-Hartree-Fock approach. In particular, the same approach should be employed in the treatment of both uniform matter and non-uniform matter. It is well known that relativity plays an essential role in describing the nuclear saturation and the nuclear structure; it also brings some distinctive properties to the EOS compared with the non-relativistic framework. Therefore, it is very interesting and important to study astrophysical phenomena such as neutron star properties using the relativistic EOS.

###### Acknowledgements.

The author would like to thank H. Toki, K. Sumiyoshi, and K. Oyamatsu for fruitful discussions and collaborations. This work was supported in part by the National Natural Science Foundation of China under contract No. 10075028 and No. 10135030.

## References

* [1] M. Prakash, I. Bombaci, M. Prakash, P.J. Ellis, J.M. Lattimer, and R. Knorren, Phys. Rep. **280**, 1 (1997).
* [2] H. Heiselberg and M. Hjorth-Jensen, Phys. Rep. **328**, 237 (2000).
* [3] S. Pal, M. Hanauske, I. Zakout, H. Stocker, and W. Greiner, Phys. Rev. C **60**, 015802 (1999).
* [4] G. Baym, H.A. Bethe, and C.J. Pethick, Nucl. Phys. **A175**, 225 (1971).
* [5] J.W. Negele and D. Vautherin, Nucl. Phys. **A207**, 298 (1973).
* [6] C.J. Pethick and D.G. Ravenhall, Annu. Rev. Nucl. Part. Sci. **45**, 429 (1995).
* [7] N.K. Glendenning, F. Weber, and S.A. Moszkowski, Phys. Rev. C **45**, 844 (1992).
* [8] P.K. Sahu, Phys. Rev. C **62**, 045801 (2000).
* [9] K. Schertler, C. Greiner, J. Schaffner-Bielich, and M.H. Thoma, Nucl. Phys. **A677**, 463 (2000).
* [10] B.D. Serot and J.D. Walecka, Adv. Nucl. Phys. **16**, 1 (1986).
* [11] Y.K. Gambhir, P. Ring, and A. Thimet, Ann. Phys. **198**, 132 (1990).
* [12] D. Hirata, K. Sumiyoshi, B.V. Carlson, H. Toki, and I. Tanihata, Nucl. Phys. **A609**, 131 (1996).
* [13] Y. Sugahara and H. Toki, Nucl. Phys. **A579**, 557 (1994).
* [14] D. Hirata, H. Toki, and I. Tanihata, Nucl. Phys. **A589**, 239 (1995).
* [15] Z.Y. Ma, H. Toki, B.Q. Chen, and N. Van Giai, Prog. Theor. Phys. **98**, 917 (1997).
* [16] Z.Y. Ma, N. Van Giai, A. Wandelt, D. Vretenar, and P. Ring, Nucl. Phys. **A686**, 173 (2001).
* [17] K. Oyamatsu, Nucl. Phys. **A561**, 431 (1993).
* [18] K. Sumiyoshi, K. Oyamatsu, and H. Toki, Nucl. Phys. **A595**, 327 (1995).
* [19] H. Shen, H. Toki, K. Oyamatsu, and K. Sumiyoshi, Nucl. Phys. **A637**, 435 (1998).
* [20] H. Shen, H. Toki, K. Oyamatsu, and K. Sumiyoshi, Prog. Theor. Phys. **100**, 1013 (1998).

Figure 1: The neutron distribution (solid curves) and the proton distribution (dashed curves) along the straight lines joining the centers of the nearest nuclei in the BCC lattice at the average baryon density \\(n_{B}=0.0001,\\ 0.001,\\ 0.01,\\ 0.05\\ {\\rm fm}^{-3}\\).

Figure 2: The fraction of species \\(i\\), \\(Y_{i}=n_{i}/n_{B}\\), as a function of the total baryon density \\(n_{B}\\).

Figure 3: The meson mean fields as functions of baryon density.

Figure 4: The pressure \\(P\\) versus energy density \\(\\varepsilon\\) for neutron star matter with the inclusion of hyperons (solid curve) and without hyperons (dashed curve). The EOS considering only the uniform matter phase (dotted curve) is also shown for comparison.

Figure 5: The fractions of the composition in neutron star matter as functions of baryon density.

Figure 6: The neutron star masses as functions of central baryon density.

Figure 7: The number density of the composition in the neutron star with \\(M=1.6M_{\\odot}\\) as a function of radius \\(r\\).

Figure 8: Same as Fig. 7 but for \\(M=1.2M_{\\odot}\\).

Figure 9: The mass-radius relations for neutron stars. The solid curve shows the results with the inclusion of hyperons, while that without hyperons is plotted by the dashed curve for comparison.
We construct the equation of state (EOS) in a wide density range for neutron stars using the relativistic mean field theory. The properties of neutron star matter with both uniform and non-uniform distributions are studied consistently. The inclusion of hyperons considerably softens the EOS at high densities. The Thomas-Fermi approximation is used to describe the non-uniform matter, which is composed of a lattice of heavy nuclei. The phase transition from non-uniform matter to uniform matter occurs around 0.06 fm\\({}^{-3}\\), and free neutrons begin to drip out of the nuclei at about \\(2.4\\times 10^{-4}\\) fm\\({}^{-3}\\). We apply the resulting EOS to investigate neutron star properties such as the maximum mass and the composition of neutron stars. PACS numbers: 26.60.+c, 24.10.Jv, 21.65.+f
## 1 Introduction and Summary

Understanding the infrared sector of Yang-Mills theory still represents a challenge in quantum field theory. The strong coupling of the system and the rich dynamics of its degrees of freedom are well beyond the applicability of many field-theoretic methods. Even without attempting to solve the theory at one fell swoop, it is already difficult to find (and then answer) questions that can be disentangled from the full complexity of the problem. In this work, we study Yang-Mills theory in the framework of renormalization group (RG) flow equations [1] for the effective average action [2], concentrating solely on the running gauge coupling. Whereas perturbation theory describes asymptotic freedom of the coupling in the high-energy limit, it fails to predict anything at low energies except for its own failure, manifested by the Landau pole singularity. Even without unveiling the complete infrared structure of gauge theories (including confinement and a mass gap), an analytic knowledge of the running of the coupling towards lower energies beyond perturbation theory is desirable. Exact RG flow equations represent an appropriate tool for tackling this problem.

**Flow equation for the effective average action.** Being a "coarse-grained" free-energy functional, the effective average action \\(\\Gamma_{k}\\) governs the dynamics of a theory at a momentum scale \\(k\\). It comprises the effects of all quantum fluctuations of the dynamical field variables with momenta larger than \\(k\\), whereas fluctuations with momenta smaller than \\(k\\) have not (yet) been integrated out. Decreasing \\(k\\) corresponds to integrating out more and more momentum shells of the quantum fluctuations. This successive averaging is implemented by a \\(k\\)-dependent infrared cutoff term \\(\\Delta_{k}S\\) which is added to the classical action in the standard Euclidean functional integral. This term gives a momentum-dependent mass square \\(R_{k}(p^{2})\\) to the field modes with momentum \\(p\\) which vanishes for \\(p^{2}\\gg k^{2}\\). Regarding \\(\\Gamma_{k}\\) as a function of \\(k\\), the effective average action runs along an RG trajectory in the space of all action functionals that interpolates between the classical action \\(S=\\Gamma_{k\\to\\infty}\\) and the conventional quantum effective action \\(\\Gamma=\\Gamma_{k\\to 0}\\). The response of \\(\\Gamma_{k}\\) to an infinitesimal variation of the scale \\(k\\) is described by a functional differential equation, the flow equation (exact RG equation). In a symbolic notation,

\\[\\partial_{t}\\Gamma_{k}=\\frac{1}{2}\\operatorname{STr}\\Big{[}\\partial_{t}R_{k}\\left(\\Gamma_{k}^{(2)}+R_{k}\\right)^{-1}\\Big{]},\\quad\\partial_{t}\\equiv k\\frac{d}{dk}, \\tag{1}\\]

where \\(\\Gamma_{k}^{(2)}\\) denotes the second functional derivative of the effective average action with respect to the field variables and corresponds to the inverse exact propagator at the scale \\(k\\).
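The structure of Eq. (1) can be made concrete in a zero-dimensional toy model, where the field is a single variable \\(\\varphi\\), the supertrace reduces to an ordinary number, and a mass-type regulator \\(R_{k}=k^{2}\\) gives \\(\\partial_{t}\\Gamma_{k}(\\varphi)=k^{2}/(\\Gamma_{k}^{\\prime\\prime}(\\varphi)+k^{2})\\). The sketch below integrates this flow on a grid; it is purely illustrative, with hypothetical bare-action parameters, and is unrelated to the gauge-theory truncations discussed next.

```python
import numpy as np

# Zero-dimensional toy model of Eq. (1): Gamma_k(phi) on a grid, with a
# mass-type regulator R_k = k^2, so that (t = ln k)
#     d/dt Gamma_k(phi) = k^2 / (Gamma_k''(phi) + k^2).
phi = np.linspace(-3.0, 3.0, 201)
dphi = phi[1] - phi[0]
gamma = 0.5 * phi**2 + 0.1 * phi**4        # convex bare action at k = 1

n_steps, t_final = 20000, np.log(1e-3)     # flow from k = 1 down to k = 1e-3
dt = t_final / n_steps                     # negative step: k decreases

for step in range(n_steps):
    k2 = np.exp(2.0 * step * dt)           # k^2 at the current scale
    g2 = np.gradient(np.gradient(gamma, dphi), dphi)  # Gamma_k''(phi)
    gamma += dt * k2 / (g2 + k2)           # explicit Euler step of the flow

print("Gamma_{k->0}(0) =", gamma[len(phi) // 2])
```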
**Flow equation in gauge theories.** The use of flow equations in gauge theories, as initiated in [3], [4], [5], is complicated by the fact that it is difficult to reconcile the Wilsonian idea of integrating out momentum shells of quantum fluctuations with gauge invariance. Working with gauge-noninvariant field variables such as gluons and ghosts, a regularization of the theory with a momentum cutoff necessarily breaks gauge invariance. Nevertheless, gauge-invariant flows can, in principle, be constructed by taking care of the constraints imposed by the Ward identities, which are modified by the presence of the cutoff [4], [6], [7], [8]; in practice, resolving these constraints beyond perturbation theory is highly involved; for a review, see [9]. As an alternative, a formulation in terms of gauge-invariant variables such as, for instance, Wilson loops may therefore be desirable; such a formulation has been proposed and worked out in [10]. Related to this, a gauge-invariant regularization has been formulated in [11] by constructing \\(\\operatorname{SU}(N)\\) Yang-Mills theory from a spontaneously broken \\(\\operatorname{SU}(N|N)\\) super-gauge extension; here the fermionic super-partners become massive and act as Pauli-Villars regulator fields without breaking the residual \\(\\operatorname{SU}(N)\\) gauge invariance. As a result, the one-loop \\(\\beta\\) function has been computed without any gauge fixing. In this work, we decide to employ the conventional and technically more feasible formulation in terms of the gluonic gauge field at the expense of only partially resolving the modified Ward identities, resulting in less control over gauge invariance. In this way, we accept a compromise between calculational advantages and the implementation of complete quantum gauge invariance. In particular, we follow the strategy of [12], employing the background-field method. Our solution to the flow equation will be gauge invariant in the background field, but the renormalization group trajectory that connects the classical (bare) action with our quantum solution will not satisfy all requirements of gauge invariance (cf. Sect. 2).

**Truncations.** Flow equations for interacting quantum field theories can be solved only approximately. A consistent and systematic approximation scheme is given by the method of truncations. Herein, the infinite space of all possible actions, spanned by the field operators compatible with the symmetries, is truncated to a subset of operators; the flow equation for the complete effective action can then be boiled down to the flow equations of the coefficients of these operators (generalized couplings). The renormalization trajectory in the space of all actions is thereby projected onto the hypersurface spanned by all operators of the truncation. For a selected truncation to be able to describe the physics of the system, its operators have to cover the dynamics of the relevant degrees of freedom of the system under consideration. Since the relevant degrees of freedom in strongly coupled quantum field theories such as Yang-Mills theories may change under the renormalization flow, a careful and deliberate choice of the truncation is halfway to the solution of the theory. In view of the many proposals concerning the "true" degrees of freedom in the infrared sector of Yang-Mills theory, their systematic study within a flow equation approach would be desirable. Along this direction, interesting and promising results have been obtained in [13] and [14], where the choices of the truncation have been based on the monopole picture of infrared Yang-Mills theory. In the present work, we follow a different strategy: we stick to the "gluonic language" and maintain the gauge field as the basic variable. This avoids complications inherent in the change of quantum variables, which has to be performed with great care (see, e.g., [15] and [16]).
But in order to account for the fact that the "true" infrared degrees of freedom may have a complicated gluonic description, we include infinitely many gluonic invariants in our truncation; to be explicit, we consider a truncation in which the gauge-invariant part of the effective action is an arbitrary function \\(W_{k}\\) of the square of the field strength \\(F\\),

\\[\\Gamma^{\\rm inv}_{k}[A]=\\int W_{k}(\\theta),\\quad\\theta:=\\frac{1}{4}F^{a}_{\\mu\\nu}F^{a}_{\\mu\\nu}, \\tag{2}\\]

and the running of the coupling will be extracted from the flow of the linear \\(F^{a}_{\\mu\\nu}F^{a}_{\\mu\\nu}\\) term in \\(W_{k}\\), as is standard in continuum quantum Yang-Mills theory. At weak coupling, it may be sufficient to approximate \\(W_{k}(\\theta)\\) by a finite series, i.e., a polynomial in \\(\\theta\\), which is justifiable by simple power counting (higher operators are suppressed by powers of the ultraviolet cutoff). But at strong coupling, those higher operators can acquire large anomalous dimensions that completely obstruct a naive power-counting analysis. In fact, our results show that the flow of the complete function \\(W_{k}\\) contributes to the running gauge coupling, and that the flow of higher-order operators must not be neglected. Beyond the approximations involved (i) in choosing Eq. (2) as our truncation (and neglecting other invariants) and (ii) in resolving the modified Ward identity only partially, we make a third approximation (iii) by neglecting any nontrivial running in the ghost and gauge-fixing sectors.

**Regulators.** For an explicit evaluation of the flow equation, a cutoff function (or regulator) \\(R_{k}\\) has to be specified. This cutoff function is to some extent arbitrary (see App. D). In the denominator of the flow equation (1), it acts as an infrared cutoff for modes with momenta smaller than \\(k\\); its derivative \\(\\partial_{t}R_{k}\\) in the numerator is peaked \\(\\delta\\)-like around \\(k\\) and thus implements the Wilsonian idea of integrating successively over momentum shells. Different choices of \\(R_{k}\\) correspond to different RG trajectories in the space of all action functionals. But by construction, the complete quantum solution \\(\\Gamma=\\Gamma_{k\\to 0}\\), being the endpoint of all trajectories, is independent of \\(R_{k}\\). This \\(R_{k}\\) independence of the solution, of course, holds only for exact solutions to the flow equation. Approximations such as the choice of a truncation generically introduce a cutoff dependence of the final result. On the one hand, this is clearly a disadvantage of the method; one is led to study one and the same problem with many different cutoffs in order to extract cutoff-independent information. On the other hand, after having accepted that exact solutions might never be at our disposal for most quantum field theories, we can exploit the cutoff dependence in order to improve our approximations. In order to illustrate this point, let us recall that truncations cut a hypersurface out of the space of all action functionals. A truncation will be acceptable if the complete quantum effective action lies within or close to this hypersurface. But this is not a sufficient criterion: imagine a certain exact RG trajectory (corresponding to a certain cutoff function) that begins and ends within this hypersurface, but in between develops a large distance to the hypersurface. In the exact theory, this flow may largely be driven by operators which do not belong to the truncation spanning the hypersurface.
Working only within the truncation, the contribution of these other operators cannot be accounted for, and the so-found solution to the flow will generally be different from the true solution. Instead, the optimal strategy would be to choose those exact RG trajectories (and their corresponding cutoff functions) that lie completely in (or close to) the hypersurface. But strictly speaking, this ideal case is not possible, since the cutoff function generally couples the flow to all operators, so that an RG trajectory will never lie only within a restricted hypersurface. A more precise criterion would be that the truncated RG trajectory within the hypersurface should be equal to (or close to) the exact RG trajectory after projecting the latter onto the hypersurface. Then, the flow towards the quantum solution is driven mainly by the operators contained in the truncation, and the final result will represent a good approximation to the exact one. However, we are currently not aware of any method that fully formalizes these ideas. Up to now, the properties of the flow that depend on the cutoff function can only be investigated within a given truncation. However, a systematic study of cutoff functions has recently been put forward mainly within derivative-expansion truncations in scalar and fermionic theories, and \"optimized\" cutoff functions have been proposed [17]. The optimization criterion focuses on improving the convergence of approximate solutions to flow equations; in fact, for scalar O(\\(N\\)) symmetric theories, it leads to better results for the critical exponents [18]. **Spectrally adjusted cutoff.** The class of cutoff functions employed in this work is also considered to be improved in the sense mentioned above. In this case, the improvement does not refer to the precise shape of the cutoff function, but rather to the choice of its argument. Here, we will use not just the spectrum of the Laplace operator (which would be the gauge-covariant generalization of the momentum squared), but the full second functional derivative of the effective average action \\(\\Gamma_{k}^{(2)}\\) evaluated at the background field. The argument of the cutoff function can be understood as a parameter which controls the order and size of the momentum shell that is integrated out upon lowering the scale from \\(k\\) to \\(k-\\Delta k\\). It appears natural that a truncated flow can be controlled better if each momentum shell covers an equal part of the spectrum of quantum fluctuations. The spectrum itself is not fixed, but \\(k\\) dependent; lower modes get dressed by integrating out higher modes. In order to adapt the cutoff function to this spectral flow, we insert the full \\(\\Gamma_{k}^{(2)}\\) into its argument, and so obtain a \"spectrally adjusted\" cutoff. This has two technical consequences: first, as the flow equation is evaluated at the background field in our truncation, the right-hand side can be transformed into a propertime representation; here, we have powerful tools at our disposal that allow us to keep track of the full dependence of the flow equation on the field strength squared. Secondly, the degree of nonlinearity of the flow equation strongly increases, inhibiting its straightforward analytical or numerical computation even within simple truncations. 
We solve this technical problem by first expanding the flow for the gauge coupling in an asymptotic series, and then reconstructing an integral representation for this series by analyzing the leading (and subleading) asymptotic growth of the series coefficients. Whereas most parts of our work are formulated in \\(d>2\\) dimensions and for the gauge group SU(\\(N\\)), this final analysis concentrates on the most interesting cases of \\(d=4\\) and \\(N=2\\) or \\(N=3\\).

**Results.** As a result, we find a representation of the \\(\\beta\\) function of Yang-Mills theory. For weak coupling, we rediscover accurate perturbative behavior. As the scale \\(k\\) approaches the infrared, the coupling grows and finally tends to an infrared stable fixed point, \\(\\alpha_{\\rm s}\\to\\alpha_{*}\\). Our quantitative results are

\\[\\alpha_{*} \\simeq 11.3\\quad\\mbox{for SU(2)},\\]
\\[\\alpha_{*} \\simeq 7.7\\pm 2\\quad\\mbox{for SU(3)}. \\tag{3}\\]

The uncertainty in the SU(3) case arises from an unresolved color structure in our calculation (cf. App. E).

Figure 1: Running coupling \\(\\alpha_{\\rm s}\\) versus momentum scale \\(k\\) in GeV for gauge group SU(2), using the initial value \\(\\alpha_{\\rm s}(M_{Z})\\simeq 0.117\\). The solid line represents the result of our calculation in comparison with one-loop perturbation theory (dashed line).

The complete flow of the running coupling is depicted in Fig. 1 for pure SU(2) Yang-Mills theory in comparison with perturbation theory. For illustrative purposes, we use \\(\\alpha_{\\rm s}(M_{Z})\\simeq 0.117\\) as the initial value (\\(M_{Z}\\simeq 91.2\\) GeV). Sizeable deviations from perturbation theory occur for \\(k\\lesssim 1\\) GeV, and the fixed-point plateau is reached for \\(k={\\cal O}(10\\,{\\rm MeV})\\). We shall argue below that a larger truncation as well as the inclusion of dynamical quarks are expected to decrease the value of \\(\\alpha_{*}\\).
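For comparison with the dashed curve of Fig. 1, the one-loop running is elementary to reproduce: for pure SU(\\(N\\)) Yang-Mills theory, \\(\\partial_{t}\\alpha_{\\rm s}=-\\frac{11N}{6\\pi}\\alpha_{\\rm s}^{2}\\), so that \\(\\alpha_{\\rm s}(k)=\\alpha_{\\rm s}(M_{Z})/[1+\\frac{11N}{6\\pi}\\alpha_{\\rm s}(M_{Z})\\ln(k/M_{Z})]\\), which diverges at an infrared Landau pole. The sketch below tabulates this curve; the fixed-point curve itself requires the full \\(\\beta\\) function of Sect. 4 and is not reproduced here.

```python
import math

def alpha_one_loop(k, N=2, alpha_mz=0.117, mz=91.2):
    """One-loop running coupling of pure SU(N) Yang-Mills theory,
    matched to alpha_s(M_Z) = 0.117; k and mz in GeV. The formula
    breaks down at the infrared Landau pole (~60 MeV for SU(2))."""
    b0 = 11.0 * N / (6.0 * math.pi)
    denom = 1.0 + b0 * alpha_mz * math.log(k / mz)
    return alpha_mz / denom if denom > 0.0 else float("nan")

for k in (91.2, 10.0, 1.0, 0.1):
    print(f"k = {k:6.1f} GeV   alpha_s = {alpha_one_loop(k):.3f}")
```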
The paper is organized as follows: Sect. 2 briefly recalls the framework of flow equations in gauge theories with the background-field method and describes our basic approximations. In Sect. 3, we boil down the flow equation to the form required for our truncation. Sect. 4 is devoted to extracting the RG flow of the running gauge coupling, which is the main result of the present work. The role of the spectrally adjusted cutoff is illustrated in Sect. 5. Sect. 6 contains our conclusions and a discussion of our results in the light of related literature.

## 2 Flow equation for Yang-Mills theory

We begin with a brief outline of the flow equation and the background-field formalism as they are employed in this work. We focus on direct applicability and the required approximations and leave aside more formal (though important) aspects, as they are presented in [12] and [19]. Let us therefore start with a more explicit representation of the flow equation for the effective average action,

\\[\\partial_{t}\\Gamma_{k}[A,\\bar{A}]=\\frac{1}{2}\\,{\\rm STr}\\,\\Biggl\\{\\partial_{t}R_{k}(\\Gamma_{k}^{(2)}[\\bar{A},\\bar{A}])\\,\\left[\\Gamma_{k}^{(2)}[A,\\bar{A}]+R_{k}(\\Gamma_{k}^{(2)}[\\bar{A},\\bar{A}])\\right]^{-1}\\Biggr\\}, \\tag{4}\\]

where we denote the so-called classical gauge field by \\(A_{\\mu}^{a}\\), which is the usual field variable of the quantum effective action (conjugate to the source). We also introduce a background field \\(\\bar{A}_{\\mu}^{a}\\), and have already inserted \\(\\Gamma_{k}^{(2)}\\) evaluated at the background field into the cutoff function.1 The symbol STr implies tracing over all internal indices and provides for a minus sign in the ghost sector. We aim at solving Eq. (4) using the following truncation:

\\[\\Gamma_{k}[A,\\bar{A}]=\\Gamma_{k}^{\\rm inv}[A]+\\Gamma_{k}^{\\rm gf}[A,\\bar{A}]+\\Gamma_{k}^{\\rm gh}[A,\\bar{A}]+\\Gamma_{k}^{\\rm gauge}[A,\\bar{A}]. \\tag{5}\\]

Footnote 1: This \\(\\Gamma_{k}^{(2)}\\) is evaluated at the background field because an \\(A\\) dependence would spoil the one-to-one correspondence of the flow equation to the functional integral.

Following [20], the background-field method is introduced to enable us not only to perform a meaningful integration over gauge-fixed quantum fluctuations but to simultaneously arrive at a gauge-invariant effective action. Identifying the quantum fluctuations with \\(A-\\bar{A}\\), the gauge-fixing term

\\[\\Gamma_{k}^{\\rm gf}[A,\\bar{A}]=\\frac{1}{2\\alpha}\\int_{x}\\left[D_{\\mu}[\\bar{A}]\\,(A-\\bar{A})_{\\mu}\\right]^{2} \\tag{6}\\]
Inserting our truncation (5) into the Ward identity, the first three terms drop out and we are left with \\[{\\cal L}_{\\rm W}[\\Gamma^{\\rm gauge}_{k}]=\\Delta[R_{k}]. \\tag{10}\\] This tells us that, on the one hand, \\(\\Gamma^{\\rm inv}_{k}\\) is indeed not constrained by the modified Ward identity and any gauge-invariant ansatz is allowed; on the other hand, a vanishing \\(\\Gamma^{\\rm gauge}_{k}\\) is generally inconsistent with the constraint. It is a nontrivial assumption of this work that \\(\\Gamma^{\\rm gauge}_{k}\\) as driven by the right-hand side of Eq. (10) does not strongly influence the flow of \\(\\Gamma^{\\rm inv}_{k}\\) at \\(A=\\bar{A}\\), so that we can safely neglect it in a first approximation. With regard to our final asymptotic analysis of the running coupling, we can even weaken this assumption a bit: since we reconstruct the \\(\\beta\\) function from its asymptotic series expansion by analyzing its leading growth, neglecting \\(\\Gamma^{\\rm gauge}_{k}\\) corresponds to assuming that \\(\\Gamma^{\\rm gauge}_{k}\\) does not strongly modify this leading growth. In view of the fact that \\(\\Gamma^{\\rm gauge}_{k}\\) startsfrom zero in the ultraviolet and enters the flow only indirectly, this assumption appears rather natural, at least for a large part of the flow. We should remark that the effective average action \\(\\Gamma_{k}\\) has to satisfy another identity that can be derived by considering the response of \\(\\Gamma_{k}\\) on gauge transformations of the background field only. This background-field identity is in close relation to the modified Ward identity [19] (for an explicit proof in QED, see[21]), and also imposes a constraint only on \\(\\Gamma_{k}^{\\rm gauge}\\) similar to Eq. (10). As has been shown in [19], this identity does not cause further fine-tuning problems which would add to those that are posed by the modified Ward identity. In summary, solving the flow equation (4) with the truncation (5) will result in an action functional \\(\\Gamma_{k}^{\\rm inv}[A=\\bar{A}]\\) which is invariant under the background-field transformation. By neglecting \\(\\Gamma_{k}^{\\rm gauge}\\), this invariance is not identical to full quantum gauge invariance even at \\(A=\\bar{A}\\), since the flow is not completely compatible with the modified Ward identities. This work is based on the assumption that these violations of quantum gauge invariance have little effect on the final result. ## 3 Evaluation of the truncated flow We shall now solve the flow equation (4) within the truncation (5) (neglecting \\(\\Gamma_{k}^{\\rm gauge}\\)) and with \\(\\Gamma_{k}^{\\rm inv}\\) as given in Eq. (2), \\[\\Gamma_{k}^{\\rm inv}[A]=\\int_{x}W_{k}(\\theta),\\quad W_{k}(\\theta)=\\sum_{i=1}^{ \\infty}\\frac{W_{i}}{i!}\\,\\theta^{i}, \\tag{11}\\] where \\(\\theta:=\\frac{1}{4}F_{\\mu\ u}^{a}F_{\\mu\ u}^{a}\\). An important ingredient of the flow equation is the cutoff function \\(R_{k}\\), which we display as \\[R_{k}(x)=x\\,r(y),\\quad y:=\\frac{x}{Z_{k}k^{2}}, \\tag{12}\\] with \\(r(y)\\) being a dimensionless function of a dimensionless argument. We include wave-function renormalization constants \\(Z_{k}\\) in the argument of \\(r(y)\\) for reasons to be discussed below. Note that \\(Z_{k}\\) as well as \\(R_{k}\\) itself are matrices in field space; different field variables may be accompanied by different \\(Z_{k}\\)'s and \\(R_{k}\\)'s. 
The cutoff function \\(R_{k}\\) has to satisfy the following standard constraints:

\\[\\lim_{x/k^{2}\\to 0}R_{k}(x)>0,\\quad\\lim_{k^{2}/x\\to 0}R_{k}(x)=0,\\quad\\lim_{k\\rightarrow\\Lambda}R_{k}(x)\\rightarrow\\infty, \\tag{13}\\]

which guarantee that \\(R_{k}\\) provides for an infrared regularization, ensure that the regulator is removed in the limit \\(k\\to 0\\), and control the ultraviolet limit, where \\(\\Gamma_{k\\rightarrow\\Lambda}\\) should approach its initial condition \\(S_{\\Lambda}\\) at the initial ultraviolet scale \\(\\Lambda\\). These constraints are met by the representation (12) and translate into constraints for \\(r(y)\\). Since we will identify the argument \\(x\\) with the full \\(\\Gamma_{k}^{(2)}\\) at the background field, the first constraint of (13) must be formulated more strongly,

\\[\\lim_{x/k^{2}\\to 0}R_{k}(x)=Z_{k}\\,k^{2},\\quad r(y\\to 0)\\to\\frac{1}{y}, \\tag{14}\\]

in order to guarantee that the one-loop approximation of the flow equation results in the true one-loop effective action. We shall not specify \\(r(y)\\) any further until we employ an exponential cutoff for the final quantitative computation (see Eq. (D.4)). Within the approximations mentioned above, the flow equation (4) can be written as

\\[\\partial_{t}\\Gamma_{k}[A=\\bar{A},\\bar{A}] = \\frac{1}{2}\\,{\\rm STr}\\,\\frac{\\partial_{t}R_{k}(\\Gamma_{k}^{(2)})}{\\Gamma_{k}^{(2)}+R_{k}(\\Gamma_{k}^{(2)})} \\tag{15}\\]
\\[= \\frac{1}{2}\\,{\\rm STr}\\left[(2-\\eta)\\,h(y)+\\frac{\\partial_{t}\\Gamma_{k}^{(2)}}{\\Gamma_{k}^{(2)}}\\left(g(y)-h(y)\\right)\\right]_{y=\\frac{\\Gamma_{k}^{(2)}}{Z_{k}k^{2}}},\\]

where we abbreviated

\\[h(y):=\\frac{-y\\,r^{\\prime}(y)}{1+r(y)},\\quad g(y):=\\frac{r(y)}{1+r(y)}. \\tag{16}\\]

In Eq. (15), we also defined the anomalous dimension

\\[\\eta:=-\\partial_{t}\\ln Z_{k}=-\\frac{1}{Z_{k}}\\,\\partial_{t}Z_{k}, \\tag{17}\\]

which is matrix valued in field space similarly to \\(Z_{k}\\); different field variables can acquire different anomalous dimensions. We would like to draw attention to the appearance of the term \\(\\sim\\partial_{t}\\Gamma_{k}^{(2)}\\) on the right-hand side of the flow equation. This term arises from writing \\(\\Gamma_{k}^{(2)}\\) into the argument of the cutoff function. It reflects the fact that the cutoff adjusts itself under the flow of the spectrum of \\(\\Gamma_{k}^{(2)}\\).2 Now it is useful to introduce (at least formally) the Laplace transforms \\(\\tilde{h}(s)\\) and \\(\\tilde{g}(s)\\) of the functions \\(h(y)\\) and \\(g(y)\\):

\\[h(y)=\\int\\limits_{0}^{\\infty}ds\\,\\tilde{h}(s)\\,{\\rm e}^{-ys},\\quad g(y)=\\int\\limits_{0}^{\\infty}ds\\,\\tilde{g}(s)\\,{\\rm e}^{-ys}. \\tag{18}\\]

Footnote 2: Although \\(\\Gamma_{k}^{(2)}\\) was also used as the argument of the cutoff in [12], the term \\(\\sim\\partial_{t}\\Gamma_{k}^{(2)}\\) has been neglected in that calculation. The necessity of this term was pointed out to us by D.F. Litim.

These Laplace transforms \\(\\tilde{h}(s)\\) and \\(\\tilde{g}(s)\\) can be viewed as cutoff functions in Laplace space: they should drop off sufficiently fast for large \\(s\\) (small \\(s\\)) in order to regularize the infrared (ultraviolet). For instance, the infrared constraint (14) translates into

\\[h(0)=\\int\\limits_{0}^{\\infty}ds\\,\\tilde{h}(s)=1=\\int\\limits_{0}^{\\infty}ds\\,\\tilde{g}(s)=g(0). \\tag{19}\\]
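The constraint (19) is easy to check numerically for a concrete regulator shape. The sketch below uses the standard exponential shape \\(r(y)=1/({\\rm e}^{y}-1)\\), one common choice satisfying \\(r(y\\to 0)\\to 1/y\\); whether this is exactly the form of Eq. (D.4) is an assumption here. For this shape, \\(g(y)={\\rm e}^{-y}\\), so its Laplace representation (18) is simply \\(\\tilde{g}(s)=\\delta(s-1)\\).

```python
import numpy as np

def r(y):
    """Exponential cutoff shape r(y) = 1/(e^y - 1); r(y->0) -> 1/y,
    satisfying the strengthened infrared condition (14)."""
    return 1.0 / np.expm1(y)

def h(y):
    # h(y) = -y r'(y) / (1 + r(y)); for this r it reduces to y/(e^y - 1)
    return y / np.expm1(y)

def g(y):
    # g(y) = r(y) / (1 + r(y)) = e^{-y}
    return np.exp(-y)

# Check the infrared normalization (19): h(0) = g(0) = 1
y = np.array([1e-8, 0.1, 1.0, 5.0])
print(h(y))   # -> approx [1.0, 0.951, 0.582, 0.034]
print(g(y))   # -> approx [1.0, 0.905, 0.368, 0.007]
```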
Additional useful identities for these functions are discussed in Appendix D. Furthermore, introducing the functions \\(\\widetilde{H}(s)\\) and \\(\\widetilde{G}(s)\\) by

\\[\\frac{d}{ds}\\,\\widetilde{H}(s)=\\tilde{h}(s),\\quad\\widetilde{H}(0)=0,\\]
\\[\\frac{d}{ds}\\,\\widetilde{G}(s)=\\tilde{g}(s),\\quad\\widetilde{G}(0)=0, \\tag{20}\\]

a convenient form of the flow equation can be found, which reads:

\\[\\partial_{t}\\Gamma_{k} = \\frac{1}{2}\\int\\limits_{0}^{\\infty}\\frac{ds}{s}\\big{(}\\widetilde{H}(s)-\\widetilde{G}(s)\\big{)}\\,\\partial_{t}\\,\\mbox{STr}\\,\\exp\\left(-s\\frac{\\Gamma_{k}^{(2)}}{Z_{k}k^{2}}\\right) \\tag{21}\\]
\\[+\\frac{1}{2}\\int\\limits_{0}^{\\infty}ds\\,\\tilde{g}(s)\\,\\mbox{STr}\\,(2-\\eta)\\,\\exp\\left(-s\\frac{\\Gamma_{k}^{(2)}}{Z_{k}k^{2}}\\right).\\]

The great advantage of this form is that the right-hand side of the flow equation has been transformed into a propertime representation.3 The (super-)trace calculation reduces to the computation of a heat-kernel trace, for which powerful techniques are available.

Footnote 3: This representation of the flow equation should not be confused with the so-called propertime RG [22]. The latter represents an RG flow equation that is derived by RG-improving one-loop formulas in propertime representation, and it has been used in a variety of studies [23]. However, a propertime flow is generally _not_ exact, as was proved in [24], [25]: generic propertime flows can neither be mapped onto exact flows in a derivative expansion nor correctly reproduce perturbation theory. By contrast, our flow equation is derived from an exact RG flow equation and corresponds to the _generalized_ propertime flow proposed in [25]. The essential difference from (standard) propertime flows is the inclusion of the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms. In the present work, particularly these terms will be important and will not be neglected. In agreement with [25], our findings therefore suggest that propertime flows may be improved towards exact RG flows by including the \\(\\sim\\partial_{t}\\Gamma_{k}^{(2)}\\) terms systematically.

Within the present truncation (neglecting \\(\\Gamma_{k}^{\\rm gauge}\\)),

\\[\\Gamma_{k}[A,\\bar{A}]=\\int_{x}W_{k}(\\theta)+\\Gamma_{k}^{\\rm gh}[A,\\bar{A}]+\\Gamma_{k}^{\\rm gf}[A,\\bar{A}], \\tag{22}\\]

\\(\\Gamma_{k}^{(2)}\\) still has a complicated structure which inhibits obtaining general and exact results for the heat-kernel trace. Fortunately, a general solution is not necessary; we merely have to project the right-hand side onto the truncation, implying that we need only the dependence of the right-hand side on the invariant \\(\\theta=\\frac{1}{4}F_{\\mu\\nu}^{a}F_{\\mu\\nu}^{a}\\); other invariants occurring in the heat-kernel trace are of no importance in the present truncation. Now the crucial observation is that the heat-kernel dependence on \\(\\theta\\) can be reconstructed by performing the computation for the special field configuration of a covariant constant magnetic field (as explicitly defined in Eq. (A.1)). In addition to this, we can perform the computation at \\(A=\\bar{A}\\). For this field configuration, the flow equation finally depends only on the field parameter \\(\\theta=\\frac{1}{4}F_{\\mu\\nu}^{a}F_{\\mu\\nu}^{a}\\equiv\\frac{1}{2}B^{2}\\), where \\(B\\) denotes the strength of the magnetic field; the latter is pseudo-abelian and points into a single direction in color space, characterized by a color unit vector \\(n^{a}\\).
Extracting this \\(B\\) dependence of the heat kernel allows us to reconstruct the flow of \\(W_{k}(\\frac{1}{2}B^{2})\\). It should be stressed that considering a covariant constant background field is nothing but a technical trick to project onto the truncation; we do not at all assume that such a background represents the vacuum configuration of Yang-Mills theory, as is the case, e.g., in the Savvidy vacuum model [26]. This trick also allows us to decompose the operator \\(\\Gamma_{k}^{(2)}\\) into linearly independent pieces. In particular, the gauge-field fluctuations can be classified into modes with generalized transversal (T) and longitudinal (L) polarization with respect to the magnetic-field direction in spacetime, and into parallel \\(\\parallel\\) and perpendicular \\(\\perp\\) modes with respect to the field direction in color space. Introducing the corresponding projectors \\(P_{\\rm L,T}\\) and \\(P_{\\parallel,\\perp}\\) (explicitly defined in App. A), the operator \\(\\Gamma_{k}^{(2)}\\) can be represented as \\[\\Gamma_{k}^{(2)} = P_{\\rm T}P_{\\perp}\\left[W_{k}^{\\prime}\\,{\\cal D}_{\\rm T}\\right]+P_{\\rm L}P_{\\perp}\\,\\left[\\frac{1}{\\alpha}\\,{\\cal D}_{\\rm T}\\right] \\tag{23}\\] \\[+P_{\\rm T}P_{\\parallel}\\left[W_{k}^{\\prime}\\,(-\\partial^{2})+W_{k}^{\\prime\\prime}\\,{\\sf S}\\right]+P_{\\rm L}P_{\\parallel}\\,\\left[\\frac{1}{\\alpha}\\,(-\\partial^{2})\\right]\\] \\[+P_{\\rm gh}\\left[-D^{2}\\right],\\] where \\(\\Gamma_{k}^{(2)}\\equiv\\Gamma_{k}^{(2)}[A=\\bar{A},\\bar{A}]\\), and we drop the bars from now on. Here we also defined the operators \\[({\\cal D}_{\\rm T})^{ab}_{\\mu\\nu}=(-D^{2}\\delta_{\\mu\\nu}+2{\\rm i}\\bar{g}F_{\\mu\\nu})^{ab},\\quad{\\sf S}_{\\mu\\nu}={\\sf F}_{\\mu\\alpha}{\\sf F}_{\\beta\\nu}\\partial^{\\alpha}\\partial^{\\beta},\\quad{\\sf F}_{\\mu\\nu}=n^{a}F_{\\mu\\nu}^{a}. \\tag{24}\\] The formal symbol \\(P_{\\rm gh}\\) in Eq. (23) projects onto the ghost sector, and \\(W_{k}^{\\prime}\\equiv\\frac{d}{d\\theta}W_{k}(\\theta)\\); for details about this decomposition, see App. A. At this point, we are free to choose different cutoff wave-function renormalizations \\(Z_{k}\\) for each of the linearly independent parts in Eq. (23). If we were solving the flow equation exactly, the final result would be independent of this choice; however, for a truncated flow, a clever choice can seriously improve the approximation. With regard to the form of \\(R_{k}\\) in Eq. (12), it is obvious that the \\(Z_{k}\\)'s control the precise position at which the scale \\(k\\) cuts off the infrared of the momentum spectrum. Since the latter is determined by the Laplace-type operators \\(-\\partial^{2},-D^{2},{\\cal D}_{\\rm T}\\) in Eq. (23), we can cut them off at \\(k^{2}\\) by choosing \\[Z_{{\\rm gh},k}=1,\\quad Z_{{\\rm L},k}=\\frac{1}{\\alpha},\\quad Z_{{\\rm T},k}=W_{k}^{\\prime}(0)\\equiv Z_{{\\rm F},k}, \\tag{25}\\] for ghost, longitudinal, and transversal fluctuations, respectively.4 This choice guarantees that the longitudinal and ghost modes are cut off at the same point, providing for a necessary cancellation. As a side effect, the flow becomes independent of the gauge-fixing parameter \\(\\alpha\\), so that we can implicitly choose Landau gauge \\(\\alpha\\to 0\\), which is known to be a fixed point of the flow [7], [29].
Finally, the transversal cutoff wave-function renormalization is set equal to the gauge-field wave-function renormalization, which can be read off in the weak-field limit, \\(\\Gamma_{k}[A]_{\\rm w.f.}\\simeq W_{k}^{\\prime}(0)\\,\\theta\\equiv\\frac{Z_{{\\rm F},k}}{4}F_{\\mu\\nu}^{a}F_{\\mu\\nu}^{a}\\). Using trace identities found in [12], the heat-kernel trace occurring in Eq. (21) can be further reduced to \\[{\\rm STr}\\,{\\rm e}^{-s\\frac{\\Gamma_{k}^{(2)}}{Z_{k}k^{2}}} = {\\rm Tr}_{x{\\rm L}}{\\rm e}^{-\\frac{s}{k^{2}}\\left(\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}}(-\\partial^{2})+\\frac{W_{k}^{\\prime\\prime}}{Z_{{\\rm F},k}}{\\sf S}\\right)}-d\\,{\\rm Tr}_{x}\\,{\\rm e}^{-\\frac{s}{k^{2}}\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}}(-\\partial^{2})} \\tag{26}\\] \\[+{\\rm Tr}_{x{\\rm cL}}{\\rm e}^{-\\frac{s}{k^{2}}\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}}\\,{\\cal D}_{\\rm T}}-{\\rm Tr}_{x{\\rm c}}\\,{\\rm e}^{-\\frac{s}{k^{2}}\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}}\\,(-D^{2})}-{\\rm Tr}_{x{\\rm c}}\\,{\\rm e}^{-\\frac{s}{k^{2}}\\,(-D^{2})},\\] where the traces can act on spacetime \\((x)\\), color (c), or Lorentz (L) indices. For the trace in Eq. (21) involving \\(\\eta\\) (matrix-valued), all terms in Eq. (26) containing \\(Z_{{\\rm F},k}\\) will acquire an anomalous-dimension contribution which we will also call \\(\\eta\\) for simplicity: \\[\\eta=-\\partial_{t}\\ln Z_{{\\rm F},k}=-\\frac{1}{Z_{{\\rm F},k}}\\,\\partial_{t}Z_{{\\rm F},k}. \\tag{27}\\] The various heat-kernel traces are computed in Appendix B. In order to display the result concisely, let us define the auxiliary functions \\[f_{1}(u) = \\frac{1}{u^{d/2}}\\left(\\frac{(d-1)}{2}\\,\\frac{u}{\\sinh u}+2\\,u\\,\\sinh u\\right),\\] \\[f_{2}(u) = \\frac{1}{2}\\frac{1}{u^{d/2}}\\,\\frac{u}{\\sinh u}, \\tag{28}\\] \\[f_{3}(v_{1},v_{2}) = \\frac{1}{v_{1}^{d/2}}\\,(1-v_{2}).\\] Equipped with these abbreviations, the flow equation can be written as \\[\\partial_{t}W_{k}(\\theta) = \\frac{1}{2(4\\pi)^{d/2}}\\int\\limits_{0}^{\\infty}ds\\left\\{\\tilde{g}(s)\\,\\left[\\sum_{l=1}^{N^{2}-1}\\left(2(2-\\eta)f_{1}\\left(\\frac{s}{k^{2}}\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}}\\bar{B}_{l}\\right)-4f_{2}\\left(\\frac{s}{k^{2}}\\bar{B}_{l}\\right)\\right)\\bar{B}_{l}^{d/2}\\right.\\right.\\] \\[\\left.\\left.-(2-\\eta)\\,f_{3}\\left(\\frac{s}{k^{2}}\\frac{W_{k}^{\\prime}}{Z_{{\\rm F},k}},\\frac{W_{k}^{\\prime}}{W_{k}^{\\prime}+B^{2}W_{k}^{\\prime\\prime}}\\right)\\right]\\right.\\] \\[\\left.+\\frac{1}{2s}\\big{[}\\widetilde{H}(s)-\\widetilde{G}(s)\\big{]}\\,\\partial_{t}\\,\\left[4\\sum_{l=1}^{N^{2}-1}\\,(f_{1}-f_{2})\\,\\bar{B}_{l}^{d/2}-2f_{3}\\right]\\right\\}, \\tag{29}\\] where \\(\\bar{B}_{l}=\\bar{g}|\\nu_{l}|B\\), \\(\\bar{g}\\) denotes the bare coupling, and \\(\\nu_{l}\\) represents the \\(l=1,\\ldots,N^{2}-1\\) eigenvalues of the color matrix \\((n^{a}T^{a})^{bc}\\). The auxiliary functions \\(f_{i}\\) in the last line are understood to have the same arguments as in the first lines. It is convenient to express the flow equation in terms of dimensionless renormalized quantities, \\[g^{2} = k^{d-4}\\,Z_{{\\rm F},k}^{-1}\\,\\bar{g}^{2},\\] \\[\\vartheta = g^{2}\\,k^{-d}\\,Z_{{\\rm F},k}\\,\\theta\\equiv k^{-4}\\,\\bar{g}^{2}\\,\\theta, \\tag{30}\\] \\[w_{k}(\\vartheta) = g^{2}\\,k^{-d}\\,W_{k}(\\theta)\\equiv k^{-4}\\,Z_{{\\rm F},k}^{-1}\\,\\bar{g}^{2}\\,W_{k}(k^{4}\\vartheta/\\bar{g}^{2}),\\] and evaluate the derivative \\(\\partial_{t}\\) from now on at fixed \\(\\vartheta\\) instead of fixed \\(\\theta\\).
As a result, the flow equation (29) turns into \\[\\partial_{t}w_{k}(\\vartheta)\\] \\[\\quad=-(4-\\eta)\\,w_{k}+4\\,\\vartheta\\,\\dot{w}_{k}(\\vartheta)\\] \\[\\quad+\\frac{g^{2}}{2(4\\pi)^{d/2}}\\Bigg{\\{}\\int\\limits_{0}^{\\infty}\\!ds\\,\\tilde{h}(s)\\left[4\\sum_{l=1}^{N^{2}-1}\\Big{(}f_{1}(s\\dot{w}_{k}b_{l})\\!-f_{2}(sb_{l})\\Big{)}b_{l}^{d/2}-2f_{3}\\!\\left(\\!s\\dot{w}_{k},\\frac{\\dot{w}_{k}}{\\dot{w}_{k}\\!+2\\vartheta\\ddot{w}_{k}}\\right)\\!\\right]\\] \\[\\qquad\\qquad\\qquad\\qquad-\\eta\\int\\limits_{0}^{\\infty}ds\\,\\tilde{g}(s)\\left[2\\sum_{l=1}^{N^{2}-1}f_{1}(s\\dot{w}_{k}b_{l})\\,b_{l}^{d/2}-f_{3}\\left(s\\dot{w}_{k},\\frac{\\dot{w}_{k}}{\\dot{w}_{k}+2\\vartheta\\ddot{w}_{k}}\\right)\\right] \\tag{31}\\] \\[\\qquad\\qquad\\qquad\\qquad\\left.-\\frac{2}{d}(\\partial_{t}-4\\vartheta\\partial_{\\vartheta})\\,f_{3}\\left(s\\dot{w}_{k},\\frac{\\dot{w}_{k}}{\\dot{w}_{k}+2\\vartheta\\ddot{w}_{k}}\\right)\\right]\\Bigg{\\}},\\] where \\(\\dot{w}_{k}(\\vartheta)=\\partial_{\\vartheta}w_{k}(\\vartheta)\\), and we abbreviated \\(b_{l}=|\\nu_{l}|\\sqrt{2\\vartheta}\\). Equation (31) represents one of the main results of the present work. Within the chosen truncation, this flow equation leads to the full quantum effective action of Yang-Mills theory upon integration from its initial condition at \\(\\Lambda\\) down to \\(k=0\\). As a first comment, we would like to mention that we rediscover the flow equation of [12] if we perform an expansion for weak magnetic field and if we neglect all terms proportional to \\(\\partial_{t}\\Gamma_{k}^{(2)}\\). In order to isolate the latter from the rest of Eq. (31), the single factor of \\(\\tilde{g}(s)\\) in the second line should be represented as \\(\\tilde{h}(s)-(\\tilde{h}(s)-\\tilde{g}(s))\\), and then all terms proportional to \\((\\tilde{h}(s)-\\tilde{g}(s))\\) should be dropped (cf. Eq. (15)). Obviously, the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms \\(\\sim(\\tilde{h}(s)-\\tilde{g}(s))\\) modify the flow equation extensively.5 They seriously increase the degree of complexity of this partial differential equation, so that neither an analytic nor a numeric evaluation is straightforward. The next section will be devoted to a search for the simplest possible and consistent approximation. Footnote 5: Incidentally, it is easy to show that no admissible cutoff shape function \\(r(y)\\) exists such that \\(h(y)=g(y)\\). Hence, the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms are present for all cutoff shape functions. Finally, we remark that the flow equation contains a seeming divergence: in the limit of small \\(k\\), the \\(s\\) integrand may not be bounded for \\(s\\to\\infty\\), owing to the last term \\(\\sim\\sinh u\\) in the auxiliary function \\(f_{1}\\) given in Eq. (28). However, this divergence is well understood and can be controlled. It arises from the Nielsen-Olesen unstable mode [30] in the operator \\({\\cal D}_{\\rm T}\\), and can be traced back to the fact that the gluon-spin coupling to the constant magnetic field can lower its energy below zero. Because of this mode, the covariant constant magnetic field is known to be unstable, if considered as the quantum vacuum state of Yang-Mills theory. The divergence can be identified as a pole at complex infinity. The \\(s\\) integral can be properly defined by analytic continuation, resulting in a real part as well as an imaginary part.
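The origin of this unboundedness is easy to exhibit numerically. A minimal sketch (Python/NumPy; \\(d=4\\) assumed) implements the auxiliary functions of Eq. (28) and shows how the \\(2u\\sinh u\\) term in \\(f_{1}\\), which stems from the Nielsen-Olesen mode, eventually dominates the exponentially damped \\(u/\\sinh u\\) pieces:

```python
import numpy as np

d = 4  # spacetime dimension assumed here

def f1(u):
    # Eq. (28); the 2*u*sinh(u) piece stems from the Nielsen-Olesen mode
    return ((d - 1) / 2 * u / np.sinh(u) + 2 * u * np.sinh(u)) / u**(d / 2)

def f2(u):
    # Eq. (28)
    return 0.5 * (u / np.sinh(u)) / u**(d / 2)

def f3(v1, v2):
    # Eq. (28)
    return (1 - v2) / v1**(d / 2)

for u in (0.1, 1.0, 5.0, 20.0):
    print(f"u = {u:5.1f}   f1(u) = {f1(u):10.3e}   f2(u) = {f2(u):10.3e}")
# f1 grows like e^u/u^(d/2-1) for large u, so the s integrand of Eq. (31)
# is unbounded at large propertime unless defined by analytic continuation.
```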
The real part is indeed important because it contributes to the \\(\\beta\\) function and the form of the effective action in a perturbative computation (see below). The imaginary part is interpreted as a measure for the instability of the constant-field vacuum. As we have stressed before, the constant-magnetic-field background is just a calculational tool in the present context, and the validity of the flow equation is not based on this background. Therefore, the \\(s\\) integral can be properly defined by analytic continuation around this pole at complex infinity. The resulting real part will be a valid and important contribution to the flow, but the imaginary part is of no relevance here. If we were really interested in a constant-field vacuum, the flow generated by this imaginary part would describe how the instability develops upon integrating out the unstable mode in a Wilsonian sense.

## 4 Running gauge coupling in \\(d=4\\)

In order to find a strategy for solving the flow equation (31) within a first simple approximation, let us take a closer look at the standard procedure employed for ordinary cutoffs without \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms. In such a case, the partial differential equation can be rewritten as an infinite set of coupled ordinary first-order differential equations by expanding the truncation, e.g., \\[w_{k}(\\vartheta)=\\sum_{i=1}^{\\infty}\\frac{w_{i}}{i!}\\,\\vartheta^{i},\\quad w_{1}=1. \\tag{32}\\] Note that, owing to the choice (25) for \\(Z_{k}\\) and the definition (30), \\(w_{1}=1\\) is fixed, so that the generalized coupling \\(w_{1}\\) is traded for the anomalous dimension \\(\\eta\\). As a result, we obtain infinitely many flow equations for the couplings \\(w_{i}\\) which, for an ordinary cutoff, read \\[\\partial_{t}w_{i}\\big{|}_{\\rm ordinary\\ cutoff}=X_{i}(\\eta,w_{2},\\ldots,w_{i+1}),\\quad i=2,3,\\ldots. \\tag{33}\\] Equation (33) is supplemented by an additional equation for \\(\\eta\\). The functions \\(X_{i}\\) are obtained as the \\(i\\)th coefficient of the \\(\\vartheta\\) series expansion of the flow equation's right-hand side. This infinite tower of equations is then approximated by a finite one by setting all \\(w_{i}=0\\) by hand for some \\(i>i_{\\rm trunc}\\), resulting in \\(i_{\\rm trunc}\\) equations for \\(i_{\\rm trunc}\\) variables. The quality of this further truncation can be checked by varying \\(i_{\\rm trunc}\\). This recipe cannot be directly applied to the present case involving the spectrally adjusted cutoff because an expansion of the flow equation (31) will be of the form \\[\\partial_{t}w_{i}=X_{i}(\\eta,w_{2},\\ldots,w_{i+1})+Y_{i}(\\eta,w_{2},\\ldots,w_{i+1};\\partial_{t}w_{2},\\ldots,\\partial_{t}w_{i+1}),\\quad i=2,3,\\ldots. \\tag{34}\\] It is tempting to truncate this tower by setting not only \\(w_{i>i_{\\rm trunc}}=0\\) by hand, but also \\(\\partial_{t}w_{i>i_{\\rm trunc}}=0\\). This is too naive, however, because all \\(\\partial_{t}w_{i}\\), if understood as the left-hand side of Eq. (34), receive nonzero contributions on the right-hand side, even if \\(i>i_{\\rm trunc}\\). Neglecting these right-hand sides would correspond to neglecting some \\(w_{i}\\)'s which are in the truncation \\(i\\leq i_{\\rm trunc}\\). In order to apply the above-mentioned recipe, we have to bring Eq. (34) into the form of Eq. (33), i.e., we have to solve for the \\(\\partial_{t}w_{i}\\)'s. Formally, this is possible by observing that the functions \\(Y_{i}\\), as they are derived from Eq.
(31), are linear in all \\(\\partial_{t}w_{i}\\) and \\(\\eta\\), and the \\(X_{i}\\) are also linear in \\(\\eta\\). Introducing a \"vector\" \\(\\vec{w}_{t}\\) with components \\[\\vec{w}_{t}:=\\Big{\\{}\\begin{array}{l}w_{t\\,1}=-\\eta\\\\ w_{t\\,i}=\\partial_{t}w_{i}\\,\\,\\,{\\rm for}\\,\\,\\,i=2,3,\\dots\\end{array}\\Big{\\}}, \\tag{35}\\] equation (34) can be written as6 Footnote 6: The meaning of the quantities \\(X_{i}\\) and \\(Y_{ij}\\) changes here slightly, because the \\(\\eta\\) and \\(\\partial_{t}w_{i}\\) dependence is pulled out compared to Eq. (34). \\[w_{t\\,i}=X_{i}(w_{2},\\dots,w_{i+1})+Y_{ij}(w_{2},\\dots,w_{i+1})\\,w_{t\\,j}, \\tag{36}\\] or symbolically, \\(\\vec{w}_{t}=\\vec{X}+Y\\cdot\\vec{w}_{t}\\). Provided that the operator \\(1-Y\\) is invertible, the desired solution is formally given by \\[\\vec{w}_{t}=\\frac{1}{1-Y}\\cdot\\vec{X}, \\tag{37}\\] where the right-hand side is a function of \\(w_{2},w_{3},\\dots\\) only. Now, the approximation strategy for the ordinary cutoff can be applied to Eq. (37). Nevertheless, the resulting finite tower of differential equations is substantially different from the ordinary case, even for the smallest \\(i_{\\rm trunc}\\). This is because \\(X_{i}\\) and \\(Y_{ij}\\) are generally nonzero, even for \\(i,j>i_{\\rm trunc}\\), since they depend on the remaining \\(w_{i\\leq i_{\\rm trunc}}\\) (and numbers such as \\(d\\) and \\(N\\)). And since they are infinite dimensional, we find an infinite number of terms on the right-hand side of the flow equations, in contrast to a finite number for ordinary cutoffs. For the remainder of this section, we shall evaluate Eq. (37) in the simplest possible way by neglecting all \\(w_{i}\\)'s with \\(i=2,3,\\dots\\) and retaining only the anomalous dimension \\(\\eta\\), which is related to the \\(\\beta\\) function of Yang-Mills theory via \\[\\beta(g^{2})\\equiv\\partial_{t}g^{2}=(d-4+\\eta)\\,g^{2}, \\tag{38}\\] so that in \\(d=4\\) we simply have \\(\\beta(g^{2})=\\eta\\,g^{2}\\). We would like to stress that the approximation of neglecting all \\(w_{i}\\)'s at this stage is not at all equal to neglecting them right from the beginning. This further truncation is only consistent _after_ we have disentangled the flows of all \\(w_{i}\\)'s by virtue of Eq. (37). For the investigation of the \\(\\eta\\) equation, corresponding to the first component of the vector equation (37), it is useful to scale out the coupling constant, so that \\(\\vec{X}\\) and \\(Y\\) no longer depend on the coupling: \\[\\vec{X}\\to G\\,\\vec{X},\\quad Y\\to G\\,Y,\\quad G:=\\frac{g^{2}}{2(4\\pi)^{d/2}}. \\tag{39}\\] In \\(d=4\\), the convenient coupling \\(G\\) is related to the standard strong coupling constant \\(\\alpha_{\\rm s}\\equiv\\frac{g^{2}}{4\\pi}=8\\pi G\\). Using Eq. (39), we can perform a \"perturbative\" expansion of the \\(\\eta\\) equation: \\[-\\eta \\equiv w_{t\\,1}=\\left(\\frac{1}{1-G\\,Y}\\right)_{1j}\\,G\\,X_{j}=G(1+GY+G^ {2}Y^{2}+\\dots)_{1j}\\,X_{j} \\tag{40}\\] \\[= G\\left(\\sum_{m=0}^{\\infty}G^{m}\\,Y^{m}\\right)_{1j}X_{j}.\\] The explicit representation of \\(Y\\) and \\(\\vec{X}\\) can be found by inserting the expansions developed in Appendix C into Eq. 
(31), and performing the propertime \\(s\\) integration; the latter results in the moments \\(h_{j},g_{j}\\) of the cutoff functions \\(\\tilde{h}(s),\\tilde{g}(s)\\), \\[h_{j}:=\\int\\limits_{0}^{\\infty}ds\\,s^{j}\\,\\tilde{h}(s),\\quad g_{j}:=\\int\\limits_{0}^{\\infty}ds\\,s^{j}\\,\\tilde{g}(s), \\tag{41}\\] which are discussed in Appendix D. In conclusion, we find: \\[X_{i} = -2^{i+1}\\,\\tau_{i}\\,h_{2i-d/2}\\,i!\\left((d-2)\\frac{(2^{2i}-2)}{(2i)!}\\,B_{2i}-\\frac{4}{(2i-1)!}\\right),\\] \\[Y_{ij} = A_{ij}+B_{ij}+C_{ij}, \\tag{42}\\] where \\(B_{2i}\\) denotes the Bernoulli numbers, and the auxiliary matrices \\(A,B,C\\) are given by \\((i,j=1,2,\\dots)\\): \\[A = \\left\\{\\begin{array}{l}A_{i1}=0\\\\ A_{ij}=0\\,\\,\\,{\\rm if}\\,\\,\\,j>i+1\\\\ A_{ij}=\\frac{i!}{(j-1)!}\\left[2^{n}\\tau_{n}(h_{2n-d/2}-g_{2n-d/2})\\left((d-1)\\frac{2^{2n}-2}{(2n)!}B_{2n}-\\frac{4}{(2n-1)!}\\right)\\right]_{n=1+i-j}\\,\\,\\,{\\rm otherwise}\\end{array}\\right.\\] \\[B = \\left\\{\\begin{array}{l}B_{ij}=0\\,\\,\\,{\\rm if}\\,\\,\\,j>1\\\\ B_{i1}=-2^{i}\\tau_{i}h_{2i-d/2}\\,i!\\left((d-1)\\frac{2^{2i}-2}{(2i)!}B_{2i}-\\frac{4}{(2i-1)!}\\right)\\end{array}\\right. \\tag{43}\\] \\[C = \\left\\{\\begin{array}{l}C_{i,i+1}=-\\frac{4}{d}\\,i\\left(h_{-2}-g_{-2}\\right)\\\\ C_{ij}=0\\,\\,\\,{\\rm otherwise}\\end{array}\\right.\\] These explicit representations (42) and (43) can be inserted into Eq. (40), and the anomalous dimension and the \\(\\beta\\) function can be computed straightforwardly to any finite order in perturbation theory within our truncation. As an example, let us compute the two-loop \\(\\beta\\) function in \\(d=4\\) spacetime dimensions for SU(\\(N\\)) gauge theories: \\[\\beta(g^{2})=\\partial_{t}g^{2}=-\\frac{22N}{3}\\frac{g^{4}}{(4\\pi)^{2}}-\\left(\\frac{77N^{2}}{3}-\\frac{127(3N^{2}-2)}{45}\\big{(}h_{-2}-g_{-2}\\big{)}h_{2}\\,\\tau_{2}\\right)\\frac{g^{6}}{(4\\pi)^{4}}+\\dots. \\tag{44}\\] Here we already used the fact that \\(h_{0}=1=g_{0}\\) are independent of the shape of the cutoff function, so that the one-loop coefficient turns out to be universal as it should be and agrees with the standard perturbative result; this should serve as a (rather trivial) check of our computation. Within our truncation, the two-loop coefficient does depend on the cutoff function. In order to compare with [12] where the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms have been neglected, we choose the exponential cutoff defined in Eq. (D.4), implying that \\(g_{-2}=1\\), \\(h_{-2}=2\\zeta(3)\\simeq 2.404\\), and \\(h_{2}=1/6\\) as computed in Appendix D. From Appendix E, we take over that \\(\\tau_{2}^{N=2}=2\\) and \\(\\tau_{2}^{N=3}=9/4\\). Inserting all these numbers and comparing this to the perturbative two-loop result, \\[\\beta_{\\rm pert.}(g^{2})=-\\frac{22N}{3}\\,\\frac{g^{4}}{(4\\pi)^{2}}-\\frac{68N^{2}}{3}\\,\\frac{g^{6}}{(4\\pi)^{4}}+\\dots, \\tag{45}\\] we find a remarkable agreement of 99% for the two-loop coefficient for SU(2), and 95% for SU(3). This should be compared with the 113% obtained for these coefficients when the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms are neglected [12]. The inclusion of these terms appears to represent a serious improvement. However, the picture is not as rosy as this result suggests.
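The quoted percentages follow from elementary arithmetic on Eqs. (44) and (45) once the moments are known. A minimal sketch (Python/SymPy) computes \\(h_{-2}\\), \\(g_{-2}\\), and \\(h_{2}\\) for the exponential cutoff from Eq. (D.3) and then evaluates the ratio of the two-loop coefficients:

```python
import sympy as sp

y = sp.symbols('y', positive=True)
h = y / (sp.exp(y) - 1)          # h(y) for the exponential cutoff, Eq. (D.4)
g = sp.exp(-y)                   # g(y) for the exponential cutoff

# Moments via Eq. (D.3)
h_m2 = sp.integrate(y * h, (y, 0, sp.oo)) / sp.gamma(2)   # -> 2*zeta(3)
g_m2 = sp.integrate(y * g, (y, 0, sp.oo)) / sp.gamma(2)   # -> 1
h_2  = sp.limit(sp.diff(h, y, 2), y, 0)                   # -> 1/6 = B_2

def two_loop_ratio(N, tau2):
    # two-loop coefficient of Eq. (44) divided by that of Eq. (45)
    ours = sp.Rational(77, 3) * N**2 \
           - sp.Rational(127, 45) * (3 * N**2 - 2) * (h_m2 - g_m2) * h_2 * tau2
    pert = sp.Rational(68, 3) * N**2
    return float(ours / pert)

print(two_loop_ratio(2, 2))                   # ~0.99  (SU(2))
print(two_loop_ratio(3, sp.Rational(9, 4)))   # ~0.95  (SU(3))
```

Setting \\(h_{-2}-g_{-2}\\to 0\\) in the script, i.e., discarding the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms, reproduces the ratio \\(77/68\\simeq 1.13\\) of [12].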
The reason is that our two-loop result is cutoff-scheme dependent, and we may easily choose a cutoff with a worse agreement at two loops.7 Only recently has it been explicitly shown how to obtain the correct scheme-independent two-loop \\(\\beta\\) function within the framework of the exact RG [27]; for this, a careful distinction has to be drawn between the running of the coupling with respect to \\(k\\) or the RG scale \\(\\mu\\) (see also [31]). Within our truncation here, we can nevertheless turn the argument around by remarking that the exponential cutoff is obviously well suited for the present truncation in the sense that it minimizes the combined effect of the neglected terms such as \\(w_{2}\\), \\((F_{\\mu\\nu}^{a}\\widetilde{F}_{\\mu\\nu}^{a})^{2}\\), or \\(\\Gamma_{k}^{\\rm gauge}\\), etc. on the two-loop coefficient. This particularly justifies the use of the exponential cutoff for an investigation of the complete sum in Eq. (40) and the strong-coupling domain. Footnote 7: Actually, this point is even more subtle [27]: cutoff-scheme independence of the two-loop \\(\\beta\\) function coefficient holds only for mass-independent regulators. The regulator \\(R_{k}\\) obviously does not belong to this class, so that cutoff-scheme dependence has to be expected. However, since we have no information about the true two-loop coefficient for our \\(k\\)-dependent regulator \\(R_{k}\\), we shall still use the two-loop coefficient for a mass-independent scheme as our benchmark test. Let us summarize what has been achieved so far: in order to extract the flow of the gauge coupling, the flow equation (31) has to be studied near \\(\\vartheta=0\\), where the information about \\(\\eta\\) is encoded. This suggests an expansion of \\(w_{k}(\\vartheta)\\) in powers of \\(\\vartheta\\), leading to completely disentangled flow equations (37) for the generalized couplings \\(\\eta,w_{2},w_{3},\\dots\\). However, since the original flow equation (31) is represented as a parameter integral, its expansion can be asymptotic, which implies that the series expansion in terms of the coupling \\(G\\) in Eq. (40) will be asymptotic as well. This agrees with the general expectation that perturbative expansions of quantum field theories are generically asymptotic. In practice, this means that the coefficients (for later convenience, we shift the index \\(m\\) here) \\[a_{m}:=-(Y^{m-1})_{1j}X_{j},\\quad m=1,2,\\dots \\tag{46}\\] in Eq. (40) grow rapidly, so that any arbitrarily large but finite truncation of the series does not make sense. It turns out that these coefficients grow even more strongly than factorially and alternate in sign for the exponential cutoff (we shall comment on other cutoffs later). This does not mean that any physical meaning is lost, but, loosely speaking, that we have expanded an integrand which we should not have expanded. Yet, there are well-defined mathematical tools for reconstructing the integrand representation out of the diverging sum [32]. In other words, we are looking for a (well-defined) integral representation that upon asymptotic expansion leads to a series that agrees with Eq. (40). As is known also from various physical examples [33], just taking only the leading growth of the coefficients into account leads to a good approximation of the integral representation. Concerning the coefficients \\(a_{m}\\), the leading growth (l.g.)
can be isolated in the term that contains the highest component of \\(\\vec{X}\\), i.e., \\(X_{m}\\), yielding \\[a_{1}^{\\rm l.g.}=-X_{1},\\quad a_{2}^{\\rm l.g.}=-Y_{12}X_{2},\\quad a_{m}^{\\rm l.g.}=-Y_{12}Y_{23}Y_{34}\\dots Y_{m-1,m}X_{m}. \\tag{47}\\] Inserting the representations (43) into Eq. (47) for the exponential cutoff, we find \\[a_{m}^{\\rm l.g.}=4(-2c)^{m-1}\\frac{\\Gamma(m+3(N^{2}-1))\\Gamma(m+1)}{\\Gamma(3N^{2}-2)}\\,\\tau_{m}\\,B_{2m-2}\\left(2\\frac{2^{2m}-2}{(2m)!}B_{2m}-\\frac{4}{\\Gamma(2m)}\\right), \\tag{48}\\] where we abbreviated \\(c=2\\zeta(3)-1\\). Let us first concentrate on SU(2), where \\(\\tau_{m}=2\\) for \\(m=1,2,\\dots\\) (see Appendix E); let us nevertheless retain the \\(N\\) dependence in all other terms in order to facilitate the generalization to SU(3). Actually, Eq. (48) also contains subleading terms. First, we observe that the last term \\(\\sim 1/\\Gamma(2m)\\) is negligible compared to the term \\(\\sim B_{2m}\\) for large \\(m\\). Nevertheless we also retain this subleading term, since the \\(m=1\\) term contributes significantly to the one-loop \\(\\beta\\)-function coefficient which we want to maintain in our approximation. Furthermore using the identity \\[B_{2m-2}=\\frac{(-1)^{m}\\Gamma(2m-1)}{2^{2m-3}\\pi^{2m-2}}\\,\\zeta(2m-2), \\tag{49}\\] it is tempting to use the \\(\\zeta\\)-function representation \\[\\zeta(z)=\\sum_{l=1}^{\\infty}\\frac{1}{l^{z}}, \\tag{50}\\] and retain only the \\(l=1\\) term, since the others are subleading. Whereas this approximation is indeed justified for the Bernoulli number \\(B_{2m}\\) in Eq. (48), we have to retain the full \\(\\zeta\\) function for the \\(B_{2m-2}\\) factor, since here we encounter the \\(\\zeta\\) function at zero argument for \\(m=1\\) where Eq. (50) is no longer valid. In conclusion, we resum the complete coefficient \\(a_{m}^{\\rm l.g.}\\) as displayed in Eq. (48), including the leading and also subleading terms. In the spirit of Borel summation [32], we introduce integral representations for the special functions occurring in Eq. (48), in particular the representation [34] \\[\\Gamma(2m-1)\\,\\zeta(2m-2)=\\frac{1}{1-2^{3-2m}}\\int\\limits_{0}^{\\infty}dt\\,\\frac{{\\rm e}^{t}}{(e^{t}+1)^{2}}\\,t^{2m-2},\\quad m>1/2 \\tag{51}\\] for the \\(\\zeta\\) function in Eq. (49), and an Euler \\(B\\) function representation for a combination involving the last term in Eq. (48): \\[\\frac{\\Gamma(m+3(N^{2}-1))\\Gamma(m+1)}{\\Gamma(2m)} \\equiv \\frac{\\Gamma(m+3(N^{2}-1))}{\\Gamma(m)}\\,\\frac{\\Gamma(m+1)}{\\Gamma(m)}\\,B(m,m) \\tag{52}\\] \\[= m(m+1)\\ldots(m+3N^{2}-4)\\cdot m\\int\\limits_{0}^{1}ds\\,s^{m-1}\\,(1-s)^{m-1}\\] \\[= \\int\\limits_{0}^{1}ds\\left[\\left(\\frac{d}{ds}\\right)^{(3N^{2}-3)}s^{m+(3N^{2}-4)}\\right]\\left(\\frac{d}{ds^{\\prime}}\\,s^{\\prime m}\\right)_{s^{\\prime}=1-s}.\\] For the remaining \\(\\Gamma\\) functions, we use the standard Euler representation. Exploiting these identities, we are able to resum Eq. (40) to this order: \\[\\eta = -G\\left(\\sum_{m=1}^{\\infty}G^{m-1}\\,Y^{m-1}\\right)_{1j}\\,X_{j} \\stackrel{{\\rm l.g.}}{{\\simeq}}\\sum_{m=1}^{\\infty}a_{m}^{\\rm l.g.}\\,G^{m} \\tag{53}\\] \\[=: \\eta_{\\rm a}+\\eta_{\\rm b},\\] where \\(\\eta_{\\rm a}\\) is related to the term \\(\\sim B_{2m}\\) in Eq. (48), whereas \\(\\eta_{\\rm b}\\) is related to the term \\(\\sim 1/\\Gamma(2m)\\).
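The growth of these coefficients can be made explicit by evaluating Eq. (48) numerically. A sketch (Python with SciPy's Bernoulli numbers; SU(2), \\(d=4\\), and \\(\\tau_{m}=2\\) assumed; the value \\(a_{1}=-88/3\\) reproduces the one-loop coefficient, since \\(g^{2}/(4\\pi)^{2}=2G\\)):

```python
from math import gamma, factorial
from scipy.special import bernoulli, zeta

N = 2
tau = 2.0                      # tau_m = 2 for SU(2), cf. Appendix E
c = 2 * zeta(3) - 1            # c = 2*zeta(3) - 1

M = 12
B = bernoulli(2 * M)           # Bernoulli numbers B_0 ... B_{2M}

def a_lg(m):
    # Eq. (48), leading growth of the expansion coefficients
    pref = 4 * (-2 * c)**(m - 1) * gamma(m + 3 * (N**2 - 1)) \
           * gamma(m + 1) / gamma(3 * N**2 - 2)
    brak = 2 * (2**(2 * m) - 2) / factorial(2 * m) * B[2 * m] \
           - 4 / gamma(2 * m)
    return pref * tau * B[2 * m - 2] * brak

for m in range(1, M + 1):
    print(f"m = {m:2d}   a_m^l.g. = {a_lg(m): .3e}")
# the magnitudes grow faster than factorially and the signs alternate:
# the series (40) is asymptotic and calls for Borel-type resummation
```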
The integral representation of \\(\\eta_{\\rm a}\\) reads \\[\\eta_{\\rm a}^{N=2}\\!=\\frac{32NG}{\\Gamma(3N^{2}\\!-2)\\pi^{2}}\\sum_{l=1}^{\\infty}\\!\\int\\limits_{0}^{\\infty}\\!ds_{1}ds_{2}dt\\frac{{\\rm e}^{t-(s_{1}+s_{2})}}{({\\rm e}^{t}+1)^{2}}\\,\\frac{s_{1}s_{2}^{3N^{2}-3}}{l^{2}}\\!\\left[S\\!\\left(\\!\\frac{cGs_{1}s_{2}t^{2}}{2\\pi^{4}l^{2}}\\right)-\\frac{1}{2}\\,S\\!\\left(\\!\\frac{cGs_{1}s_{2}t^{2}}{8\\pi^{4}l^{2}}\\right)\\!\\right]\\!, \\tag{54}\\] where we defined the sum \\[-\\sum_{m=1}^{\\infty}\\frac{(-q)^{m-1}}{1-2^{3-2m}}=1+\\sum_{j=0}^{\\infty}\\frac{q}{2^{j}+\\frac{q}{2^{j}}}=:S(q). \\tag{55}\\] The first sum arises from the asymptotic expansion and is strictly valid only for \\(|q|<1\\); however, the second sum is valid for arbitrary \\(q\\), apart from simple poles at \\(q=-2^{2j}\\), and rapidly converging, so that this equation should be read from right to left. The second part \\(\\eta_{\\rm b}\\) deserves a comment: as it arises from the last term in Eq. (48), \\(\\sim 1/\\Gamma(2m)\\), it originates in the last term \\(\\sinh u\\) of the auxiliary function \\(f_{1}\\) in Eq. (28), which stems from the lower end of the spectrum; in particular, it contains the Nielsen-Olesen unstable mode. This mode is reflected in a simple pole in the following integral representation for \\(\\eta_{\\rm b}\\). This pole gives rise to an imaginary part of the full integrand. As we have stressed above, the imaginary part created by this unstable mode is of no relevance for the flow equation here, so that the proper treatment of the integral results in a principal-value prescription maintaining the important real part. For a numerical realization, this prescription can best be established by rotating the \\(t\\) integral arising from Eq. (51) by an angle of, e.g., \\(\\pi/4\\) from the real axis into the upper complex plane and then taking the real part. In conclusion, we get: \\[\\eta_{\\rm b}^{N=2}\\!=-\\frac{32NG}{\\Gamma(3N^{2}\\!-\\!2)}\\,{\\rm Re}\\int\\limits_{0}^{1}\\!\\!ds\\!\\int\\limits_{0}^{\\infty}\\!\\!\\frac{(1+i)}{\\sqrt{2}}dt\\frac{{\\rm e}^{\\frac{1+i}{\\sqrt{2}}t}}{({\\rm e}^{\\frac{1+i}{\\sqrt{2}}t}\\!+1)^{2}}\\!\\left(\\!\\frac{d}{ds}\\!\\right)^{\\!\\!(3N^{2}-3)}\\!\\!\\frac{d}{ds^{\\prime}}\\,s^{3N^{2}-3}s^{\\prime}\\,S\\!\\left(\\!-\\!{\\rm i}\\frac{cGss^{\\prime}t^{2}}{2\\pi^{2}}\\right)\\!\\!\\Bigg{|}_{s^{\\prime}=1-s}. \\tag{56}\\] Although it seems that we have seriously complicated the problem by trading the single \\(m\\) sum in Eq. (40) for a number of integrals and sums, we stress that all integrals and sums in Eqs. (54) and (56) are finite and well defined. Before we present numerical evaluations of these integrals and sums, let us discuss some features analytically. For small coupling \\(G=g^{2}/[2(4\\pi)^{2}]\\), we can again expand the integrals asymptotically and obtain \\[\\eta_{\\rm a}=\\frac{2}{3}N\\,\\frac{g^{2}}{(4\\pi)^{2}}+\\ldots,\\quad\\eta_{\\rm b}=-8N\\,\\frac{g^{2}}{(4\\pi)^{2}}+\\ldots, \\tag{57}\\] so that we rediscover the one-loop \\(\\beta\\) function (cf. Eq. (44)) as a check. Next, we observe that \\(\\eta_{\\rm a}\\) (containing the true leading-order growth of the \\(a_{m}^{\\rm l.g.}\\)'s) is positive not only for small but for arbitrary \\(G\\).
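Since the subsequent analysis rests on the convergent representation of \\(S(q)\\), it is useful to have it in executable form. A minimal sketch (Python/NumPy) that also checks \\(S(0)=1\\), the value behind the one-loop normalization, and the agreement of the two representations in Eq. (55) at small \\(q\\):

```python
import numpy as np

def S(q, jmax=200):
    # convergent representation of Eq. (55); simple poles at q = -2^(2j)
    j = np.arange(jmax)
    p = 2.0 ** j
    return 1.0 + np.sum(q / (p + q / p))

def S_asymptotic(q, mmax=20):
    # the (divergent) expansion on the left of Eq. (55); small |q| only
    m = np.arange(1, mmax + 1)
    return -np.sum((-q) ** (m - 1) / (1.0 - 2.0 ** (3 - 2 * m)))

print(S(0.0))                          # 1.0
for q in (0.05, 0.2):
    print(q, S(q), S_asymptotic(q))    # both representations agree
print(S(1e4) / np.sqrt(1e4))           # ~2.27: square-root growth at large q
```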
In order to extract large-\\(G\\) information, we note that the sum \\(S(q)\\) can be fitted by \\[S(q)\\simeq c_{1}\\sqrt{\\frac{1}{c_{1}^{2}}-c_{2}\\sqrt{q}+q},\\quad c_{1}\\simeq 2.27,\\quad c_{2}\\simeq 0.7 \\tag{58}\\] within \\(1\\%\\) accuracy, implying that \\(S(q)\\simeq c_{1}\\sqrt{q}\\) for large \\(q\\). In this limit, which corresponds to large \\(G\\), \\(\\eta_{\\rm a}\\) can be evaluated analytically and we find \\[\\eta_{\\rm a}^{N=2}(G\\gg 1)\\simeq\\frac{24Nc_{1}}{\\pi^{4}}\\sqrt{\\frac{c}{2}}\\zeta(3)\\ln 2\\,\\Gamma(5/2)\\frac{\\Gamma(3N^{2}-3/2)}{\\Gamma(3N^{2}-2)}\\,G^{3/2}\\simeq 3.24\\,G^{3/2}. \\tag{59}\\] Without going into details, we note that there exist fits for \\(S(-{\\rm i}q)\\) similar to Eq. (58) involving a square-root behavior for large \\(q\\) (large \\(G\\)). It turns out that the \\(G^{3/2}\\) coefficient vanishes exactly, so that8 Footnote 8: We do not evaluate the precise coefficient of the \\(G^{1/2}\\) term here, since the large-\\(G\\) expansion introduces artificial singularities for the \\(t\\) integration at \\(t\\to 0\\). A more careful treatment reveals that the coefficient is positive. \\[\\eta_{\\rm b}^{N=2}\\sim+G^{1/2}. \\tag{60}\\] Obviously, \\(\\eta_{\\rm b}\\) is subleading for large \\(G\\), which agrees with the fact that it arises from subleading parts in the coefficients \\(a_{m}^{\\rm l.g.}\\). Moreover, \\(\\eta_{\\rm b}\\) becomes positive for large \\(G\\), so that there should be a zero in between. This is already the first sign of an infrared stable fixed point at which \\(\\eta(G_{*})=0\\). For a numerical evaluation of \\(\\eta_{\\rm a}\\) and \\(\\eta_{\\rm b}\\), we employ the representations given in Appendix F. We depict the anomalous dimension \\(\\eta\\) and its parts \\(\\eta_{\\rm a}\\) and \\(\\eta_{\\rm b}\\) in Fig. 2 for the gauge group SU(2). The plots agree with the analytical estimates given above, and we find an infrared stable fixed point at \\[G_{*}^{N=2}\\simeq 0.45\\quad\\Rightarrow\\quad\\alpha_{*}^{N=2}\\simeq 11.3. \\tag{61}\\] By virtue of Eq. (38), the running gauge coupling approaches this fixed point upon lowering the scale \\(k\\) in the infrared, implying scale invariance. The complete flow of the coupling is obtained by integrating \\(\\beta(g^{2})\\equiv\\partial_{t}g^{2}=\\eta\\,g^{2}\\) and has been plotted already in Fig. 1.

Figure 2: Anomalous dimension \\(\\eta=\\beta(g^{2})/g^{2}\\) for SU(2) Yang-Mills theory in \\(d=4\\) versus \\(G=g^{2}/[2(4\\pi)^{2}]\\). The long-dashed line represents the contribution \\(\\eta_{\\rm a}\\), the short-dashed line \\(\\eta_{\\rm b}\\), as defined in Eqs. (54) and (56); the solid line is the sum of both.

For the gauge group SU(3), we do not have the explicit representation of the color factors \\(\\tau_{m}\\) at our disposal. As discussed in Appendix E, we instead study the two extremal cases for the color vector \\(n^{a}\\) pointing into the 3 or 8 direction in color space. Inserting the corresponding quantities \\(\\tau_{i,3}^{N=3}\\) or \\(\\tau_{i,8}^{N=3}\\) as found in Eq. (E.5) into Eq. (48) allows us to display the anomalous dimension \\(\\eta^{N=3}\\) in terms of the formulas deduced for SU(2): \\[\\eta_{3}^{N=3} = \\frac{2}{3}\\,\\eta^{N=2}\\Big{|}_{N\\to 3}+\\frac{1}{3}\\eta^{N=2}\\Big{|}_{N\\to 3,c\\to c/4},\\] \\[\\eta_{8}^{N=3} = \\eta^{N=2}\\Big{|}_{N\\to 3,c\\to 3c/4}. \\tag{62}\\] The notation here indicates that the quantities \\(N\\) and \\(c=2\\zeta(3)-1\\) appearing on the right-hand sides of Eqs.
(54) and (56) will be replaced in the prescribed way. Figure 3 depicts our numerical results, and we identify the position of the infrared fixed point in the interval \\[G_{*}^{N=3}=[G_{*,8},G_{*,3}]\\simeq[0.225,0.385]\\quad\\Rightarrow\\quad\\alpha_{*}^{N=3}\\simeq[5.7,9.7]. \\tag{63}\\] This uncertainty of the precise position of the fixed point is not a shortcoming of the techniques involved (e.g., using the covariant-constant magnetic background), but is due to our ignorance of the exact color factors \\(\\tau_{m}\\).

Figure 3: Anomalous dimension \\(\\eta=\\beta(g^{2})/g^{2}\\) for SU(3) Yang-Mills theory in \\(d=4\\) versus \\(G=g^{2}/[2(4\\pi)^{2}]\\). The black lines correspond to \\(\\eta_{8}^{N=3}\\), the grey lines to \\(\\eta_{3}^{N=3}\\), as defined in Eq. (62). The meaning of the dashed lines is as in Fig. 2.

Let us conclude this section with some remarks on the resummation: first, we should stress that the results for the fixed point are derived from a resummation of leading and subleading parts of the complete asymptotic series (40). We have checked that the sub-subleading parts (not included in the present resummation) alternate in sign, so that their contribution will be regular. However, we were not able to systematize the sub-subleading terms in a way that would allow for a further consistent resummation. Secondly, we performed the computation for the exponential cutoff. It would be desirable to test the stability of the fixed point by using different cutoffs. Unfortunately, we could not find another cutoff shape function \\(r(y)\\) for which the resummation could be done. For many cutoff shape functions used in the literature, the series is also asymptotic and alternating, but the sign changes not from one coefficient to the next but from one group of coefficients to the next, i.e., \\(a_{n_{1}}\\ldots a_{n_{2}}>0\\) and \\(a_{n_{2}+1}\\ldots a_{n_{3}}<0\\) for \\(n_{1}<n_{2}<n_{3}\\), etc. The outstanding role of the exponential cutoff may be attributed to its close relation to the Bernoulli numbers and their properties. Let us finally stress once more that the generalized Borel resummation of Eq. (40) does not represent an uncontrolled extrapolation of finite-order perturbation theory. As we have the exact all-order result at our disposal, the resummation corresponds simply to a mathematically well-defined transformation of a series into an integral.

## 5 The role of the spectrally adjusted cutoff

This short section is devoted to a heuristic discussion of the special role played by the spectrally adjusted cutoff in this work, focusing on the truncation employed. The spectral adjustment of the cutoff function to the spectral flow of \\(\\Gamma_{k}^{(2)}\\) arises from two sources: first, from using \\(\\Gamma_{k}^{(2)}\\) in the argument of \\(R_{k}\\) and, second, from including a carefully chosen wave-function renormalization constant \\(Z_{k}\\) in the cutoff. The latter technique is well known in the problem of calculating anomalous dimensions in scalar and fermionic theories. In order to get a feeling for these two improvements, let us first consider the flow equation, neglecting all \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) and \\(\\partial_{t}Z_{k}\\) terms. For the anomalous dimension, we would then obtain: \\[\\eta=-b_{0}\\frac{g^{2}}{(4\\pi)^{2}}-b_{1}\\,\\frac{g^{2}}{(4\\pi)^{2}}\\,w_{2}, \\tag{64}\\] with some coefficient \\(b_{1}\\), and \\(b_{0}\\) being the correct one-loop result.
Obviously, imposing in addition the truncation \\(w_{2},w_{3},\\cdots=0\\) leaves us with a purely perturbative lowest-order result. This means that all nonperturbative information is contained in the flow of \\(w_{2}\\), which in turn can be reliably computed only by including \\(w_{3}\\), etc. A good estimate therefore probably requires a very large truncation. Even if the precise infrared values of the higher couplings \\(w_{i}\\) may not be very important, their flow exerts a strong influence on the running coupling in this approximation. Let us now take the \\(\\partial_{t}Z_{k}\\) terms into account, but still neglect the \\(\\partial_{t}\\Gamma_{k}^{(2)}\\) terms. In this case, the flow equation results in the following expression for the anomalous dimension: \\[\\eta=-\\frac{b_{0}\\frac{g^{2}}{(4\\pi)^{2}}+b_{1}\\,\\frac{g^{2}}{(4\\pi)^{2}}w_{2}}{1+d_{1}\\,\\frac{g^{2}}{(4\\pi)^{2}}+d_{2}\\,\\frac{g^{2}}{(4\\pi)^{2}}w_{2}}, \\tag{65}\\] with further coefficients \\(d_{1},d_{2}\\), where \\(d_{1}<0\\). This \\(d_{1}\\) in particular makes an important contribution to the two-loop \\(\\beta\\) function coefficient. Contrary to Eq. (64), this equation contains information to all orders in \\(g^{2}\\), even for the strict truncation \\(w_{2},w_{3},\\cdots=0\\). We have to conclude that an adjustment of the cutoff function using a cutoff wave-function renormalization \\(Z_{k}\\) is an effective way to put essential information of the flow of the higher couplings \\(w_{2},w_{3}\\dots\\) into \\(\\eta\\). In other words, the truncated RG trajectory better exploits the degrees of freedom left in the truncation. Let us note in passing that the flow governed by Eq. (65) runs into a kind of Landau pole for \\(\\frac{g^{2}}{(4\\pi)^{2}}\\simeq 1/|d_{1}|\\), even if the flows of \\(w_{2}\\) and higher couplings are included. This \"disease\" has occurred in many flow equation studies in Yang-Mills theory [5], [7], [12], [35]. Now let us turn to the full flow equation, including the terms generated by \\(\\partial_{t}\\Gamma_{k}^{(2)}\\). As explained in the previous section, the right-hand side cannot be displayed in terms of the \\(w_{i}\\)'s in closed form, because infinitely many terms contribute. Even if we set all \\(w_{i}\\)'s to zero, which indeed corresponds to our final approximation, the anomalous dimension reads \\[\\eta=-\\frac{b_{0}\\frac{g^{2}}{(4\\pi)^{2}}+b_{1}\\,\\frac{g^{4}}{(4\\pi)^{4}}+b_{2}\\,\\frac{g^{6}}{(4\\pi)^{6}}+\\dots}{1+d_{1}\\,\\frac{g^{2}}{(4\\pi)^{2}}+d_{2}\\,\\frac{g^{4}}{(4\\pi)^{4}}+d_{3}\\,\\frac{g^{6}}{(4\\pi)^{6}}+\\dots}, \\tag{66}\\] with some real coefficients \\(b_{i}\\) and \\(d_{i}\\) (expanding Eq. (66) in powers of \\(g^{2}\\) results in Eq. (40)). Whereas the nonperturbative dependence of \\(\\eta\\) in Eq. (65) resembles that of a Dyson series and is controlled by one coefficient (\\(d_{1}\\) in that case), Eq. (66) contains nonperturbative information from infinitely many coefficients. The latter arises from the flows \\(\\partial_{t}w_{i}\\) which all contribute to Eq. (66). We conclude that the spectrally adjusted cutoff provides for an efficient reorganization of the flow equation, so that a small truncation can contain information which, for ordinary cutoffs, is distributed over infinitely many couplings of a larger truncation. From this observation, we conjecture that the spectrally adjusted cutoff selects a truncated RG trajectory which is \"optimized\" with respect to the degrees of freedom within a chosen truncation.
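The structural difference between Eqs. (64)-(66) can be visualized with a toy model. In the following sketch (Python; all coefficient values are invented for illustration only and are not those generated by the actual flow), the form (64) stays purely perturbative, the Dyson-like form (65) develops a Landau-pole-type singularity at \\(g^{2}/(4\\pi)^{2}\\simeq 1/|d_{1}|\\), while a rational structure with more coefficients, as in (66), can instead pass through a zero of \\(\\eta\\):

```python
import numpy as np

x = np.linspace(0.0, 3.0, 301)          # x = g^2/(4*pi)^2
b0, b1, d1, d2 = 4.9, 8.0, -0.4, 0.1    # toy numbers, NOT the real coefficients

eta_64 = -b0 * x                                     # Eq. (64)-like, w2 = 0
eta_65 = -(b0 * x) / (1.0 + d1 * x)                  # Eq. (65)-like, w2 = 0
eta_66 = -(b0 * x - b1 * x**2) / (1.0 + d2 * x**2)   # Eq. (66)-like rational form

# eta_65 blows up near x = 1/|d1| = 2.5 (Landau-pole-type behavior),
# while eta_66 crosses zero at x = b0/b1, mimicking an infrared fixed point.
for i in range(0, 301, 75):
    print(f"x = {x[i]:4.2f}   (64): {eta_64[i]:7.2f}"
          f"   (65): {eta_65[i]:8.2f}   (66): {eta_66[i]:7.2f}")
```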
We furthermore conjecture that this trajectory does not flow into regions of theory space where the exact flow would be mainly driven by couplings which are not contained in the truncation, but is always driven by the couplings within the truncation in an optimized way. Whether the truncated RG trajectory flows to the true quantum action or not depends, of course, on the quality of the truncation. We have obviously verified these conjectures only for a truncation (\\(\\eta\\)) within a truncation (\\(\\eta,w_{i}\\)). In fact, in order to exploit the properties of the spectrally adjusted cutoff, we first have to discuss the flow of a larger truncation, then disentangle the flows of the single couplings and finally restrict the calculation to the most relevant part under consideration. Let us finally point out that using the spectrally adjusted cutoff necessarily requires introducing a background field, because \\(\\Gamma_{k}^{(2)}\\) in the cutoff function is not allowed to depend on the actual field variable. A background field generally complicates the formulation, and the technical advantages of the spectrally adjusted cutoff may be compensated for by these further complications. Gauge theories, however, may serve as a natural testing ground for the spectrally adjusted cutoff, since the background-field formalism is advantageous here for further reasons.

## 6 Conclusions

Starting from the exact renormalization group flow equation for the effective average action in SU(\\(N\\)) Yang-Mills theory in \\(d\\) dimensions, we derive within a series of systematic approximations the \\(\\beta\\) function of the gauge coupling. In \\(d=4\\) spacetime dimensions, the resulting flow of the gauge coupling exhibits accurate perturbative behavior and approaches a fixed point in the infrared. The fixed-point results are displayed in Eqs. (61) and (63) for gauge groups SU(2) and SU(3). In view of the approximations involved, a number of improvements are desirable in order to confirm the existence of the infrared fixed point. Above all other possible improvements, such as enlarged truncations and explicit cutoff-shape independence (or insensitivity), a better control of gauge invariance under the flow is necessary. Nevertheless, in view of the flow equation studies performed in the literature for gauge theories so far, it is already remarkable that our approximation to the exact flow equation is integrable down to \\(k\\to 0\\); in many instances, the truncation revealed an explicit insufficiency by developing a Landau-pole type of singularity at some finite \\(k\\) in one or more couplings. The new technique in the present work is the use of a cutoff function that adjusts itself permanently to the actual spectrum under the flow. From a practical viewpoint, this cutoff condenses information, which is usually distributed over the flow equations of infinitely many couplings, into the flow equation of a single coupling (in this case the gauge coupling). We have reason to believe that the information, which is reorganized in this way into a single flow equation, is the relevant information that mainly drives the flow of the corresponding coupling. The fact that we improved the agreement with the perturbative two-loop running from 113% to 99% for SU(2) (using the exponential cutoff shape function) may serve as a hint in this direction.
Even if the fixed point exists and our truncation covers the true mechanism, it is still unlikely that our present results for \\(\\alpha_{*}\\) are also quantitatively correct. We expect a lowering of \\(\\alpha_{*}\\) for larger truncations owing to the following argument: in our calculation, the position of \\(\\alpha_{*}\\) is strongly governed by those modes which are also responsible for asymptotic freedom (contained in \\(\\eta_{\\rm b}\\)). If, in a larger truncation, operators of higher order are generated under the flow, these modes will generically lose influence, and the effects of the remaining spectrum contributing to \\(\\eta_{\\rm a}\\) will be enhanced. This will shift \\(\\alpha_{*}\\) to smaller values. A similar effect occurs upon the inclusion of quark degrees of freedom. The perturbative quark contribution to the \\(\\beta\\) function is already positive. Since no ultraviolet stable fixed point is known in QED, we also do not expect negative quark contributions beyond the perturbative regime. Therefore, we expect not only the presence of the fixed point in full QCD, but also a substantial shift towards lower values of \\(\\alpha_{*}\\). Work in this direction is in progress. A comparison of our result with the literature is in order now, although it is generally difficult, owing to the various nonperturbative definitions of the gauge coupling; different definitions may agree perturbatively, but differ beyond perturbation theory. Our definition is standard in pure continuum gauge theory; moreover, it is equal to the interaction strength of static quarks with the gauge field. Nevertheless, it is not immediately clear to us how it can be related to a definition which is used, for instance, in lattice gauge theory [36]. This may serve as a word of caution. The notion of an infrared fixed point for the gauge coupling has been used extensively in recent years, especially in connection with the phenomenology of power corrections in QCD [37]. Furthermore, such a so-called freezing of the coupling has been discussed in phenomenological low-energy models [38], and deduced from an analysis of the famous \\(R_{e^{+}e^{-}}\\) ratio [39]. There are also various theoretical arguments favoring an infrared fixed point, e.g., even within a perturbative framework for a finite number of flavors [40]. Furthermore, investigating analyticity properties in the time-like and space-like (Euclidean) region, a scheme called analytic perturbation theory has been proposed, yielding an infrared finite coupling [41]; this program has been successfully applied to hadron and lepton-hadron phenomenology [42]. Having the above-mentioned reservations in mind concerning the various different nonperturbative definitions of the coupling, the question of how they are related to each other deserves further study. Moreover, an actual nonperturbative computation of gluon and ghost propagators has been set up in the framework of truncated Schwinger-Dyson equations in Landau gauge [43], revealing an infrared fixed point; these results also receive some support from lattice calculations [44]. Again, the relation to our results is not immediately obvious, since the running coupling as defined in [43] is obtained from the ghost-gluon vertex; furthermore, a nonperturbative treatment of the ghost sector turned out to be crucial in that work, but the four-gluon vertex was neglected.
Nevertheless, there are also similarities: on very general grounds, it was found in the approximation of [43] that the fixed point scales with the number of colors as \\(\\alpha_{*}\\sim 1/N\\). We observe that the central value of our SU(3) result and the SU(2) result fulfil exactly this relation, although this is far from self-evident in our calculation. Let us finally discuss further implications of our result: comparing the full \\(\\beta\\) function with its perturbative counterpart, we observe a quantitative agreement up to \\(\\alpha_{\\rm s}\\sim 1\\). This does not, of course, justify the use of perturbation theory up to \\(\\alpha_{\\rm s}\\sim 1\\) _in general_, but may explain why perturbation theory gives an accurate answer to _some_ questions, even at its validity limit. Concerning the low-energy fixed-point region, one may ask whether our result provides for some signals of confinement and an expected mass gap in gauge theories. In the first place, the answer is no, since a strong coupling does not necessarily imply confinement. It is rather likely that the strong coupling of the gauge fields is necessary to give rise to a change of the effective degrees of freedom. These degrees of freedom (not necessarily included in our truncation), probably with nontrivial topological properties, will then act as \"confiners\". Also the picture of confinement arising in the framework of Landau-gauge Dyson-Schwinger equations [43] cannot be contained in our truncation, since it is based on an infrared enhancement of the ghosts which are treated rather poorly in the present work. Improvements in this direction are also subject to future work. As far as a mass gap is concerned, the infrared fixed point behavior is compatible with such a gap; this is because a mass gap cuts off all quantum fluctuations of lower momentum, so that nothing remains to drive the flow. But the mere existence of an infrared fixed point does not require a mass gap. An indirect signal of a mass gap may be found in the analysis of the different spectral contributions; as we have mentioned above, the perturbative \\(\\beta\\) function is mainly determined by the lowest modes in the spectrum, i.e., the lowest Landau levels in the covariant-constant field analysis. As is familiar from QED calculations, the lowest-Landau-level approximation is always appropriate if the field strength exceeds the mass of the fluctuating particle. This is certainly the case in the perturbative domain where the gluon is massless; hence the picture is complete. When we enter the infrared fixed-point region, the contributions from the remaining part of the spectrum \\(\\eta_{\\rm a}\\) become important. In the Landau-level picture, this is always the case if a mass of the order of the lowest Landau level and beyond is present. The value of the mass then controls the influence of the remaining spectrum. Therefore, the influence of the complete spectrum at the fixed point may be a hint for a hidden new mass scale in low-energy Yang-Mills theory.

## Appendix A Decomposition of \\(\\Gamma_{k}^{(2)}\\)

Here we briefly describe the method developed in [12] for decomposing \\(\\Gamma_{k}^{(2)}\\) into smaller building blocks suitable for further diagonalization. The method is based on the observation that it is sufficient to consider only a covariant constant magnetic background field in order to project the flow equation onto the present truncation.
The method consists of identifying those components of the quantum fluctuations which are appropriately oriented with respect to the background field; the latter is chosen to be the covariant constant magnetic field of Eq. (A.1). The gauge-field fluctuations are decomposed by means of projectors \\(P_{\\rm T}\\) and \\(P_{\\rm L}\\) which obey \\(P_{\\rm T,L}^{2}=P_{\\rm T,L}\\), \\(P_{\\rm T}+P_{\\rm L}=1\\), \\(P_{\\rm T}P_{\\rm L}=0=P_{\\rm L}P_{\\rm T}\\). The subscripts indicate that these projectors reduce to the standard longitudinal and transverse projectors in the limit \\(A_{\\mu}\\to 0\\). Another pair of projectors can be defined which act solely in color space: \\[P_{\\parallel}^{ab}=n^{a}n^{b},\\quad P_{\\perp}^{ab}=\\delta^{ab}-n^{a}n^{b}.\\] (A.4) These four projectors are remarkably efficient in the present case; differentiating our truncation for \\(\\Gamma_{k}[A,\\bar{A}]\\), as given in Eq. (22), twice with respect to \\(A\\) and the ghost fields, then setting \\(A=\\bar{A}\\) and dropping the bar, we can represent the result as \\[\\Gamma_{k}^{(2)}[A,A] = P_{\\rm T}P_{\\perp}\\left[W_{k}^{\\prime}\\,{\\cal D}_{\\rm T}\\right]+P_{\\rm L}P_{\\perp}\\,\\left[\\frac{1}{\\alpha}\\,{\\cal D}_{\\rm T}\\right]\\] (A.5) \\[+P_{\\rm T}P_{\\parallel}\\left[W_{k}^{\\prime}\\left(-\\partial^{2}\\right)+W_{k}^{\\prime\\prime}\\,{\\sf S}\\right]+P_{\\rm L}P_{\\parallel}\\,\\left[\\frac{1}{\\alpha}\\,(-\\partial^{2})\\right]\\] \\[+P_{\\rm gh}\\left[-D^{2}\\right],\\] where we introduced \\[{\\sf S}_{\\mu\\nu}={\\sf F}_{\\mu\\alpha}{\\sf F}_{\\beta\\nu}\\partial^{\\alpha}\\partial^{\\beta},\\] (A.6) and \\(P_{\\rm gh}\\) projects trivially onto the ghost sector. Equation (A.5) is perfectly suited for further manipulation, since the spectra of the operators occurring in the square brackets are known. This decomposition also offers the possibility of conveniently implementing different wave-function renormalization constants for each subcomponent.

## Appendix B Heat-kernel computations

In this appendix, we summarize the results for the heat-kernel traces appearing in Eq. (26). Again, it is sufficient to perform the calculation for a covariant constant background field in order to disentangle the contributions to the flow of different operators. Let us first mention that all color traces occurring in Eq. (26) are of the form \\[{\\rm tr}_{\\rm c}\\,f\\big{(}n^{c}\\,(T^{c})^{ab}\\big{)}=\\sum_{l=1}^{N^{2}-1}f(\\nu_{l}),\\] (B.1) where \\(f\\) is an arbitrary function, and \\(\\nu_{l}\\) denotes the eigenvalues of the matrix \\((n^{c}\\,T^{c})^{ab}\\). We begin with the heat-kernel trace involving the Laplacian in the covariant constant magnetic background; the spectrum is given by \\[{\\rm Spect.}\\,(-D^{2}):\\quad q^{2}+(2n+1)\\bar{B}_{l},\\quad\\bar{B}_{l}=\\bar{g}|\\nu_{l}|B,\\quad n=0,1,\\ldots,\\] (B.2) where \\(q_{\\mu}\\) denotes the \\((d-2)\\) dimensional Fourier momentum in those spacetime directions which are not affected by the magnetic field. The index \\(n\\) labels the Landau levels; their corresponding density of states is \\(\\bar{B}_{l}/(2\\pi)\\). Tracing over the spectrum, we obtain \\[\\frac{1}{\\Omega}\\,{\\rm Tr}_{xc}\\,{\\rm e}^{-\\lambda(-D^{2})}=\\sum_{l=1}^{N^{2}-1}\\frac{2}{2(4\\pi)^{d/2}}\\,\\frac{1}{\\lambda^{d/2}}\\,\\frac{\\lambda\\bar{B}_{l}}{\\sinh\\lambda\\bar{B}_{l}}.\\] (B.3) Here, \\(\\Omega\\) denotes the spacetime volume. With reference to Eq. (26), the parameter \\(\\lambda\\) can be identified with \\(\\lambda=sW_{k}^{\\prime}/(Z_{k}k^{2})\\) or \\(\\lambda=s/k^{2}\\). Next, we turn to the heat-kernel trace involving the operator \\({\\cal D}_{\\rm T}\\) as defined in Eq. (24).
The spectrum is given by \\[{\\rm Spect.}\\,{\\cal D}_{\\rm T}:\\quad q^{2}+(2n+1)\\bar{B}_{l},\\quad{\\rm multiplicity}\\ (d-2),\\] (B.4) \\[\\qquad q^{2}+(2n+3)\\bar{B}_{l},\\quad{\\rm multiplicity}\\ 1,\\] \\[\\qquad q^{2}+(2n-1)\\bar{B}_{l},\\quad{\\rm multiplicity}\\ 1,\\] with \\(q\\) and \\(n\\) as in Eq. (B.2). The last line contains the Nielsen-Olesen unstable mode for \\(n=0\\)[30], which has a tachyonic part for small momenta \\(q^{2}\\). Tracing over the spectrum, we find \\[\\frac{1}{\\Omega}\\,{\\rm Tr}_{x{\\rm cL}}\\,{\\rm e}^{-\\lambda{\\cal D}_{\\rm T}}=\\sum_{l=1}^{N^{2}-1}\\frac{2}{2(4\\pi)^{d/2}}\\,\\frac{1}{\\lambda^{d/2}}\\left(d\\,\\frac{\\lambda\\bar{B}_{l}}{\\sinh\\lambda\\bar{B}_{l}}+4\\lambda\\bar{B}_{l}\\sinh\\lambda\\bar{B}_{l}\\right).\\] (B.5) Finally, we need the following traces \\[\\frac{1}{\\Omega}\\,{\\rm Tr}_{x}\\,{\\rm e}^{-\\lambda(-\\partial^{2})} = \\frac{2}{2(4\\pi)^{d/2}}\\,\\frac{1}{\\lambda^{d/2}},\\] \\[\\frac{1}{\\Omega}\\,{\\rm Tr}_{x{\\rm cL}}\\,{\\rm e}^{-\\lambda(-\\partial^{2})-\\lambda^{\\prime}{\\sf S}} = \\frac{2(d-1)}{2(4\\pi)^{d/2}}\\,\\frac{1}{\\lambda^{d/2}}+\\frac{2}{2(4\\pi)^{d/2}}\\,\\frac{1}{\\lambda^{d/2}}\\,\\frac{\\lambda}{\\lambda+B^{2}\\,\\lambda^{\\prime}},\\] (B.6) where \\({\\sf S}\\) has been defined in Eq. (A.6). Here and in Eq. (B.5), the \\(\\lambda\\) parameters abbreviate \\(\\lambda=sW_{k}^{\\prime}/(Z_{k}k^{2})\\) and \\(\\lambda^{\\prime}=sW_{k}^{\\prime\\prime}/(Z_{k}k^{2})\\). Equations (B.3), (B.5), (B.6) serve as the main input for evaluating the right-hand side of the flow equation in Sect. 3.

## Appendix C Expansions

Here we shall explicitly display the expansions which are required for the analysis of the anomalous dimension in Sect. 4. The series given below are expanded in terms of the renormalized dimensionless field strength squared \\(\\vartheta\\), but they are also related to expansions in terms of the propertime parameter \\(s\\) or the renormalized coupling \\(g^{2}\\). Since we are expanding an integrand and then interchanging integration and expansion, the resulting series can (and will) be asymptotic, involving strongly increasing coefficients. Neglecting all \\(w_{i}\\)'s in the expansion of \\(w_{k}(\\vartheta)=\\vartheta+w_{2}\\frac{\\vartheta^{2}}{2}+w_{3}\\frac{\\vartheta^{3}}{6}\\ldots\\), we obtain for the expansions of the auxiliary functions \\(f_{1,2,3}\\) as defined in Eq. (28) (recall that \\(b_{l}=|\\nu_{l}|\\sqrt{2\\vartheta}\\)): \\[2\\sum_{l=1}^{N^{2}-1}f_{1}(s\\dot{w}_{k}b_{l})\\,b_{l}^{d/2}\\Bigg{|}_{w_{i}\\to 0} = -(d-1)\\sum_{i=0}^{\\infty}\\frac{2^{i}(2^{2i}-2)}{(2i)!}\\,\\tau_{i}\\,B_{2i}\\,s^{2i-d/2}\\,\\vartheta^{i}+4\\sum_{i=0}^{\\infty}\\frac{2^{i}}{(2i-1)!}\\,\\tau_{i}\\,s^{2i-d/2}\\,\\vartheta^{i},\\] \\[2\\sum_{l=1}^{N^{2}-1}f_{2}(sb_{l})\\,b_{l}^{d/2} = -\\sum_{i=0}^{\\infty}\\frac{2^{i}(2^{2i}-2)}{(2i)!}\\,\\tau_{i}\\,B_{2i}\\,s^{2i-d/2}\\,\\vartheta^{i},\\] (C.1) where \\(B_{2i}\\) denotes the Bernoulli numbers, and we define \\(1/(-1)!=0\\). The \\(\\tau_{i}\\) are defined in Appendix E and are related to the group theoretical factors \\(\\sum_{l=1}^{N^{2}-1}\\nu_{l}^{2i}\\) that occur in the expansions given above. Whereas the expansion of \\(f_{3}\\) vanishes in the present approximation, the expansion of its derivatives, as they occur in the last line of Eq.
Whereas the expansion of \\(f_{3}\\) vanishes in the present approximation, the expansion of its derivatives, as they occur in the last line of Eq. (31), must be retained: \\[\\left(\\partial_{t}-4\\vartheta\\partial_{\\vartheta}+d\\right)f_{3}\\left(s\\dot{w}_{k},\\frac{\\dot{w}_{k}}{\\dot{w}_{k}+2\\vartheta\\ddot{w}_{k}}\\right)\\Bigg{|}_{w_{i}\\to 0}=\\sum_{i=1}^{\\infty}\\frac{2i}{s^{d/2}}\\,\\frac{\\vartheta^{i}}{i!}\\,\\partial_{t}w_{i+1}.\\] (C.2)

## Appendix D Cutoff functions

In Eq. (12), we introduce the cutoff function \\(R_{k}(x)=x\\,r\\big{(}\\frac{x}{Z_{k}k^{2}}\\big{)}\\), where \\(r(y)\\) is a dimensionless function of a dimensionless argument. For actual computations, we need the combinations \\(h(y)\\) and \\(g(y)\\) as well as their Laplace transforms \\(\\tilde{h}(s)\\) and \\(\\tilde{g}(s)\\) as defined in Eqs. (16) and (18). Instead of choosing a certain cutoff function by specifying \\(r(y)\\), we can specify a function \\(h(y)\\), or alternatively \\(g(y)\\), which fixes the remaining functions by virtue of Eq. (16); the direct connection between \\(h(y)\\) and \\(g(y)\\) can be formulated as \\[y\\frac{d}{dy}g(y)=\\big{(}g(y)-1\\big{)}\\,h(y).\\] (D.1) A similar reasoning holds for a definition of the cutoff in Laplace space by specifying one of the functions \\(\\tilde{h}(s)\\) or \\(\\tilde{g}(s)\\), for which Eq. (D.1) translates into \\[\\tilde{g}(s)+s\\frac{d}{ds}\\tilde{g}(s)=\\tilde{h}(s)-\\int_{0}^{s}dt\\,\\tilde{h}(t)\\,\\tilde{g}(s-t).\\] (D.2) These identities can be used to define a desired cutoff in its simplest representation without the need to specify the corresponding function \\(r(y)\\) explicitly; the latter might look very complicated. Of course, one has to take care of all the necessary conditions that a cutoff has to satisfy as listed in Eqs. (13) and (14). During the expansion of the propertime integrand in Sect. 4, we encounter the moments of \\(\\tilde{h}(s)\\) and \\(\\tilde{g}(s)\\) as defined in Eq. (41). These moments can also be translated into a momentum space calculation ("\\(y\\) space"): \\[h_{-j}:=\\int\\limits_{0}^{\\infty}\\frac{ds}{s^{j}}\\,\\tilde{h}(s) = \\frac{1}{\\Gamma(j)}\\int\\limits_{0}^{\\infty}dy\\,y^{j-1}\\,h(y),\\quad j>0,\\] \\[h_{j}:=\\int\\limits_{0}^{\\infty}ds\\,s^{j}\\,\\tilde{h}(s) = \\lim_{y\\to 0}(-1)^{j}\\,\\left(\\frac{d}{dy}\\right)^{(j)}h(y),\\quad j\\geq 0\\] (D.3) and equivalently for the \\(g_{j}\\)'s. In this work, the exponential cutoff is technically advantageous; all functions involved have a simple representation: \\[r(y) = \\frac{1}{{\\rm e}^{y}-1},\\quad h(y)=\\frac{y}{{\\rm e}^{y}-1},\\quad g(y)={\\rm e}^{-y},\\] \\[\\tilde{h}(s) = -\\sum_{m=1}^{\\infty}\\delta(s-m)\\,\\frac{d}{ds},\\quad\\tilde{g}(s)=\\delta(s-1),\\] (D.4) where the \\(s\\) derivative acts on the remaining propertime integrand. For the moments required in \\(d=4\\), we find \\[g_{j} = 1,\\] \\[h_{-2} = 2\\,\\zeta(3)\\simeq 2.404\\ldots,\\] (D.5) \\[h_{j} = B_{j},\\quad j=1,2,\\ldots,\\] where \\(B_{j}\\) symbolizes the Bernoulli numbers.
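Both the identity (D.1) and the moment \\(h_{-2}=2\\zeta(3)\\) of (D.5) are quick to verify for the exponential cutoff; the following sketch (an addition, using sympy and mpmath) does so via the \\(y\\)-space representation (D.3):

```python
# Two quick checks on the exponential cutoff (D.4): (i) the defining identity
# (D.1) for h(y) and g(y), and (ii) the moment h_{-2} = 2*zeta(3) of (D.5),
# computed from the y-space formula (D.3) with j = 2.
import sympy as sp
import mpmath as mp

y = sp.symbols('y', positive=True)
h = y / (sp.exp(y) - 1)
g = sp.exp(-y)
# (D.1): y g'(y) = (g(y) - 1) h(y)
print("(D.1) holds:", sp.simplify(y * sp.diff(g, y) - (g - 1) * h) == 0)

# (D.3) with j = 2: h_{-2} = (1/Gamma(2)) * int_0^oo dy y**(2-1) * h(y)
h_m2 = mp.quad(lambda t: t**2 / (mp.exp(t) - 1), [0, mp.inf])
print("h_{-2} =", h_m2, " vs 2*zeta(3) =", 2 * mp.zeta(3))
```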
## Appendix E SU(2) versus SU(3)

Gauge group information enters the flow equation via the color traces. In Appendix B, we evaluated these traces formally by introducing the eigenvalues of \\((n^{c}\\,T^{c})^{ab}\\to\\nu_{l}\\), \\(l=1,\\ldots,N^{2}-1\\). During the expansion of the right-hand side of the flow equation in Sect. 4, we encounter the following factors: \\[\\sum_{l=1}^{N^{2}-1}\\nu_{l}^{2i}=n^{a_{1}}n^{a_{2}}\\ldots n^{a_{2i}}\\,{\\rm tr}_{\\rm c}[T^{(a_{1}}T^{a_{2}}\\ldots T^{a_{2i})}],\\] (E.1) where the parentheses at the color indices denote symmetrization. For general gauge groups, these factors are not independent of the direction of \\(n^{a}\\). Contrary to this, the left-hand side of the flow equation is a function of \\(\\frac{1}{4}F_{\\mu\\nu}^{a}F_{\\mu\\nu}^{a}\\to\\frac{1}{2}B^{2}\\), which is independent of \\(n^{a}\\). Therefore, we do not need the complete factor of Eq. (E.1), but only that part of the symmetric invariant tensor \\({\\rm tr}_{\\rm c}[T^{(a_{1}}\\ldots T^{a_{2i})}]\\) which is proportional to the trivial one: \\[{\\rm tr}_{\\rm c}[T^{(a_{1}}T^{a_{2}}\\ldots T^{a_{2i})}]=\\tau_{i}\\,\\delta_{(a_{1}a_{2}}\\ldots\\delta_{a_{2i-1}a_{2i})}+\\ldots,\\] (E.2) where we omitted further nontrivial symmetric invariant tensors. These omitted terms do not contribute to the flow of \\(W_{k}(\\vartheta)\\), but to the flow of other operators which do not belong to our truncation, e.g., operators involving contractions of the field strength tensor with the \\(d_{abc}\\) symbols. For SU(\\(N\\)) gauge groups, we trivially deduce that \\[\\tau_{0}=N^{2}-1,\\quad\\tau_{1}=N.\\] (E.3) For the gauge group SU(2), all complications are absent, since there are no further symmetric invariant tensors in Eq. (E.2), implying \\[\\tau_{i}^{N=2}=2,\\quad i=1,2,\\ldots\\.\\] (E.4) For the gauge group SU(3), we do not evaluate the \\(\\tau_{i}\\)'s from Eq. (E.2) directly; instead, we exploit the fact that the color unit vector can always be rotated into the Cartan subalgebra. For SU(3), we choose a color vector \\(n^{a}\\) pointing into the 3 or 8 direction in color space, representing the two possible extremal cases: \\[\\tau_{i,3}^{N=3}=2+\\frac{1}{2^{2i-2}},\\quad\\tau_{i,8}^{N=3}=\\frac{3^{i}}{2^{2i-2}}.\\] (E.5) Note that their limiting behavior is rather different: for \\(i\\to\\infty\\), we find \\(\\tau_{i,3}^{N=3}\\to 2\\), but \\(\\tau_{i,8}^{N=3}\\to 0\\). The uncertainty introduced by the artificial \\(n^{a}\\) dependence of the color traces is finally responsible for the uncertainty of our result for the SU(3) infrared fixed point.
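Equation (E.5) can be checked directly by diagonalizing \\(n^{c}(T^{c})^{ab}\\) in the adjoint representation; the following numpy sketch (an addition for illustration, using the standard Gell-Mann matrices) reproduces \\(\\tau_{i,3}^{N=3}\\) and \\(\\tau_{i,8}^{N=3}\\):

```python
# Sketch computing the group factors tau_i of (E.2)/(E.5) for SU(3) from the
# adjoint eigenvalues nu_l of n^c (T^c)^{ab}, for n along the 3 and 8 color
# directions. Standard Gell-Mann matrices, T = lambda/2, tr(T_a T_b) = d_ab/2.
import numpy as np

l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1], l[1][1, 0] = -1j, 1j
l[2][0, 0], l[2][1, 1] = 1, -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2], l[4][2, 0] = -1j, 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2], l[6][2, 1] = -1j, 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)
T = l / 2

# structure constants from [T_a, T_b] = i f_abc T_c
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = T[a] @ T[b] - T[b] @ T[a]
        for c in range(8):
            f[a, b, c] = np.real(-2j * np.trace(comm @ T[c]))

def tau(direction, i):
    # adjoint matrix (n^c T^c)_{ab} = -i f_{c a b} for unit n along 'direction'
    nu = np.linalg.eigvalsh(-1j * f[direction])
    return np.sum(nu ** (2 * i))

for i in range(1, 5):
    print(i, tau(2, i), 2 + 1 / 2**(2 * i - 2),   # n in 3-direction, (E.5)
             tau(7, i), 3**i / 2**(2 * i - 2))    # n in 8-direction, (E.5)
```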
## Appendix F Numerical computations

Since the numerical evaluation of the anomalous dimension \\(\\eta\\) depending on the coupling \\(G=g^{2}/[2(4\\pi)^{2}]\\) as represented in Eqs. (54) and (56) is not straightforward, we mention here some details about the multidimensional integration and summation. We begin with the part \\(\\eta_{\\rm a}\\) in Eq. (54): substituting \\(s_{1}/s_{2}\\to s_{1}\\), the \\(s_{2}\\) integral can be performed, resulting in the modified Bessel function \\(K_{3N^{2}-4}(2\\sqrt{s_{1}})\\). Substituting furthermore \\(t\\to t/l\\), and defining the expressions \\[L(t):=\\sum_{l=1}^{\\infty}\\frac{1}{2}\\frac{1}{1+\\cosh{lt}}\\,\\frac{1}{l},\\quad\\widetilde{K}(s_{1}):=s_{1}^{3N^{2}/2-1}\\,K_{3N^{2}-4}(2\\sqrt{s_{1}}),\\] (F.1) we obtain the representation \\[\\eta_{\\rm a}^{N=2}=\\frac{64NG}{\\Gamma(3N^{2}\\!-\\!2)\\pi^{2}}\\int\\limits_{0}^{\\infty}dt\\,L(t)\\int\\limits_{0}^{\\infty}ds_{1}\\,\\widetilde{K}(s_{1})\\,\\left[S\\left(\\frac{cGs_{1}t^{2}}{2\\pi^{4}}\\right)-\\frac{1}{2}\\,S\\left(\\frac{cGs_{1}t^{2}}{8\\pi^{4}}\\right)\\right].\\] (F.2) Apart from an easily integrable \\(1/\\sqrt{t}\\) singularity induced by \\(L(t)\\), the integrals are smooth and drop off exponentially for large \\(t\\) and \\(s_{1}\\) in the required \\(G\\) range. The sum \\(S(q)\\) defined in Eq. (55) converges quickly and an accuracy with error \\(<1\\%\\) requires only \\({\\cal O}(100)\\) terms or less. The sum \\(L(t)\\) is rather slowly converging for small \\(t\\), but the same accuracy can be obtained by including \\({\\cal O}(10^{5}-10^{6})\\) terms. Depending on the actual value of the arguments \\(t\\) and \\(q\\), we adjust the included number of terms dynamically. For the part \\(\\eta_{\\rm b}\\), different complications occur. Beginning with Eq. (56), we substitute \\(s\\to st\\sqrt{cG/(2\\pi^{2})}\\) (and similarly for \\(s^{\\prime}\\)) and find \\[\\eta_{\\rm b}^{N=2}=-\\frac{32NG}{\\Gamma(3N^{2}\\!-\\!2)}\\,{\\rm Re}\\int\\limits_{0}^{\\infty}\\frac{(1+i)}{\\sqrt{2}}dt\\frac{{\\rm e}^{\\frac{1+i}{\\sqrt{2}}t}}{({\\rm e}^{\\frac{1+i}{\\sqrt{2}}t}+1)^{2}}\\,I_{s}\\left(\\sqrt{\\frac{cG}{2\\pi^{2}}}\\,t\\right),\\] (F.3) where we defined \\[I_{s}\\left(x\\right)=\\frac{1}{x}\\int\\limits_{0}^{x}ds\\left(\\frac{d}{ds}\\right)^{(3N^{2}-3)}\\frac{d}{ds^{\\prime}}\\,s^{3N^{2}-3}s^{\\prime}\\,S(-{\\rm i}ss^{\\prime})\\Big{|}_{s^{\\prime}=x-s}.\\] (F.4) The problem here is that the derivatives cannot be carried out numerically with a sufficient accuracy, but have to be computed analytically within the sum representation for \\(S(-{\\rm i}ss^{\\prime})\\). This implies that each term in the sum then consists of \\(\\sim 20\\) terms for SU(2) and \\(\\sim 50\\) for SU(3). This limits the generalization of the calculation to higher gauge groups for technical reasons. The remaining \\(s\\) and \\(t\\) integrations can easily be performed to a high accuracy. We estimate the total error of the numerical computation to be within a few percent.
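The dynamically truncated summation of \\(L(t)\\) mentioned above can be sketched as follows (an illustrative Python reimplementation; the stopping rule and tolerance are our own choices, not the ones used for the published numbers):

```python
# Sketch of a dynamically truncated evaluation of L(t) from (F.1): the terms
# decay like exp(-l*t)/l, so roughly O(1/t) terms are needed, consistent with
# the O(10^5 - 10^6) terms quoted for small t. 'eps' is an arbitrary tolerance.
import math

def L(t, eps=1e-8):
    total, l = 0.0, 1
    while True:
        term = 0.5 / (1.0 + math.cosh(l * t)) / l
        total += term
        # stop once the exponential tail is negligible relative to the sum
        if l * t > 5.0 and term < eps * total:
            return total, l
        l += 1

for t in (1.0, 0.1, 0.01, 0.001):
    val, terms = L(t)
    print(f"t={t:<7} L(t)={val:.8f}  terms used: {terms}")
```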
## Acknowledgment

The author would like to thank D.F. Litim and J.M. Pawlowski for numerous discussions, for comments on the manuscript, and for communicating their results of Refs. [24] and [25] prior to publication. The author is also grateful to R. Alkofer, W. Dittrich, G.V. Dunne, C.S. Fischer, K. Langfeld, J.I. Latorre, S. Sint and C. Wetterich for helpful information and correspondence, and he wishes to thank W. Dittrich for carefully reading the manuscript. This work is supported by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-1.

## References

* [1] F. J. Wegner and A. Houghton, Phys. Rev. A **8**, 401 (1973); K. G. Wilson and J. B. Kogut, Phys. Rept. **12**, 75 (1974); S. Weinberg, in _C76-07-23.1_ HUTP-76/160, Erice Subnucl. Phys., 1, (1976); J. Polchinski, Nucl. Phys. B **231**, 269 (1984); A. Hasenfratz and P. Hasenfratz, Nucl. Phys. B **270**, 687 (1986) [Helv. Phys. Acta **59**, 833 (1986)].
* [2] C. Wetterich, Phys. Lett. B **301**, 90 (1993); Nucl. Phys. B **352**, 529 (1991).
* [3] M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **421**, 429 (1994) [arXiv:hep-th/9312114].
* [4] U. Ellwanger, Phys. Lett. B **335**, 364 (1994) [arXiv:hep-th/9402077].
* [5] M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994).
* [6] M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **437**, 163 (1995) [arXiv:hep-th/9410138]; Phys. Lett. B **346**, 87 (1995) [arXiv:hep-th/9412195].
* [7] U. Ellwanger, M. Hirsch and A. Weber, Z. Phys. C **69**, 687 (1996) [arXiv:hep-th/9506019]; Eur. Phys. J. C **1**, 563 (1998) [arXiv:hep-ph/9606468].
* [8] M. D'Attanasio and T. R. Morris, Phys. Lett. B **378**, 213 (1996) [arXiv:hep-th/9602156].
* [9] D. F. Litim and J. M. Pawlowski, Proceedings of the workshop on the ERG, Faro, Portugal, Sept. 1998, World Scientific, [arXiv:hep-th/9901063].
* [10] T. R. Morris, Nucl. Phys. B **573**, 97 (2000) [arXiv:hep-th/9910058].
* [11] T. R. Morris, JHEP **0012**, 012 (2000) [arXiv:hep-th/0006064]; S. Arnone, Y. A. Kubyshin, T. R. Morris and J. F. Tighe, Int. J. Mod. Phys. A **16**, 1989 (2001) [arXiv:hep-th/0102054].
* [12] M. Reuter and C. Wetterich, Phys. Rev. D **56**, 7893 (1997) [arXiv:hep-th/9708051].
* [13] U. Ellwanger, Nucl. Phys. B **560**, 587 (1999) [arXiv:hep-th/9906061]; Eur. Phys. J. C **7**, 673 (1999) [arXiv:hep-ph/9807380]; Nucl. Phys. B **531**, 593 (1998) [arXiv:hep-ph/9710326].
* [14] F. Freire, arXiv:hep-th/0110241.
* [15] J. I. Latorre and T. R. Morris, JHEP **0011**, 004 (2000) [arXiv:hep-th/0008123].
* [16] H. Gies and C. Wetterich, Phys. Rev. D **65**, 065001 (2002) [arXiv:hep-th/0107221].
* [17] D. F. Litim, Phys. Lett. B **486**, 92 (2000) [arXiv:hep-th/0005245]; Phys. Rev. D **64**, 105007 (2001) [arXiv:hep-th/0103195].
* [18] D. F. Litim, JHEP **0111**, 059 (2001) [arXiv:hep-th/0111159].
* [19] F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000) [arXiv:hep-th/0009110].
* [20] L. F. Abbott, Nucl. Phys. B **185**, 189 (1981); W. Dittrich and M. Reuter, Lect. Notes Phys. **244**, 1 (1986).
* [21] F. Freire and C. Wetterich, Phys. Lett. B **380**, 337 (1996) [arXiv:hep-th/9601081].
* [22] S. B. Liao, Phys. Rev. D **53**, 2020 (1996) [arXiv:hep-th/9501124].
* [23] S. B. Liao, Phys. Rev. D **56**, 5008 (1997) [arXiv:hep-th/9511046]; R. Floreanini and R. Percacci, Phys. Lett. B **356**, 205 (1995) [arXiv:hep-th/9505172]; B. J. Schaefer and H. J. Pirner, Nucl. Phys. A **660**, 439 (1999) [arXiv:nucl-th/9903003]; A. Bonanno and D. Zappala, Phys. Lett. B **504**, 181 (2001) [arXiv:hep-th/0010095].
* [24] D. F. Litim and J. M. Pawlowski, Phys. Lett. B **516**, 197 (2001) [arXiv:hep-th/0107020]; arXiv:hep-th/0111191.
* [25] D. F. Litim and J. M. Pawlowski, arXiv:hep-th/0202188 (2002).
* [26] G. K. Savvidy, Phys. Lett. B **71**, 133 (1977).
* [27] J. M. Pawlowski, Int. J. Mod. Phys. A **16**, 2105 (2001).
* [28] D. F. Litim and J. M. Pawlowski, arXiv:hep-th/0203005.
* [29] D. F. Litim and J. M. Pawlowski, Phys. Lett. B **435**, 181 (1998) [arXiv:hep-th/9802064]; O. Lauscher and M. Reuter, Phys. Rev. D **65**, 025013 (2002) [arXiv:hep-th/0108040].
* [30] N. K. Nielsen and P. Olesen, Nucl. Phys. B **144**, 376 (1978).
* [31] M. Bonini, G. Marchesini and M. Simionato, Nucl. Phys. B **483**, 475 (1997) [arXiv:hep-th/9604114].
* [32] G. Hardy, "Divergent Series," Oxford Univ. Press (1949); C.M. Bender and S.A. Orszag, "Advanced Mathematical Methods for Scientists and Engineers," McGraw-Hill, New York (1978).
* [33] J. C. Le Guillou and J. Zinn-Justin, "Large Order Behavior Of Perturbation Theory," North-Holland, Amsterdam (1990); G. V. Dunne and T. M. Hall, Phys. Rev. D **60**, 065002 (1999) [arXiv:hep-th/9902064]; G. V. Dunne and C. Schubert, Nucl. Phys. B **564**, 591 (2000) [arXiv:hep-th/9907190].
* [34] I.S. Gradshteyn and I.M. Ryzhik, "Table of Integrals, Series, and Products", 6th ed., Jeffrey, Alan (ed.), Academic Press, San Diego (2000).
* [35] B. Bergerhoff and C. Wetterich, Phys. Rev. D **57**, 1591 (1998) [arXiv:hep-ph/9708425].
* [36] M. Luscher, R. Sommer, P. Weisz and U. Wolff, Nucl. Phys. B **413**, 481 (1994) [arXiv:hep-lat/9309005].
* [37] Y. L. Dokshitzer, A. Lucenti, G. Marchesini and G. P. Salam, JHEP **9805**, 003 (1998) [arXiv:hep-ph/9802381]; Y. L. Dokshitzer, arXiv:hep-ph/9812252.
* [38] E. Eichten _et al._, Phys. Rev. Lett. **34**, 369 (1975) [Erratum-ibid. **36**, 1276 (1975)]; T. Barnes, F. E. Close and S. Monaghan, Nucl. Phys. B **198**, 380 (1982); S.
Godfrey and N. Isgur, Phys. Rev. D **32**, 189 (1985). * [39] A. C. Mattingly and P. M. Stevenson, Phys. Rev. Lett. **69**, 1320 (1992) [arXiv:hep-ph/9207228]. * [40] T. Banks and A. Zaks, Nucl. Phys. B **196**, 189 (1982); G. Grunberg, Phys. Rev. D **65**, 021701 (2002) [arXiv:hep-ph/0009272]; E. Gardi and G. Grunberg, JHEP **9903**, 024 (1999) [arXiv:hep-th/9810192]. * [41] D. V. Shirkov and I. L. Solovtsov, Phys. Rev. Lett. **79**, 1209 (1997) [arXiv:hep-ph/9704333]; Theor. Math. Phys. **120**, 1220 (1999) [Teor. Mat. Fiz. **120**, 482 (1999)] [arXiv:hep-ph/9909305]. * [42] N. G. Stefanis, W. Schroers and H. C. Kim, Eur. Phys. J. C **18**, 137 (2000) [arXiv:hep-ph/0005218]; D. V. Shirkov, Eur. Phys. J. C **22**, 331 (2001) [arXiv:hep-ph/0107282]. * [43] L. von Smekal, R. Alkofer and A. Hauck, Phys. Rev. Lett. **79**, 3591 (1997) [arXiv:hep-ph/9705242]; Annals Phys. **267**, 1 (1998) [Erratum-ibid. **269**, 182 (1998)] [arXiv:hep-ph/9707327]; D. Atkinson and J. C. Bloch, Phys. Rev. D **58**, 094036 (1998) [arXiv:hep-ph/9712459]; D. Zwanziger, arXiv:hep-th/0109224;C. Lerche and L. von Smekal, arXiv:hep-ph/0202194; C. S. Fischer and R. Alkofer, arXiv:hep-ph/0202202. * [44] F. D. Bonnet, P. O. Bowman, D. B. Leinweber, A. G. Williams and J. M. Zanotti, Phys. Rev. D **64**, 034501 (2001) [arXiv:hep-lat/0101013]; K. Langfeld, H. Reinhardt and J. Gattnar, Nucl. Phys. B **621**, 131 (2002) [arXiv:hep-ph/0107141]; K. Langfeld, Talk delivered at NATO workshop on \"Confinement, Topology and other Non-perturbative Aspects of QCD\", Stara Lesna, Slovakia, Jan. 21-27 (2002), to appear in the proceedings.
The effective average action of Yang-Mills theory is analyzed in the framework of exact renormalization group flow equations. Employing the background-field method and using a cutoff that is adjusted to the spectral flow, the running of the gauge coupling is obtained on all scales. In four dimensions and for the gauge groups SU(2) and SU(3), the coupling approaches a fixed point in the infrared. CERN-TH/2002-047 **Running coupling in Yang-Mills theory** **- a flow equation study -** Holger Gies _CERN, Theory Division, CH-1211 Geneva 23, Switzerland_ _E-mail: [email protected]_
# Comparison of XMM-Newton EPIC, Chandra ACIS-S3, ASCA SIS and GIS, and ROSAT PSPC results for G21.5-0.9, 1E0102.2-7219, and MS1054.4-0321

S.L. Snowden\\({}^{1,2}\\)

## 1 Introduction

In all X-ray observatory missions, a great deal of effort goes into the calibration of the scientific instruments, with goals of an absolute accuracy usually better than, and often much better than, 10%, depending on the quantity (e.g., energy scale, relative area, total flux, etc.). The calibrations are usually based on extensive ground calibration data (which are never as complete as one would like) coupled with extensive in-flight observations of celestial objects (which are always problematic as nature has not seen fit to provide ideal calibration sources). In addition, there is the fact that instrument responses can and will vary with time (e.g., the increasing charge transfer inefficiency, CTI, of CCDs). Instrument calibration is therefore a long-term endeavor where occasionally the final step is just to declare victory and move on. As a final editorial comment, the astronomical community owes a great debt of gratitude to those individuals who undertake this very difficult task. But back to the issue at hand: one practical way of examining the reliability of calibrations is to compare the results of various observations of celestial objects using various instruments. This at least provides an estimate of the relative errors between the different instruments. (There is an old joke from the early X-ray missions that nobody has ever measured the spectrum of the Crab as the calibrations of some instruments were fudged to give the accepted results.) While simultaneous observations of the same source by different instruments are ideal, for spectral calibration comparisons independent observations of spectrally constant sources can be substituted. Thus distant supernova remnants and high redshift clusters are the targets of choice. However, there are problematic issues with both types of sources, and nature has not provided convenient "standard candles" for X-ray astronomy. SNRs can have complex line spectra, and those in the Milky Way which are small enough in solid angle to be useful are distant and therefore heavily absorbed. High redshift clusters are not particularly bright so the photon statistics can be quite limited. This paper will present results from three sources which are useful but which all suffer from the limitations noted above. They are: 1) The Galactic SNR G21.5-0.9, which is heavily absorbed but provides a constant power law spectrum visible from \\(\\sim 1-10\\) keV. 2) The SMC SNR 1E0102.2-7219, which suffers relatively little absorption but has a soft, very complicated, and line-rich spectrum. 3) The high redshift cluster MS1054.4-0321, which also suffers little absorption, has a relatively simple thermal spectrum, but has limited photon statistics. Not all instruments have observations of all of the sources, which is another limitation for this study.

## 2 Data Reduction and Analysis

To provide the pedestrian's view of the current status of the cross calibration, only publicly released software and calibration data files have been used for this work. For _XMM-Newton_ EPIC data, SAS V5.2 ([http://xmm.vilspa.esa.es/user/sas_Jop.html](http://xmm.vilspa.esa.es/user/sas_Jop.html)) has been used to extract source and background spectra, create the spectral redistribution matrices (RMFs), and create the ancillary region files (ARFs, effective area vectors).
For _Chandra_ ACIS-S3 data, CIAO 2.1 ([http://asc.harvard.edu/ciao/](http://asc.harvard.edu/ciao/)) was used with occasional help from the scripts of Keith Arnaud. Spectra for _ASCA_ GIS and SIS data as well as _ROSAT_ PSPC data were extracted, and RMFs (where necessary, otherwise standard RMFs from the public calibration data base were used, [ftp://legacy.gsfc.nasa.gov/](ftp://legacy.gsfc.nasa.gov/)) and ARFs were created using the HEASoft software package ([http://heasarc.gsfc.nasa.gov/docs/corp/software.html](http://heasarc.gsfc.nasa.gov/docs/corp/software.html)). In all cases, _Xspec_ was used to fit the data after grouping for statistical purposes using _grppha_ (_Xspec_ and _grppha_ are also part of the HEASoft software package).

## 3 The Cross Calibration

### G21.5\\(-\\)0.9

G21.5-0.9 is a Galactic SNR consisting of a Crab-like bright inner region and a fainter but clearly visible X-ray halo (Figure 1). (Note, for some of the "science" of this source, see the poster papers in these proceedings by La Palombara and Mereghetti, and Bocchino and Bandiera.) Data for this source are available from all instruments, although the _ROSAT_ PSPC observation is of limited utility because the source is so heavily absorbed. Because of the relatively poor angular resolution of the _ASCA_ instruments, extraction regions large enough to include the entire remnant were used (165\\({}^{\\prime\\prime}\\) extraction radii for _XMM-Newton_, _Chandra_, and _ROSAT_ data and 240\\({}^{\\prime\\prime}\\) for the _ASCA_ data). Source and background spectra were extracted for all instruments. The data were fit over the \\(0.5-10.0\\) keV energy range with variation in the endpoints due to the individual spectral responses of the various instruments. A simple absorbed power law spectrum was first fit simultaneously to the data with only the overall normalization being allowed to vary between the various instruments. The fits are displayed in Figure 2. While the fits are a bit rough below 1 keV, at higher energies they look quite good. (At energies below 1 keV interstellar absorption has removed most X-rays from the spectrum so what are typically detected are events which have lost some of their energy due to incomplete charge collection by the CCDs and electronics.)

Figure 1: XMM-Newton EPIC MOS1 image of G21.5-0.9 from the Science Validation observation.

Figure 2: Spectral fits of the G21.5-0.9 data. The color coding is listed on the plot as are the fitted fluxes in the 2–10 keV band and the relative normalizations for the different instruments (the PSPC results are not shown but are listed in Table 1).

Figure 3: Confidence contours for the spectral parameters for fits to the G21.5-0.9 data. The color coding is listed on the plot (the PSPC results are not shown). For this plot the EPIC data, GIS data, and SIS data were fit together to improve the statistical precision.

The fitted values for the relative fluxes (scaled to the average value) are in good agreement and range from 0.89 (_ROSAT_ PSPC) to 1.07, with EPIC and ACIS-S3 values in the range 1.00 to 1.07. Figure 3 shows the confidence contours for the fitted values of the power law index and absorbing column density. The spectral parameters of the EPIC PN and MOS detectors were fit simultaneously only allowing the normalizations to vary. This was also done for the SIS and GIS data to improve the statistics. The average results for the EPIC data are completely consistent with those of the SIS.
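The fitting strategy just described, one shared spectral shape with only per-instrument normalizations free, can be sketched generically; the following Python toy (entirely synthetic data, not the actual G21.5-0.9 spectra or the Xspec machinery) illustrates the idea:

```python
# Illustrative-only sketch: a single power law is fit jointly to two fake
# "instruments", sharing the photon index and freeing only the per-instrument
# normalization. Absorption is omitted since the hard band is barely affected.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
E = np.geomspace(2.0, 10.0, 25)                    # keV, hard band
truth = {"MOS1": 1.06, "PN": 1.00}                 # toy relative normalizations
data = {k: n * E**-1.85 * (1 + 0.03 * rng.standard_normal(E.size))
        for k, n in truth.items()}
err = {k: 0.03 * np.abs(d) for k, d in data.items()}

def chi2(p):
    gamma, norms = p[0], dict(zip(truth, p[1:]))
    return sum((((data[k] - norms[k] * E**-gamma) / err[k])**2).sum()
               for k in data)

fit = minimize(chi2, x0=[2.0, 1.0, 1.0], method="Nelder-Mead")
print("shared index:", round(fit.x[0], 3),
      "normalization ratio MOS1/PN:", round(fit.x[1] / fit.x[2], 3))
```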
The ACIS-S3 and EPIC slopes agree but there is a \\(\\sim 10\\%\\) difference in the fitted values for the absorption column densities. The GIS and EPIC results for the absorption column densities agree well but there is a difference of \\(\\sim 0.12\\) in the fitted values for the slope. Figure 4 shows a confidence contour plot for the EPIC and ACIS-S3 data when the EPIC data are fit independently. The PN and MOS1 values agree well while the MOS2 values are somewhat lower in both slope and column density.

Figure 4: Confidence contours for the spectral parameters for fits to the MOS1, MOS2, PN, and ACIS-S3 G21.5-0.9 data. The color coding is listed on the plot.

### 1E0102.2\\(-\\)7219

1E0102.2-7219 is a SNR in the Small Magellanic Cloud. It is beautifully resolved in the _Chandra_ data as a shell-like remnant. Its spectrum is soft and line-dominated, and very difficult to model short of fitting a vast number of Gaussians to the data. Unfortunately, it was not feasible to use the PN data from the EPIC observation as the positioning of the source for the advantage of the RGS caused part of the remnant to fall on a gap between the CCDs. For the fits two absorbed APEC models (see [http://hea-www.harvard.edu/APEC/](http://hea-www.harvard.edu/APEC/)) with variable abundances were used. The data were fit over the \\(0.3-2.0\\) keV band, and the fit was not particularly significant. However, the fits can still be used to compare the relative normalizations. As can be seen in Figure 6 (and Table 1), the relative normalizations range from 0.92 to 1.07, with the values for the ACIS-S3 and MOS detectors ranging from 0.96 to 1.05. As an aside, note the difference between the energy resolution of ACIS-S3 (green curve in Figure 6) and MOS spectra (the black and red curves) due to the differences in the response between backside and frontside illuminated CCDs.

Figure 5: XMM-Newton EPIC MOS1 image of 1E0102.2-7219 from the Calibration/Performance Verification observation.

Figure 6: Spectral fits of the 1E0102.2-7219 data. The color coding is listed on the plot as are the fitted fluxes and relative normalizations for the different instruments.

### MS1054.4-0321

MS1054.4-0321 (Figure 7) is a high redshift cluster in a direction of low Galactic column density. The limitation for this object as a good calibration source is its low brightness and therefore poorer statistics. Reasonable data are available for the EPIC MOS and PN, ACIS-S3, and SIS. While the SIS data aren't particularly useful for constraining the spectral parameters, they do provide a reasonable flux comparison. Figure 8 shows the spectral fits and relative fluxes for the EPIC, ACIS-S3, and SIS data. For these data an absorbed thermal model (Raymond & Smith 1977) was fit where the abundance was allowed to vary. For the EPIC and ACIS-S3 data, Figure 9 shows the confidence contours for the fitted values for the temperature and absorption column density. The EPIC data were fit simultaneously to improve the statistical results. The fitted values for the parameters are completely consistent.

Figure 7: XMM-Newton EPIC MOS1 image of MS1054.4-0321 from the GT observation kindly provided by Mike Watson.

Figure 8: Spectral fits of the MS1054.4-0321 data. The color coding is listed on the plot as are the fitted fluxes and relative normalizations for the different instruments.

Figure 9: Confidence contours for the spectral parameters for the EPIC and ACIS-S3 fits to the MS1054.4-0321 data. The color coding is listed on the plot.

## 4 Conclusions

Table 1 gives a summary of the relative flux normalizations for the simultaneous spectral fits for the three objects. In all cases the full range in the _XMM-Newton_ and _Chandra_ values is better than \\(\\sim 10\\%\\), which is fairly remarkable at this early a stage in the missions. When the _ROSAT_ and _ASCA_ data are included the full range is still \\(<20\\%\\).
One consistent systematic difference in the data is that the fluxes measured by the EPIC PN instrument are \\(\\sim 7\\%\\) lower than the fluxes measured by the EPIC MOS. This discrepancy is also seen in the results of Griffiths (this workshop) for the hard band, and both his paper and that of Haberl should be noted for their comparisons of the EPIC MOS and PN calibrations. The cross calibration situation is also fairly good when the rest of the spectral parameters are considered, although the number of useful comparisons is much more limited. The G21.5-0.9 results show that for a hard source the fitted values for the power law indices are completely consistent to better than 0.05 (\\(\\sim\\) 3%) for EPIC, ACIS-S3, and SIS data, and agree to \\(\\sim\\) 0.1 when the GIS data are included. The MS1054.4-0321 results for EPIC and ACIS-S3 also show good agreement, but the statistics are much poorer.

**Caveats:** There are a number of caveats which go along with these results. First, the calibrations and software were current as of the end of November, 2001. Both the calibration and the software implementation for _Chandra_ and especially for _XMM-Newton_ are changing with time, almost invariably for the better. Second, a fudge was included for the ACIS-S3 fits with a carbon K\\(\\alpha\\) absorption edge of optical depth 1.0 being added to attempt to account for a recently observed systematic discrepancy in the area calibration. The _Chandra_ CIAO software and calibration data are being modified to include this effect. Third, there are clear sensitivities to the energy range, background selection, spectral model, which data are being fit, and what parameters are being fit simultaneously. But this is expected and one of the challenges in trying to separate the "calibration" from the "science". Fourth, the _Chandra_ ACIS results are for the S3 CCD only.

As the _XMM-Newton_ and _Chandra_ missions progress, the instrument calibrations will also improve beyond the current levels. With additional data, the identification of systematic discrepancies between the results of various instruments will allow the calibration teams to refine their efforts.

## Acknowledgements

I would like to thank a number of people who have aided me significantly in the course of this work. Dave Lumb (ESTEC) and Richard Saxton and Steve Sembay (Leicester) with the EPIC data, Paul Plucinsky and Dick Edgar (CXC) and Kip Kuntz (NASA/GSFC and UMBC) with the ACIS-S3 data, and Ian George (NASA/GSFC and UMBC) with the SIS and GIS data.

| Object | G21.5-0.9 | 1E0102.2-7219 | MS1054.4-0321 |
| --- | --- | --- | --- |
| Band | 2.0–10.0 keV | 0.5–2.0 keV | 1.0–5.0 keV |
| MOS1 | 1.06 | 1.05 | 0.97 |
| MOS2 | 1.07 | 1.03 | 0.98 |
| PN | 1.00 | – | 0.92 |
| ACIS-S3 | 1.03 | 0.96 | 1.03 |
| SIS0 | 0.95 | 1.07 | 1.07 |
| SIS1 | 1.01 | 1.00 | 1.02 |
| GIS2 | 0.93 | 0.92 | – |
| GIS3 | 0.95 | 0.98 | – |
| PSPC | 0.89\\({}^{*}\\) | 0.99 | – |

Table 1: Summary table of relative flux normalizations. \\({}^{*}\\)Flux compared over the 0.5–2.5 keV band.
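As a sanity check on the ranges quoted in the conclusions, the spreads can be computed directly from Table 1; the following throwaway Python snippet (an addition for illustration) does the arithmetic:

```python
# Spread of the relative flux normalizations in Table 1: full range vs. the
# XMM-Newton + Chandra subset quoted in the conclusions.
table = {
    "G21.5-0.9":     {"MOS1": 1.06, "MOS2": 1.07, "PN": 1.00, "ACIS-S3": 1.03,
                      "SIS0": 0.95, "SIS1": 1.01, "GIS2": 0.93, "GIS3": 0.95,
                      "PSPC": 0.89},
    "1E0102.2-7219": {"MOS1": 1.05, "MOS2": 1.03, "ACIS-S3": 0.96,
                      "SIS0": 1.07, "SIS1": 1.00, "GIS2": 0.92, "GIS3": 0.98,
                      "PSPC": 0.99},
    "MS1054.4-0321": {"MOS1": 0.97, "MOS2": 0.98, "PN": 0.92, "ACIS-S3": 1.03,
                      "SIS0": 1.07, "SIS1": 1.02},
}
newton_chandra = {"MOS1", "MOS2", "PN", "ACIS-S3"}
for obj, vals in table.items():
    xc = [v for k, v in vals.items() if k in newton_chandra]
    print(f"{obj:15s} full range {max(vals.values()) - min(vals.values()):.2f}, "
          f"XMM+Chandra range {max(xc) - min(xc):.2f}")
```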
This paper presents a \"man on the street\" view of the current status of the spectral cross calibration between the _XMM-Newton_ EPIC, _Chandra_ ACIS-S3, _ASCA_ SIS and GIS, and _ROSAT_ PSPC instruments. Using publicly released software for the extraction of spectra and the production of spectral redistribution response matrices and effective areas, the spectral fits of data from three astronomical objects are compared. The three sources are G21.5-0.9 (a heavily absorbed Galactic SNR with a power law spectrum), 1E0102.2-7219 (a SNR in the SMC with a line-dominated spectrum), and MS1054.4-0321 (a high redshift cluster with a thermal spectrum). The agreement between the measured fluxes of the various instruments is within the \\(\\pm 10\\%\\) range, and is better when just _XMM-Newton_ and _Chandra_ are compared. Fitted spectral parameters are also in relatively good agreement although the results are more limited. Missions: XMM-Newton, Chandra, ASCA, ROSAT - calibration: cross calibration \\({}^{1}\\)NASA/Goddard Space Flight Center, Code 662, Greenbelt, MD 20771, USA \\({}^{2}\\)Universities Space Research Association
# On Arithmetic Detection of Grey Pulses With Application to Hawking Radiation

HARET C. ROSU [email protected] Dept. of Applied Mathematics, IPICyT, Apdo Postal 3-74 Tangamanga, San Luis Potosi, MEXICO

MICHEL PLANAT [email protected] Laboratoire de Physique et Metrologie des Oscillateurs du CNRS, 25044 Besancon Cedex, FRANCE

## 1 Introduction

There may exist black holes in the micron size range carrying on some external distribution of matter. Theoretical examples are (i) down-scaled _Weyl black holes_,[1] for which the metric potentials are solutions of polar Laplace equations, (ii) _black holes with Einstein shells_,[2] (iii) _primordial black holes_ (PBH) and/or any _mini black hole_ hovering through the universe and carrying on some matter distributions, (iv) _hairy black holes_,[3] with additional conserved quantum numbers beyond those allowed by the classical no hair theorems, and _dirty black holes_ in the sense of Visser,[4] i.e., black holes in interaction with various classical fields, for which the Hawking temperature appears to be suppressed relative to the vacuum black holes of equal area. In some cases, for small enough black holes, the external distribution of matter can be of such a kind as to disturb only slightly the pure horizon Hawking radiation, and consequently from the praxis standpoint we have a grey-body radiation problem. Hawking radiation by itself is distorted with respect to a pure black-body spectrum, especially in the low frequency regime, due to a grey-body factor usually identified with the square of the absorption amplitude for the mode.[5] A useful work on the nature of the grey body problem for black holes has been written by Schiffer.[6] In this letter, we first review the grey body inverse problem and the modified Mobius inverse transform (Chen's transform) in sections 2 and 3, respectively.[7] In section 4, we hint at a possible Ramanujan extension of Chen's transform with possible application to small black holes and the cosmological background radiation.

## 2 Inverse Grey-Body Problem

Planck's law provides the analytical formula for the emitted power spectrum from black body sources. In laboratory physics the emitted power spectrum is also called spectral brightness, or spectral radiance of the black body radiation. The latter notion is used in radiometry to characterize the spectral properties of the source as a function of position and direction from the source. For point, i.e., far away, grey sources the total radiated power spectrum, also called radiant spectral intensity, is \\[W(\\nu)\\sim\\int_{0}^{\\infty}A(T)B(\\nu,T)dT\\, \\tag{1}\\] where \\(A(T)\\) is the area temperature distribution of the grey body and \\(B(\\nu,T)\\) is the Boltzmann-Planck occupation factor. Finding out \\(A(T)\\) at given \\(W(\\nu)\\) is known as the inverse grey-body problem.\\({}^{8}\\) \\(W(\\nu)\\) may be known either experimentally or within some theoretical model. This inverse problem was solved in principle by Bojarski,\\({}^{9}\\) by means of a thermodynamic coldness parameter \\(u=h/kT\\), and an area coldness distribution \\(a(u)\\), as more convenient variables than \\(T\\) and \\(A(T)\\) to get an inverse Laplace transform of the total radiated power. The coldness distribution is obtained as an expansion in this Laplace transform.
Explicitly, the total grey power spectrum is rewritten as an integral over the coldness variable \\[W(\\nu)=\\frac{2h\\nu^{3}}{c^{2}}\\int_{0}^{\\infty}\\frac{a(u)}{\\exp(u\\nu)-1}du \\tag{2}\\] and furthermore as \\[W(\\nu)=\\frac{2h\\nu^{3}}{c^{2}}\\int_{0}^{\\infty}\\exp(-u\\nu)\\Big{[}\\sum_{n=1}^{\\infty}(1/n)a(u/n)\\Big{]}du. \\tag{3}\\] Therefore the sum under the integral, which we shall denote by \\(f(u)\\), is the Laplace transform of \\(g(\\nu)=\\frac{c^{2}}{2h\\nu^{3}}W(\\nu)\\), and the inverse Laplace transform of \\(g\\) will provide the sought coldness distribution. Despite the formal mathematical solution, the inverse grey-body problem is unstable for most numerical implementations, i.e., it belongs to the broad class of ill-posed inverse problems.\\({}^{10}\\)

## 3 Modified Mobius Transform (MMT)

Chen\\({}^{11}\\) obtained \\(a(u)\\) by means of the so-called modified Mobius transform (MMT) of \\(f(u)\\): \\[a(u)=\\sum_{n=1}^{\\infty}\\frac{\\mu(n)}{n}f(u/n). \\tag{4}\\] To understand Eq. (4) we recall a few basic results from the theory of numbers.\\({}^{12}\\) The Mobius expansion refers to special sums over divisors (d-sums), running over all the divisors of \\(n\\), \\(1\\) and \\(n\\) included, of any function \\(f(n)\\) defined on the positive integers: \\[S_{f}(n)=\\sum_{d|n}f(d). \\tag{5}\\] The remarkable fact in this case is that the _last_ term of the sum can be written in turn as a sum over the \\(S_{f}\\) arithmetical functions. The latter sum is called the inverse Mobius transform (or the Mobius d-sum) of \\(f\\): \\[f(n)=\\sum_{d|n}\\mu(d)S_{f}(n/d)\\, \\tag{6}\\] in which the d-sum \\(S_{f}(n)\\) becomes the _first_ term of the Mobius d-sum, and where \\(\\mu(d)\\) is the famous Mobius function. Since at the left-hand side of (6) one has only a term of a d-sum whereas on the right-hand side there is a sum of d-sums, there is clear overcounting, unless the Mobius function is sometimes either naught or negative. The partition of the prime factors of \\(n\\) implied by the Mobius function is such that, by definition, \\(\\mu(1)\\) is \\(1\\), \\(\\mu(n)\\) is \\((-1)^{r}\\) if \\(n\\) includes \\(r\\) distinct prime factors, and \\(\\mu(n)\\) is naught in all the other cases. In particular, all the squares have no contribution to the inverse Mobius transforms. That is why the integers selected by the Mobius function are also called square-free integers. Chen's MMT means to apply such an inversion of finite sums to infinite summations, and to ordinary functions of real continuous variable(s). MMT means that if \\[y_{1}(x)=\\sum_{n=1}^{\\infty}y_{2}(x/n)\\, \\tag{7}\\] then \\[y_{2}(x)=\\sum_{n=1}^{\\infty}\\mu(n)y_{1}(x/n). \\tag{8}\\] For the inverse grey-body problem, \\(y_{1}(u)=uf(u)\\) and \\(y_{2}(u)=ua(u)\\). So, one can get the coldness distribution by multiplying the Laplace transform of the total power spectrum by the coldness parameter, and then applying the MMT.

## 4 Ramanujan Generalization of Chen's Transform

Ramanujan sums are well known in number theory but only recently some physical applications have been suggested [13]. They are of the form \\[c_{q}(m)=\\sum_{p=1}^{q}\\cos(2\\pi mp/q)\\, \\tag{9}\\] with irreducible fractions \\(p/q\\). The sums are quasiperiodic in \\(m\\) and aperiodic in the denominator \\(q\\). They are a generalization of the Mobius function since \\(c_{q}(m)=\\mu(q)\\) whenever \\(q\\) and \\(m\\) are coprimes.
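Since the transform pair (7)-(8) and the Ramanujan sums are purely arithmetical, they are easy to check numerically. The following Python sketch (an illustration added here; the test function and truncation orders are arbitrary choices) implements the Mobius function, verifies the inversion on a test function, and confirms \\(c_{q}(1)=\\mu(q)\\):

```python
# Sketch of the Mobius function, Chen's transform pair (7)-(8), and the
# Ramanujan sums c_q(m) of Eq. (9).
import math

def mu(n):
    # Mobius function: 0 for a squared prime factor, else (-1)^(number of primes)
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def c(q, m):
    # Ramanujan sum over irreducible fractions p/q (gcd(p, q) = 1)
    return sum(math.cos(2 * math.pi * m * p / q)
               for p in range(1, q + 1) if math.gcd(p, q) == 1)

y2 = lambda x: x**4 * math.exp(-x)                       # test function
y1 = lambda x: sum(y2(x / n) for n in range(1, 500))     # forward sum (7)
x = 3.0
recovered = sum(mu(n) * y1(x / n) for n in range(1, 60)) # inverse (8)
print("y2(3) =", y2(x), " recovered ~", recovered)

# c_q(m) reduces to mu(q) when q and m are coprime, e.g. m = 1:
print("c_q(1) = mu(q):", all(abs(c(q, 1) - mu(q)) < 1e-9 for q in range(1, 41)))
```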
The MMT for black holes in the Ramanujan notation will be \\[[ua(u)]_{(1)}=\\sum_{q=1}^{\\infty}c_{q}(1)uf(u/q)\\, \\tag{10}\\] but the point is that many other Ramanujan inverse transforms can be introduced through \\[[ua(u)]_{(i)}=\\sum_{q=1}^{\\infty}c_{q}(i)uf(u/q)\\, \\tag{11}\\] for integers \\(i>1\\) and \\((q,i)=1\\). One can see that a sequence of two-dimensional analyses \\((i,q)\\) of the signal is available in this approach, similarly to the time-frequency analysis that is so characteristic of wavelets. As an example, for micron-sized Schwarzschild black holes (\\(M\\sim 10^{24}\\) g), no known massive particles are thermally emitted, and according to the calculations of Page [14], about 16% of the Hawking flux goes into photons, the rest being neutrino emission. Let us consider these black holes as grey objects of the following two classes: (i) grey in their own right [5], and (ii) of the Weyl type. The coldness parameter will be in the first case \\(u_{S}=\\frac{1}{\\nu}\\ln\\left(1+\\frac{e^{\\beta_{h}h\\omega}-1}{\\Gamma(\\omega)}\\right)\\), where \\(\\beta_{h}\\) is the horizon inverse temperature parameter, and \\(\\Gamma(\\omega)\\) is the penetration factor of the curvature and angular momentum barrier around the black hole [5], whereas in the latter case \\(u_{S}=h/kT_{d}\\), where \\(T_{d}\\) can be considered as an effective horizon temperature of the distorted black holes, \\(T_{d}=(8\\pi M)^{-1}\\exp(2{\\cal U})\\), where \\({\\cal U}\\) is given in the work of Geroch and Hartle. With the coldness parameter at hand one can apply the aforementioned number methods for the black hole emissivity problem. Before closing, we mention that an alternative viewpoint on MMT developed by Hughes _et al_ [15], in terms of the Mellin transform and Riemann's \\(\\zeta\\) function, is also extremely interesting if one takes into account that the Bekenstein-Mukhanov spectrum [16] could be considered as the eigenvalue problem of relativistic Schroedinger equations in finite differences [17]. Thus, it appears that a richer information content, one that may reveal hidden discrete features, can be extracted from the spectra of important astrophysical signals by extensive use of number-theoretical techniques. The cosmological background radiation signal can be studied by the same approach and with the same aim. There are claims by Hogan\\({}^{18}\\) that inflationary perturbations display discreteness not predicted by the standard field theory and that this discreteness may be observable in cosmic background anisotropy.

## References

* [1] H. Weyl, _Ann. Phys._**54** (1917) 117; for recent works, see R. Geroch and J. Hartle, _J. Math. Phys._**23** (1982) 680; J.P.S. Lemos and P.S. Letelier, _Phys. Rev._**D49** (1994) 5135, and references therein.
* [2] A. Einstein, _Ann. Math._**40** (1939) 922. For recent papers see P.R. Brady, J. Louko, and E. Poisson, _Phys. Rev._**D44** (1992) 1891; G.L. Comer, D. Langlois, and P. Peter, _Class. Quantum Grav._**10** (1993) L127; G.L. Comer and J. Katz, _Class. Quantum Grav._**10** (1993) 1751.
* [3] M.J. Bowick _et al_, _Phys. Rev. Lett._**61** (1988) 2823.
* [4] M. Visser, _Phys. Rev._**D46** (1992) 2445.
* [5] J.D. Bekenstein, _Phys. Rev. Lett._**70** (1993) 3680; C.F.E. Holzhey and F. Wilczek, _Nucl. Phys._**B380** (1992) 447; V. Balasubramanian and F. Larsen, _Nucl. Phys._**B495** (1997) 206; J.M. Maldacena and A. Strominger, _Phys. Rev._**D55** (1997) 861.
* [6] M. Schiffer, _Gen. Rel. Grav._**27** (1995) 1, and references therein.
* [7] H.C. Rosu, _Mod. Phys.
Lett._**A13** (1998) 695. * [8] For a concise review, see A. Lakhtakia, _Mod. Phys. Lett._**B5** (1991) 491. * [9] N.N. Bojarski, _IEEE Trans. Antennas Propag._**30** (1982) 778. * [10] Xin Tan _et al_, _J. Opt. Soc. Am._**A11** (1994) 1068. * [11] N.X. Chen, _Phys. Rev. Lett._**64** (1990) 1193, 3202(E); J. Maddox, _Nature_**344** (1990) 377. * [12] A. Baker, _A Concise Introduction to the Theory of Numbers_, (Cambridge University Press, Cambridge 1984). * [13] M. Planat, _Ramanujan sums for signal processing of low frequency noise_, contribution at IEEE International Frequency Control Symposium, New Orleans, USA, (29-31 May 2002); M. Planat, H.C. Rosu, S. Perrine, _Arithmetical chaology and the signatures of 1/f noise_, contribution at TH-2002, Paris, (22-27 July 2002). * [14] D.N. Page, _Phys. Rev._**D13** (1976) 198. * [15] B.D. Hughes, N.E. Frankel, B.W. Ninham, _Phys. Rev._**A42** (1990) 3643. * [16] J.D. Bekenstein, V.F. Mukhanov, _Phys. Lett._**B360** (1995) 7. * [17] V.A. Berezin, A.M. Boyarsky, A. Yu. Neronov, _On the spectrum of relativistic Schroedinger equation in finite differences_, gr-qc/9902028. * [18] C.J. Hogan, _Holographic discreteness of inflationary perturbations_, _Phys. Rev._**D** (2002) to appear, astro-ph/0201020.
Micron-sized black holes do not necessarily have a constant horizon temperature distribution. The black hole remote-sensing problem means to find out the 'surface' temperature distribution of a small black hole from the spectral measurement of its (Hawking) grey pulse. This problem has been previously considered by Rosu, who used Chen's modified Mobius inverse transform. Here, we hint at a Ramanujan generalization of Chen's modified Mobius inverse transform that may be considered as a special wavelet processing of the remote-sensed grey signal coming from a black hole or any other distant grey source.

Mobius transform, Ramanujan sums, grey-body, Hawking radiation, black hole
# Renormalization Flow From UV to IR Degrees of Freedom1

Footnote 1: Talk given by H.G. at the conference RG-2002, March 10-16, 2002, Strba, Slovakia.

Holger Gies2 TH Division, CERN, CH-1211 Geneva 23, Switzerland

Christof Wetterich3 Institut fur theoretische Physik, Universitat Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany

Footnote 2: E-mail address: [email protected]

Footnote 3: E-mail address: [email protected]

Submitted April 18, 2002

## 1 Introduction

For an investigation of a system of interacting quantum fields, it is mandatory to identify the "true" degrees of freedom of the system. As we know from many physical systems such as QCD, or the plethora of condensed-matter systems, the nature of these degrees of freedom of one and the same system can be very different at different momentum (or length) scales. Of course, the first physical task is the identification of these relevant degrees of freedom at the various scales. Simplicity can be an appropriate criterion for this, in particular, simplicity of the effective action governing these degrees of freedom. Whereas quantum field theory is usually defined in terms of a functional integral over quantum fluctuations of those field variables that correspond to the degrees of freedom in the ultraviolet (UV), we are often interested in the properties of the system in the infrared (IR). In some, albeit rare, instances, we know not only the true degrees of freedom at these different scales, but also the formal translation prescription of one set of variables into the other in terms of a discrete integral transformation. An example is given by the Nambu-Jona-Lasinio (NJL) model [1] in which self-interacting fermions (UV variables; "quarks") can be translated into an equivalent system of (pseudo-)scalar bosons (IR variables; "mesons") with Yukawa couplings to the fermions. This is done by means of a Hubbard-Stratonovich transformation, also called partial bosonization. A purely bosonic theory can then be obtained by integrating out the fermions. Integrating out the fermions at once leads, however, to highly nonlocal effective bosonic interactions. This problem can be avoided by integrating out the short distance fluctuations stepwise by means of the renormalization group. In this context, a continuous translation from multifermion to bosonic interactions would be physically more appealing, since it would reflect the continuous transition from the ultraviolet to the infrared more naturally. Furthermore, phases in which different degrees of freedom coexist could be described more accurately. In the following, we will report on a new approach which is capable of describing such a continuous translation. The approach is based on an exact renormalization group flow equation for the effective average action [2] allowing for a scale-dependent transformation of the field variables [3].4 In order to keep this short presentation as transparent as possible, we will discuss our approach by way of example, focussing on the gauged version of the NJL model which shares many similarities with, e.g., building blocks of the standard model.

Footnote 4: For an earlier approach, see [4]. A general account of field transformations within flow equations has been given in [5].
We shall consider the gauged NJL model for one fermion flavor in its simplest version characterized by two couplings in the UV: the gauge coupling \\(e\\) of the fermions to an abelian gauge field \\(\\sim\\bar{\\psi}A\\!\\!\\!/\\psi\\), and the chirally invariant four-fermion self-interaction in the (pseudo-)scalar channel \\(\\sim\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\) with coupling \\(\\lambda_{\\rm NJL}\\). Depending on the values of these couplings, the gauged NJL model interpolates between the pure NJL model entailing chiral-symmetry breaking (\\(\\chi\\)SB) for strong \\(\\lambda_{\\rm NJL}\\) coupling and (massless) QED for weak \\(\\lambda_{\\rm NJL}\\); for simplicity, the gauge coupling is always assumed to be weak in the present work. The physical properties and corresponding degrees of freedom in the infrared depend crucially on \\(\\lambda_{\\rm NJL}\\): we expect fermion condensates and bosonic excitations on top of the condensate in the case of strong coupling, but bound states such as positronium at weak coupling. We shall demonstrate that our flow equation describes these features in a unified manner. The question as to whether the fields behave like fundamental particles or bound states thereby receives a scale-dependent answer; in particular, this behavior can be related to a new infrared fixed-point structure with interesting physical implications.

## 2 Fundamental particles versus bound states

Let us study the scale-dependent effective action \\(\\Gamma_{k}\\) for the abelian gauged NJL model (\\(N_{\\rm f}=1\\)) including the scalars arising from bosonization in the following simple truncation, \\[\\Gamma_{k} = \\int d^{4}x\\bigg{\\{}\\bar{\\psi}{\\rm i}\\partial\\!\\!\\!/\\psi+2\\bar{\\lambda}_{\\sigma,k}\\,\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}-e\\bar{\\psi}A\\!\\!\\!/\\psi+\\frac{1}{4}F_{\\mu\\nu}F_{\\mu\\nu}\\] \\[\\qquad\\qquad+Z_{\\phi,k}\\partial_{\\mu}\\phi^{*}\\partial_{\\mu}\\phi+\\bar{m}_{k}^{2}\\,\\phi^{*}\\phi+\\bar{h}_{k}(\\bar{\\psi}_{\\rm R}\\psi_{\\rm L}\\phi-\\bar{\\psi}_{\\rm L}\\psi_{\\rm R}\\phi^{*})\\bigg{\\}}, \\tag{1}\\] where we take over the conventions from [3]. Beyond the kinetic terms, we focus on the fermion self-interaction \\(\\sim\\bar{\\lambda}_{\\sigma,k}\\), the scalar mass \\(\\sim\\bar{m}_{k}^{2}\\), and the Yukawa coupling between the fermions and the scalars \\(\\sim\\bar{h}_{k}\\). In the framework of exact RG equations, the infrared scale \\(k\\) divides the quantum fluctuations into modes with momenta \\(k<p<\\Lambda\\) that have been integrated out, so that \\(\\Gamma_{k}\\) governs the dynamics of those modes with momenta \\(p<k\\) which still have to be integrated out in order to arrive at the full quantum effective action \\(\\Gamma_{k\\to 0}\\). The RG flow of \\(\\Gamma_{k}\\) to the quantum effective action is described by a functional differential equation [2] which we solve within the truncation given by Eq. (1). The flow is initiated at the UV cutoff \\(\\Lambda\\), which in our case also serves as the bosonization scale, and we fix the couplings according to \\[\\lambda_{\\rm NJL}=\\frac{1}{2}\\,\\frac{\\bar{h}_{\\Lambda}^{2}}{\\bar{m}_{\\Lambda}^{2}},\\quad\\bar{\\lambda}_{\\sigma,\\Lambda}=0,\\quad Z_{\\phi,\\Lambda}=0.\\] In other words, all fermion self-interactions are put into the Yukawa interaction \\(\\bar{h}_{k}\\) and the scalar mass \\(\\bar{m}_{k}^{2}\\) at the bosonization scale \\(\\Lambda\\), and the standard form of the gauged NJL model in a purely fermionic language could be recovered by performing the Gaussian integration over the scalar field.
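To make the last statement concrete, a small symbolic sketch (added for illustration) can mimic the Gaussian integration at the level of the equation of motion for \\(\\phi\\): treating the fermion bilinears as commuting sources and ignoring ordering signs from their Grassmann nature, eliminating \\(\\phi\\) produces a four-fermion term of strength \\(\\bar{h}^{2}/\\bar{m}^{2}\\), consistent with \\(\\lambda_{\\rm NJL}=\\bar{h}_{\\Lambda}^{2}/(2\\bar{m}_{\\Lambda}^{2})\\) above:

```python
# Sympy sketch of the mean-field elimination of phi: R ~ psibar_R psi_L and
# L ~ psibar_L psi_R are treated as commuting sources here, so signs from
# Grassmann ordering are not tracked. Solving the phi equation of motion
# reproduces a four-fermion term of strength hbar**2 / mbar2 (up to sign).
import sympy as sp

m2, h, R, L = sp.symbols('mbar2 hbar R L', positive=True)
phi, phis = sp.symbols('phi phistar')

V = m2 * phis * phi + h * (R * phi - L * phis)
sol = sp.solve([sp.diff(V, phi), sp.diff(V, phis)], [phi, phis])
print("V at the stationary point:", sp.simplify(V.subs(sol)))
```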
Concentrating on the flow of the couplings \\(\\bar{m}_{k}^{2},\\bar{h}_{k},\\bar{\\lambda}_{\\sigma,k}\\), we find5 (\\(\\partial_{t}\\equiv k(d/dk)\\)):

Footnote 5: The numerical coefficients on the RHS’s of Eqs. (2) depend on the implementation of the IR cutoff procedure at the scale \\(k\\) and on the choice of the Fierz decomposition of the four-fermion interactions. For the former point, we use a linear cutoff function [6] (see also D.F. Litim’s contribution to this volume). For the latter, we choose a \\((S-P)\\), \\((V)\\) decomposition, but display only the (pseudo-)scalar channels here; the vectors are discussed in [3]. Furthermore, we work in the Feynman gauge.

\\[\\partial_{t}\\bar{m}_{k}^{2} = \\frac{k^{2}}{8\\pi^{2}}\\,\\bar{h}_{k}^{2},\\] \\[\\partial_{t}\\bar{h}_{k} = -\\frac{1}{2\\pi^{2}}\\,e^{2}\\,\\bar{h}_{k}+{\\cal O}(\\bar{\\lambda}_{\\sigma,k}), \\tag{2}\\] \\[\\partial_{t}\\bar{\\lambda}_{\\sigma,k} = -\\frac{9}{8\\pi^{2}k^{2}}\\,e^{4}+\\frac{1}{32\\pi^{2}Z_{\\phi,k}^{2}}\\,\\frac{3+\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}}}{(1+\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}})^{3}}\\,\\bar{h}_{k}^{4}+{\\cal O}(\\bar{\\lambda}_{\\sigma,k}).\\] We observe that, although the four-fermion interaction has been bosonized to zero at \\(\\Lambda\\), \\(\\bar{\\lambda}_{\\sigma,\\Lambda}=0\\), integrating out quantum fluctuations reintroduces four-fermion interactions again owing to the RHS of the last equation; for instance, the first term \\(\\sim e^{4}\\) arises from gauge boson exchange. Bosonization in the standard approach is obviously complete only at \\(\\Lambda\\). However, guided by the demand for simplicity of the effective action at any scale \\(k\\), we would like to get rid of the fermion self-interaction at all scales, i.e., re-bosonize under the flow. Here the idea is to employ a flow equation for a scale-dependent effective action \\(\\Gamma_{k}[\\phi_{k}]\\), where the scalar field variable \\(\\phi_{k}\\) is allowed to vary during the flow; this flow equation is derived in [3], and can be written in a simple form as \\[\\partial_{t}\\Gamma_{k}[\\phi_{k}]=\\partial_{t}\\Gamma_{k}[\\phi_{k}]\\big{|}_{\\phi_{k}}+\\int_{q}\\left(\\frac{\\delta\\Gamma_{k}}{\\delta\\phi_{k}(q)}\\,\\partial_{t}\\phi_{k}(q)+\\frac{\\delta\\Gamma_{k}}{\\delta\\phi_{k}^{*}(q)}\\,\\partial_{t}\\phi_{k}^{*}(q)\\right), \\tag{3}\\] where the notation omits the remaining fermion and gauge fields for simplicity. The first term on the RHS is nothing but the flow equation for fixed variables evaluated at fixed \\(\\phi_{k}\\) instead of \\(\\phi=\\phi_{\\Lambda}\\). The second term reflects the flow of the variables. In the present case, we may choose \\[\\partial_{t}\\phi_{k}(q)=-(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k},\\quad\\partial_{t}\\phi_{k}^{*}(q)=(\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\,\\partial_{t}\\alpha_{k}, \\tag{4}\\] where the transformation parameter \\(\\alpha_{k}(q)\\) is an a priori arbitrary function.
Projecting Eq. (3) onto our truncation (1), we arrive at modified flows for the couplings (the equation for \\(\\bar{m}_{k}^{2}\\) remains unmodified): \\[\\partial_{t}\\bar{h}_{k} = \\partial_{t}\\bar{h}_{k}\\big{|}_{\\phi_{k}}+\\bar{m}_{k}^{2}\\,\\partial_{t}\\alpha_{k}, \\tag{5}\\] \\[\\partial_{t}\\bar{\\lambda}_{\\sigma,k} = \\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\big{|}_{\\phi_{k}}-\\bar{h}_{k}\\,\\partial_{t}\\alpha_{k}.\\] We can now obtain bosonization at all scales, \\(\\bar{\\lambda}_{\\sigma,k}=0\\), if we adjust \\(\\alpha_{k}\\) in such a way that the RHS of the \\(\\partial_{t}\\bar{\\lambda}_{\\sigma,k}\\) equation equals zero for all \\(k\\). This, of course, affects the flow of the Yukawa coupling \\(\\bar{h}_{k}\\). The physical effect can best be elucidated with the aid of the convenient coupling \\(\\tilde{\\epsilon}_{k}:=\\frac{\\bar{m}_{k}^{2}}{k^{2}\\bar{h}_{k}^{2}}\\) and its RG flow: \\[\\partial_{t}\\tilde{\\epsilon}_{k}=-2\\tilde{\\epsilon}_{k}+\\frac{1}{8\\pi^{2}}+\\frac{e^{2}}{\\pi^{2}}\\tilde{\\epsilon}_{k}+\\frac{9e^{4}}{4\\pi^{2}}\\tilde{\\epsilon}_{k}^{2}-\\frac{1}{16\\pi^{2}}\\frac{\\epsilon_{k}^{2}(3+\\epsilon_{k})}{(1+\\epsilon_{k})^{3}}, \\tag{6}\\] where we also abbreviated \\(\\epsilon_{k}:=\\frac{\\bar{m}_{k}^{2}}{Z_{\\phi,k}k^{2}}\\). A schematic plot of \\(\\partial_{t}\\tilde{\\epsilon}_{k}\\) is displayed in Fig. 1 where the occurrence of two fixed points is visible (note that all qualitative features discussed here are insensitive to the last term of Eq. (6)).

Figure 1: Schematic plot of the fixed-point structure of the \\(\\tilde{\\epsilon}_{k}\\) flow equation after fermion-boson translation. Arrows indicate the flow towards the infrared, \\(k\\to 0\\).

The first fixed point \\(\\tilde{\\epsilon}_{1}^{*}\\) is infrared unstable and corresponds to the inverse critical \\(\\lambda_{\\rm NJL}\\) coupling. Starting with an initial value of \\(0<\\tilde{\\epsilon}_{\\Lambda}<\\tilde{\\epsilon}_{1}^{*}\\) (strong coupling), the flow of the scalar mass-to-Yukawa-coupling ratio will be dominated by the first two terms in the flow equation (6) \\(\\sim-2\\tilde{\\epsilon}_{k}+1/(8\\pi^{2})\\). This is a typical flow of a theory involving a "fundamental" scalar with Yukawa coupling to a fermion sector. Moreover, we will end in a phase with (dynamical) chiral symmetry breaking, since \\(\\tilde{\\epsilon}\\sim\\bar{m}_{k}^{2}\\) is driven to negative values. On the other hand, if we start with \\(\\tilde{\\epsilon}_{\\Lambda}>\\tilde{\\epsilon}_{1}^{*}\\), the flow will necessarily be attracted towards the second infrared-stable fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\). There will be no dynamical symmetry breaking, since the mass remains positive. The effective four-fermion interaction corresponding to the second fixed point reads: \\(\\lambda_{\\sigma}^{*}=\\frac{1}{2k^{2}\\tilde{\\epsilon}_{2}^{*}}\\approx\\frac{9}{16\\pi^{2}}\\frac{e^{4}}{k^{2}}\\), which coincides with the perturbative value of massless QED. We conclude that the second fixed point characterizes massless QED. The scalar field shows a typical bound-state behavior with mass and couplings expressed by \\(e\\) and \\(k\\). A more detailed analysis reveals that the scalar field corresponds to positronium at this fixed point [3]. Our interpretation is that the "range of relevance" of these two fixed points tells us whether the scalar appears as a "fundamental" or a "composite" particle, corresponding to the state of the system being governed by \\(\\tilde{\\epsilon}_{1}^{*}\\) or \\(\\tilde{\\epsilon}_{2}^{*}\\), respectively.
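For orientation, the two roots of the flow (6) are easily located numerically; the sketch below (an addition, with an arbitrary illustrative value of \\(e^{2}\\)) drops the last term of Eq. (6), as suggested by the remark above, and classifies the IR stability via the slope of \\(\\partial_{t}\\tilde{\\epsilon}\\):

```python
# Sketch locating the fixed points of the flow (6) for an illustrative weak
# coupling; e^2 = 0.3 is an arbitrary choice, and the last term of Eq. (6)
# is dropped (the text notes the qualitative features are insensitive to it).
import numpy as np

e2 = 0.3
a = 9 * e2**2 / (4 * np.pi**2)      # coefficient of eps~^2
b = e2 / np.pi**2 - 2               # coefficient of eps~
c = 1 / (8 * np.pi**2)              # constant term

for r in np.sort(np.roots([a, b, c]).real):
    slope = 2 * a * r + b           # d(beta)/d(eps~) at the fixed point
    kind = "IR-stable (bound state)" if slope > 0 else "IR-unstable (critical)"
    print(f"eps~* = {r:9.5f}   beta' = {slope:+.3f}   {kind}")

# Weak-coupling estimates implied by the text: eps~_1* ~ 1/(16 pi^2) from
# -2 eps~ + 1/(8 pi^2) = 0, and eps~_2* ~ 8 pi^2/(9 e^4) from lambda_sigma^*.
print(1 / (16 * np.pi**2), 8 * np.pi**2 / (9 * e2**2))
```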
## 3 Physics at the bound-state fixed point

The bound-state fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\) shows further interesting physical properties. In order to unveil them, we have to include momentum dependences of the couplings; in particular, we study the momentum dependence of \\(\\bar{\\lambda}_{\\sigma,k}\\) in the \\(s\\) channel. Then we can generalize the fermion-to-boson translation (4), \\[\\partial_{t}\\phi_{k}(q)=-(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k}(q)+\\phi_{k}(q)\\,\\partial_{t}\\beta_{k}(q), \\tag{7}\\] (and similarly for \\(\\phi^{*}\\)) with another a priori arbitrary function \\(\\beta_{k}(q)\\). Now we can fix \\(\\alpha_{k}(q)\\) and \\(\\beta_{k}(q)\\) in such a way that \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) vanishes simultaneously for all \\(s\\) and \\(k\\) and that \\(\\bar{h}_{k}\\) becomes momentum-independent. Defining the dimensionless renormalized couplings \\(\\epsilon_{k}=\\bar{m}_{k}^{2}/(Z_{\\phi,k}k^{2})\\), \\(h_{k}=\\bar{h}_{k}\\,Z_{\\phi,k}^{-1/2}\\), this procedure leads us to the final flow equations [3], \\[\\partial_{t}\\epsilon_{k} = -2\\epsilon_{k}+\\frac{h_{k}^{2}}{8\\pi^{2}}-\\frac{\\epsilon_{k}(\\epsilon_{k}+1)}{h_{k}^{2}}\\left(\\frac{9e^{4}}{4\\pi^{2}}-\\frac{h_{k}^{4}}{16\\pi^{2}}\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}\\right)\\big{(}1+(1+\\epsilon_{k})Q_{\\sigma}\\big{)},\\] \\[\\partial_{t}h_{k} = -\\frac{e^{2}}{2\\pi^{2}}\\,h_{k}-\\frac{2\\epsilon_{k}+1+(1+\\epsilon_{k})^{2}Q_{\\sigma}}{h_{k}}\\left(\\frac{9e^{4}}{8\\pi^{2}}-\\frac{h_{k}^{4}}{32\\pi^{2}}\\frac{3+\\epsilon_{k}}{(1+\\epsilon_{k})^{3}}\\right). \\tag{8}\\] Using \\(\\tilde{\\epsilon}_{k}=\\epsilon_{k}/h_{k}^{2}\\), Eq. (6) can be rediscovered from Eqs. (8). Defining \\(\\Delta\\bar{\\lambda}_{\\sigma,k}:=\\bar{\\lambda}_{\\sigma,k}(k^{2})-\\bar{\\lambda}_{\\sigma,k}(0)\\), the quantity \\(Q_{\\sigma}\\equiv\\partial_{t}\\Delta\\bar{\\lambda}_{\\sigma,k}/\\partial_{t}\\bar{\\lambda}_{\\sigma,k}(0)\\) measures the suppression of \\(\\bar{\\lambda}_{\\sigma,k}(s)\\) for large external momenta. Without an explicit computation, we may conclude that this suppression implies \\(Q_{\\sigma}<0\\), in agreement with unitarity (e.g., \\(Q_{\\sigma}\\simeq-0.1\\)). In Fig. 2, a numerical solution of Eqs. (8) is presented in which we release the system at the scale \\(\\Lambda\\) with \\(\\tilde{\\epsilon}_{\\Lambda}>\\tilde{\\epsilon}_{1}^{*}\\), so that it approaches \\(\\tilde{\\epsilon}_{2}^{*}\\) in the IR. We observe that both \\(h_{k}\\) and \\(\\epsilon_{k}\\) approach fixed points in the IR. (For analytical results for the fixed points, see [3].) In particular, this implies that the scalar mass term \\(m_{k}^{2}=\\epsilon_{k}k^{2}\\) decreases with \\(k^{2}\\) in the symmetric phase. This is clearly a nonstandard running of a scalar particle mass. As a consequence, a large scale separation \\(\\Lambda\\gg k\\) gives rise to a large mass scale separation \\(m_{\\Lambda}\\gg m_{k}\\) without any fine-tuning of the initial parameters.

Figure 1: Schematic plot of the fixed-point structure of the \\(\\tilde{\\epsilon}_{k}\\) flow equation after fermion-boson translation. Arrows indicate the flow towards the infrared, \\(k\\to 0\\).

## 4 Conclusions and Outlook

Within the framework of exact renormalization group flow equations for the effective average action, a scale-dependent transformation of the field variables provides for a continuous translation of UV to IR degrees of freedom. This concept is able to realize the physical criterion of desired simplicity of the effective action. Using the gauged NJL model as an example, this translation can be regarded as partial bosonization at all scales.
Here we identified an infrared fixed-point structure which can be associated with bound-state behavior. One main result is that the RG flow of the scalar mass at the bound-state fixed point is "natural" in 't Hooft's sense, so that no fine-tuning problem arises if we want to have small masses at scales far below the UV cutoff, \\(k\\ll\\Lambda\\). It would be interesting to see whether this possibility of a naturally small scalar mass is applicable to the gauge hierarchy problem of the standard model. For this purpose, a mechanism has to be identified that causes the system to flow into the phase with spontaneous symmetry breaking after it has spent some "RG time" at the bound-state fixed point. Phrased differently, the bound-state fixed point has to disappear in the deep IR. Taking a first glance at Eq. (6), or its immediate nonabelian generalization for SU(\\(N_{\\rm c}\\)) gauge groups (here we use the Landau gauge), \\[\\partial_{t}\\tilde{\\epsilon}_{k}=-\\left(2-\\frac{3\\,C_{2}}{4\\pi^{2}}\\,g_{k}{}^{2}\\right)\\tilde{\\epsilon}+\\frac{N_{\\rm c}}{8\\pi^{2}}+\\frac{9}{8\\pi^{2}}\\frac{C_{2}}{N_{\\rm c}}\\left(C_{2}-\\frac{1}{2N_{\\rm c}}\\right)\\,g_{k}{}^{4}\\,\\tilde{\\epsilon}^{2}+{\\cal O}(\\epsilon^{2}), \\tag{9}\\] where \\(g_{k}\\) is the running gauge coupling and \\(C_{2}=(N_{\\rm c}^{2}-1)/(2N_{\\rm c})\\), we find that the parabola depicted in Fig. 1 is lifted and the fixed points vanish for large gauge coupling. In this case, the system would finally run into the \\(\\chi\\)SB phase once the gauge coupling has grown large enough. The question as to whether this mechanism can successfully be applied to a sector of the standard model is currently under investigation.

## Acknowledgment

H.G. would like to thank the organizers of this conference for creating a lively and stimulating atmosphere. This work has been supported by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-1 and KON 362/2002.

## References

* [1] Y. Nambu, G. Jona-Lasinio: Phys. Rev. **122** (1961) 345; ibid. **124** (1961) 246
* [2] C. Wetterich: Phys. Lett. B **301** (1993) 90
* [3] H. Gies, C. Wetterich: Phys. Rev. D **65** (2002) 065001
* [4] U. Ellwanger, C. Wetterich: Nucl. Phys. B **423** (1994) 137; D. U. Jungnickel, C. Wetterich, in "The Exact Renormalization Group", eds. A. Krasnitz, Y. Kubyshin, R. Potting and P. Sa, World Scientific, Singapore (1999)
* [5] J. I. Latorre, T. R. Morris: JHEP **0011** (2000) 004
* [6] D. F. Litim: Phys. Lett. B **486** (2000) 92; Phys. Rev. D **64** (2001) 105007
Within the framework of exact renormalization group flow equations, a scale-dependent transformation of the field variables provides for a continuous translation of UV to IR degrees of freedom. Using the gauged NJL model as an example, this translation results in a construction of partial bosonization at all scales. A fixed-point structure arises which makes it possible to distinguish between fundamental-particle and bound-state behavior of the scalar fields.
# Chondrules and Nebular Shocks

E. I. Chiang

Center for Integrative Planetary Sciences, Astronomy Department, University of California at Berkeley, Berkeley, CA 94720, USA

[email protected]

###### meteors, meteoroids

Beneath the fusion-encrusted surfaces of the most primitive stony meteorites lies not homogeneous rock, but a profusion of millimeter-sized igneous spheres [see Hewins (1996) and other articles in the excellent compendium edited by Hewins, Jones, & Scott]. These _chondrules_, and their centimeter-sized counterparts, the _CAIs_ (calcium-aluminum-rich inclusions), comprise more than half of the volume fraction of chondritic meteorites. They are the oldest creations of the solar system; the oft-quoted age of the solar system of \\(4.566\\pm 0.002\\) billion years refers to the crystallization ages of CAIs as determined from radioactive isotope dating. Their chemical composition matches that of the solar photosphere in all but the most volatile of elements, reflecting their condensation from the same pristine gas that formed the sun. Their petrology is consistent with their being heated to super-liquidus temperatures for a period of a few minutes; their roundness suggests that the heating occurred while chondrule precursors were suspended in space, so that surface tension pulled their shapes into spheres. In the two hundred years since the discovery of chondrules, this heating event has shrouded itself in secrecy. Identifying the mechanism would be prize enough in itself; but the stakes are potentially even greater for those who suspect that the primitive character of chondrules and their substantial volume-filling fraction implicate them in the equally mysterious process of planet formation; that micron-sized dust grains could agglomerate to kilometer-sized planetesimals only by first taking the form of millimeter-sized, molten marbles; that the chondrule heating mechanism and the means of planetesimal assembly are part and parcel of the same physical process. Numerous theories have been proposed for the formation of chondrules. Many are staged within the primordial solar nebula, the circumsolar disk of gas and dust from which the planets congealed. None of these proposals has gained general acceptance. The proposals (see the summary by Boss 1996) range from nebular lightning (but how can we hope to understand electrical discharges in the solar nebula when we fail to understand them on Earth?), to collisions between molten planetesimals, to irradiation by particle flares in the vicinity of a magnetically active early sun (Shu et al. 2000). Rarely are the predictions of any given theory so detailed as to warrant comparison with the wealth of experimental data available on chondrites. The article by Desch and Connolly constitutes one of these welcome exceptions. The potent combination of theoretical astrophysicist and experimental petrologist considers the hypothesis that chondrules were heated by shock waves propagating through the nebula. Without devoting much attention to the question of the origin of these shocks, they ask whether a nebular shock wave could _in principle_ generate thermal histories for chondrules that are consistent with the mineralogical and textural evidence (see also Hood & Horanyi 1993; Connolly & Love 1998). The answer is enthusiastically affirmative. Solving the equations of conservation of mass, momentum, and energy, they compute the detailed temperature and density profile of a one-dimensional, plane-parallel, steady shock.
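For orientation, the gross jump in gas conditions across such a shock already follows from the ideal-gas Rankine-Hugoniot relations. The sketch below is ours, not Desch and Connolly's radiation-hydrodynamic calculation (which moderates these numbers); it assumes a diatomic adiabatic index \\(\\gamma=7/5\\) and a mean molecular weight \\(\\mu\\approx 2.33\\) for H\\({}_{2}\\) plus He.

```python
import math

k_B, m_H = 1.380649e-16, 1.6726e-24   # erg/K, g (cgs units)
gam, mu = 7.0 / 5.0, 2.33             # assumed gas properties

def jump_conditions(v_s, T1, rho1):
    """Adiabatic Rankine-Hugoniot jumps for an ideal gas."""
    c_s = math.sqrt(gam * k_B * T1 / (mu * m_H))   # pre-shock sound speed
    M = v_s / c_s                                  # Mach number
    rho2 = rho1 * (gam + 1) * M**2 / ((gam - 1) * M**2 + 2)
    T2 = T1 * ((2 * gam * M**2 - (gam - 1))
               * ((gam - 1) * M**2 + 2)) / ((gam + 1)**2 * M**2)
    return M, rho2, T2

# Representative nebular parameters quoted below (2.5 AU)
M, rho2, T2 = jump_conditions(v_s=7e5, T1=300.0, rho1=1e-9)
print(f"Mach {M:.1f}: rho2 ~ {rho2:.1e} g/cm^3, T2 ~ {T2:.0f} K")
```

With \\(v_{s}=7\\) km/s this gives a Mach number near 6, a post-shock density within a factor of a few of \\(10^{-8}\\) g/cm\\({}^{3}\\), and a post-shock gas temperature of roughly 2000 K, consistent at the order-of-magnitude level with the numbers quoted below.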
They conclude that shock waves propagating at velocities of \\(v_{s}\\sim 7\\) km/s through gas having initially undisturbed temperatures of \\(T_{1}\\approx 300\\) K, densities \\(\\rho_{1}\\approx 10^{-9}\\) g/cm\\({}^{3}\\), and chondrule concentrations of \\(10^{-8}\\)-\\(10^{-6}\\) precursor particles/cm\\({}^{3}\\) (where a chondrule precursor is a ferromagnesian sphere of radius 0.3 mm and density 3 g/cm\\({}^{3}\\)) can reproduce empirically determined thermal histories for chondrules. These initial environmental parameters are chosen to resemble those of standard models of protoplanetary disks at a heliocentric distance of 2.5 AU [see, e.g., the minimum-mass solar nebula, obtained by augmenting the masses of the planets to solar composition and spreading that material in radius (Weidenschilling 1977)]. On its approach to the shock front, the precursor is heated to temperatures of \\(\\sim\\)1500 K (just below the liquidus) for \\(\\sim\\)1 hour by absorbing radiation emitted by yet hotter chondrules and nebular dust that are further downstream past the front. Immediately after crossing the front, the precursor encounters a supersonic headwind of gas and is frictionally heated to temperatures of \\(\\sim\\)1800 K for a few minutes. Subsequent cooling is slowed by the fact that chondrules remain in thermal contact with hot shocked gas. The time to cool through solidus is given by the time the chondrule takes to travel several optical depths away from the intensely luminous shock front; this cooling timescale is several hours for the parameters above, in accord with experiment. Increasing the precursor concentration enhances the degree of pre-shock radiative heating and thereby increases the peak temperatures that chondrules attain. Since greater chondrule concentrations imply greater rates of collisions among them, and since higher peak temperatures give rise to radial or barred textures as opposed to porphyritic textures in chondrules synthesized in the lab, the model predicts the incidence of compound chondrules (two or more chondrules bound together by a collision in which one or more of the bodies was still plastic) to be higher among radial/barred chondrules than among porphyritic ones. This correlation is, in fact, observed in nature. Can shocks explain naturally the fact that chondrules have nearly uniform sizes of \\(\\sim\\)1 millimeter? Desch and Connolly defer this question to future work, but we may guess that the answer is positive here as well. The maximum size of a chondrule would be set by the same physics that sets the size of a raindrop (P. Goldreich 1997). By balancing the cohesive force of surface tension against the destructive force of turbulent gas drag, we estimate a maximum chondrule radius of \\(\\sim 4\\gamma/(\\rho_{2}v_{s}^{2})\\sim 4\\) mm, where \\(\\rho_{2}\\approx 10^{-8}\\,\\)g/cm\\({}^{3}\\) is the density just past the shock front and \\(\\gamma\\approx 500\\,\\)dyne/cm is the surface tension of molten rock. Molten droplets greater than this size would bifurcate as they plow supersonically through shocked gas. The minimum size would be set by evaporation in the post-shock flow; particles of radius \\(\\lesssim 0.1\\,\\)mm sublimate away if kept at temperatures of \\(\\sim\\)1700 K for a few hours. Nebular shocks may even provide a means of agglomerating chondrules after heating them.
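The raindrop estimate is a one-liner to verify; a sketch with the quoted values (cgs units; variable names are ours):

```python
gamma_s = 500.0     # surface tension of molten rock, dyn/cm
rho2 = 1e-8         # post-shock gas density, g/cm^3
v_s = 7e5           # shock speed, cm/s

# Balance cohesive surface tension against destructive ram pressure
a_max = 4 * gamma_s / (rho2 * v_s**2)     # in cm
print(f"maximum chondrule radius ~ {10 * a_max:.1f} mm")   # ~4 mm, as quoted
```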
At post-shock distances greater than those considered by Desch and Connolly, we expect chondrules and gas to cool back down to their initial temperatures of \\(\\sim\\)300 K. The jump conditions for a one-dimensional, plane-parallel isothermal shock yield a final post-shock gas density \\(\\rho_{3}\\) that is greater than \\(\\rho_{1}\\) by a factor of order \\(mv_{s}^{2}/kT_{1}\\sim 10^{2}\\), where \\(m\\) is the mass of a hydrogen molecule and \\(k\\) is Boltzmann's constant. For standard parameters, \\(\\rho_{3}\\sim 10^{-7}\\,\\)g/cm\\({}^{3}\\)--great enough in the context of standard nebular models at heliocentric distances of 2.5 AU that matter may clump together under its own self-gravity. There is, however, an entire other dimension to the experimental data that the mechanism of nebular shocks does not address: the presence of once-active, short-lived radionuclides such as \\({}^{26}\\)Al in CAIs (Lee et al., 1976). The long-standing hypothesis that these radionuclides were produced in supernovae that externally seeded the solar nebula has been called into question by the discovery of \\({}^{10}\\)Be in CAIs (McKeegan et al., 2000), since \\({}^{10}\\)Be is not a stellar nucleosynthetic product. The recent competing theory of Shu et al. (2000) and Gounelle et al. (2001) that chondrules were irradiated by particle flares in the vicinity of an active early Sun offers a framework for understanding the origin of radioactive nuclides in CAIs, including \\({}^{10}\\)Be. It is not clear, however, that the latter theory can account for the extensive petrographic data on which Desch and Connolly focus. Finally, what is the origin of nebular shocks? Desch and Connolly espouse self-gravitating clumps of matter (the potential building blocks of proto-Jupiter) that orbit at \\(\\sim\\)5 AU and that gravitationally drive density structures within the chondrule forming region at \\(\\sim\\)2.5 AU. This and other proposals are not sufficiently developed to predict the number of times a given chondrule passes through shock fronts. Empirically, the number of times a chondrule is heated must be between 1 and \\(\\sim\\)3, based on observations of chondrule rims (Rubin & Krot, 1996; Hewins, 1996). The proposal for the origin of shock waves by Desch and Connolly may run aground on this point--if the massive bodies at \\(\\sim\\)5 AU are present for the lifetime of the nebula, \\(T\\sim 10^{6}\\) yr, then shock fronts will have processed material along the entire circumference of the asteroid belt \\(T/T_{con}\\sim 3\\times 10^{4}\\) times, where \\(T_{con}\\sim 30\\) yr is the time between conjunctions of a body at 5 AU and a body at 2.5 AU. While Desch and Connolly provide useful, state-of-the-art computations of the thermal histories of particles traversing shock fronts, until a convincing source of shock waves is identified, the problem of chondrule formation will remain unsolved at the zeroth order level.

## References

* Boss, A.P. (1996) A concise guide to chondrule formation models. In _Chondrules and the Protoplanetary Disk_ (eds. R.H. Hewins, R.H. Jones, & E.R.D. Scott), pp. 257-263. Cambridge Univ. Press, Cambridge.
* Connolly, H.C., Jr., & Love, S.G. (1998) The formation of chondrules: Petrologic tests of the shock wave model. _Science_ **280**, 62-67.
* Goldreich, P. (1997) Personal communication.
* Gounelle, M., et al. (2001) Extinct radioactivities and protosolar cosmic rays: self-shielding and light elements. _Astrophys. J._ **548**, 1051-1070.
* Hewins, R.H. (1996) Chondrules and the protoplanetary disk: An overview. In _Chondrules and the Protoplanetary Disk_ (eds. R.H. Hewins, R.H. Jones, & E.R.D. Scott), pp. 3-9. Cambridge Univ. Press, Cambridge.
* Hood, L.L., & Horanyi, M. (1993) The nebula shock wave model for chondrule formation--one-dimensional calculations. _Icarus_ **106**, 179-189.
* Lee, T., Papanastassiou, D.A., & Wasserburg, G.J. (1976) Demonstration of Mg-26 excess in Allende and evidence for Al-26. _Geophys. Res. Lett._ **3**, 41-44.
* McKeegan, K.D., Chaussidon, M., & Robert, F. (2000) Evidence for the in situ decay of 10Be in an Allende CAI and implications for short-lived radioactivity in the early solar system. _31st Annual Lunar and Planetary Science Conference_ **31**, abstract no. 1999.
* Rubin, A.E., & Krot, A.N. (1996) Multiple heating of chondrules. In _Chondrules and the Protoplanetary Disk_ (eds. R.H. Hewins, R.H. Jones, & E.R.D. Scott), pp. 173-180. Cambridge Univ. Press, Cambridge.
* Shu, F.H., et al. (2000) The origin of chondrules and refractory inclusions in chondritic meteorites. _Astrophys. J._ **548**, 1029-1050.
* Weidenschilling, S.J. (1977) The distribution of mass in the planetary system and solar nebula. _Astrophysics and Space Science_ **51**, 153-158.
Beneath the fusion-encrusted surfaces of the most primitive stony meteorites lies not homogeneous rock, but a profusion of millimeter-sized igneous spheres. These chondrules, and their centimeter-sized counterparts, the calcium-aluminum-rich inclusions, comprise more than half of the volume fraction of chondritic meteorites. They are the oldest creations of the solar system. Their chemical composition matches that of the solar photosphere in all but the most volatile of elements, reflecting their condensation from the same pristine gas that formed the sun. In this invited editorial, we review the nebular shock wave model of Desch & Connolly (Meteoritics and Planetary Science 2002, 37, 183) that seeks to explain their origin. While the model succeeds in reproducing the unique petrological signatures of chondrules, the origin of the required shock waves in protoplanetary disks remains a mystery. Outstanding questions are summarized, with attention paid briefly to competing models.
# Hidden Markov model segmentation of hydrological and environmental time series

Ath. Kehagias

## 1 Introduction

In this paper we discuss the following problem of _time series segmentation_: given a time series, divide it into two or more _segments_ (i.e. blocks of contiguous data) such that each segment is homogeneous, but contiguous segments are heterogeneous. Homogeneity / heterogeneity is described in terms of some appropriate statistics of the segments. The term _change point detection_ is also used to describe the problem. Examples of this problem arise in a wide range of fields, including engineering, computer science, biology and econometrics. The segmentation problem is also relevant to hydrology and environmetrics. For instance, in climate change studies it is often desirable to test a time series (such as river flow, rainfall or temperature records) for one or more sudden changes of its mean value. The time series segmentation problem has been studied in the hydrological literature. The reported approaches can be divided into two categories: _sequential_ and _nonsequential_. Sequential approaches often involve _intervention models_; see for example [14] and, for a critique of intervention models, [32]. Most of the nonsequential time series segmentation work appearing in the hydrological literature involves _two_ segments. In other words, the goal is to detect the existence and estimate the location of a _single_ change point. A classical early study of changes in the flow of the Nile appears in [8]. Buishand's work [6, 7] is also often cited. For some case studies see [15, 21, 34]. Bayesian approaches have recently generated considerable interest [27, 28, 29, 30, 32]. It appears that the _multiple_ change point problem has not been studied as extensively. Hubert's segmentation procedure [16, 17] is an important step in this direction. The goodness of a segmentation is evaluated by the sum of squared deviations of the data from the means of their respective segments; in what follows we will use the term _segmentation cost_ for this quantity. Given a time series, Hubert's procedure computes the _minimal cost_ segmentation with \\(K=2,3,\\dots\\) change points. The procedure gradually increases \\(K\\); for every value of \\(K\\) the best segmentation is computed; the procedure is terminated when differences in the means of the obtained segments are no longer statistically significant (as measured by Scheffe's contrast criterion [33]). Hubert mentions that this procedure can segment time series with several tens of terms but is "unable at the present state to tackle series of much more than a hundred terms" because of the combinatorial increase of computational burden [17]. The work reported in this paper has been inspired by Hubert's procedure. Our goal is to develop an algorithm which can locate multiple change points in hydrological and/or environmental time series with several hundred terms or more. To achieve this goal, we adapt some _hidden Markov model (HMM)_ algorithms which have originally appeared in the speech recognition literature. (A survey of the relevant literature is postponed to Section 3.3.) We introduce a HMM of hydrological and/or environmental time series with change points and describe an approximate _Expectation / Maximization (EM) algorithm_ which produces a converging sequence of segmentations. The algorithm also produces a sequence of estimates for the HMM parameters.
Time series of several hundred points can be segmented in a few seconds (see Section 4), hence the algorithm can be used in an interactive manner as an exploratory tool. Even for time series of several thousand points the segmentation time is in the order of seconds. This paper is organized as follows. In Section 2 we review Hubert's formulation of the time series segmentation problem. In Section 3 we formulate the segmentation problem in terms of hidden Markov models and present a segmentation algorithm; also we compare the hidden Markov model approach with that of Hubert. We present some segmentation experiments in Section 4. In Section 5 we summarize our results. Finally, in the Appendix we present an alternative, non-HMM segmentation method, which is more accurate but also slower.

## 2 Time Series Segmentation as an Optimization Problem

In this section we formulate time series segmentation as an optimization problem. We follow Hubert's presentation, but we modify his notation. Given a time series \\(\\mathbf{x}=(x_{1},x_{2},\\dots,x_{T})\\) and a number \\(K\\), a _segmentation_ is a sequence of times \\(\\mathbf{t}=(t_{0},t_{1},\\dots,t_{K})\\) which satisfy \\[0=t_{0}<t_{1}<\\dots<t_{K-1}<t_{K}=T. \\tag{1}\\] The intervals of integers \\([t_{0}+1,t_{1}]\\), \\([t_{1}+1,t_{2}]\\), \\(\\dots\\), \\([t_{K-1}+1,t_{K}]\\) are the _segments_; the times \\(t_{0}\\), \\(t_{1}\\), \\(\\dots\\), \\(t_{K}\\) are the _change points_. \\(K\\), the number of segments, is the _order_ of the segmentation. The length of the \\(k\\)-th segment (for \\(k=1,2,\\dots,K\\)) is denoted by \\(T_{k}=t_{k}-t_{k-1}\\). The following notation is used for a given segmentation \\(\\mathbf{t}=(t_{0},t_{1},\\dots,t_{K})\\). For \\(k=1,2,\\dots,K\\), define \\[\\widehat{\\mu}_{k}=\\frac{\\sum_{t=t_{k-1}+1}^{t_{k}}x_{t}}{T_{k}},\\qquad d_{k}=\\sum_{t=t_{k-1}+1}^{t_{k}}\\left(x_{t}-\\widehat{\\mu}_{k}\\right)^{2}. \\tag{2}\\] Define the _cost_ of segmentation \\(\\mathbf{t}=(t_{0},\\dots,t_{K})\\) by \\[D_{K}(\\mathbf{t})=\\sum_{k=1}^{K}d_{k}=\\sum_{k=1}^{K}\\sum_{t=t_{k-1}+1}^{t_{k}}\\left(x_{t}-\\widehat{\\mu}_{k}\\right)^{2}. \\tag{3}\\] If \\(D_{K}\\) has a small value, then the segments are homogeneous, i.e. the \\(x_{t}\\)'s are close to \\(\\widehat{\\mu}_{k}\\) for \\(k=1,2,\\dots,K\\) and for \\(t=t_{k-1}+1,\\dots,t_{k}\\). Now we can define the best \\(K\\)-th order segmentation \\(\\widehat{\\mathbf{t}}\\) to be the one minimizing \\(D_{K}(\\mathbf{t})\\) and denote the minimal cost by \\(\\widehat{D}_{K}=D_{K}(\\widehat{\\mathbf{t}})\\). Note that for every \\(K\\) we have \\(\\widehat{D}_{K}\\geq\\widehat{D}_{K+1}\\) [16]. Also, there is only one segmentation \\(\\mathbf{t}\\) of order \\(T\\); in this case every time instant \\(t\\) is a segment by itself and \\(D_{T}(\\mathbf{t})=0\\). It can be seen [16] that the number of possible segmentations grows exponentially with \\(T\\). To efficiently search the set of all possible segmentations, Hubert uses a _branch-and-bound_ approach. Even so, the computational load increases excessively with \\(T\\) and this approach is not able currently (in 2000) to segment series of much more than a hundred terms [17]. Minimization of \\(D_{K}\\) can be achieved by several alternative (and faster than branch-and-bound) algorithms. A _dynamic programming_ approach is presented in the Appendix to obtain the globally minimum cost; this is feasible for \\(T\\) on the order of several hundreds and will be reported in greater length in a future publication [20].
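For concreteness, Eqs. (2)-(3) translate directly into code. The following Python sketch is ours (the paper's own implementation is in MATLAB, see Section 4); it uses 0-based indexing so that segment \\(k\\) is the slice \\(x[t_{k-1}{:}t_{k}]\\).

```python
import numpy as np

def segmentation_cost(x, t):
    """Segmentation cost D_K of Eq. (3): the sum over segments of the
    squared deviations from each segment's own mean (Eq. (2))."""
    x = np.asarray(x, dtype=float)
    cost = 0.0
    for k in range(1, len(t)):
        seg = x[t[k - 1]:t[k]]          # x_{t_{k-1}+1}, ..., x_{t_k}
        cost += np.sum((seg - seg.mean()) ** 2)
    return cost

# Toy check: a well-placed change point yields a much lower cost
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(3.0, 1.0, 50)])
print(segmentation_cost(x, (0, 50, 100)))   # change point at the true break
print(segmentation_cost(x, (0, 20, 100)))   # misplaced change point: higher
```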
In this paper a different approach is followed, which is based on HMM's.

## 3 Hidden Markov Models

We now present a HMM formulation of the time series segmentation problem. HMM's have been used for runoff modeling [25] and the possibility of using them for hydrological time series segmentation has been mentioned in [30] but, as far as the author knows, an actual implementation has not been presented yet. On the other hand, we have already mentioned that HMM's are used for segmentation of time series in several other fields (see the discussion in Section 3.3). The term "hidden Markov model" is used to denote a broad class of stochastic processes; here we use a particular and somewhat restricted species of HMM to model a hydrological time series and present an approximate Expectation / Maximization (EM) algorithm to perform _Maximum Likelihood_ (ML) segmentation. In addition to the standard probabilistic interpretation of the algorithm, a numerical optimization point of view is also possible and we use the latter to prove the convergence of the algorithm. Finally we discuss related algorithms and possible extensions.

### HMM's and Hydrological Time Series

We will use a pair of stochastic processes \\((Z_{t},X_{t})\\) to model a hydrological time series with change points. We start by considering a simple example. The annual flow of a river is denoted by \\(X_{t}\\). We assume that, for the years \\(t=1,2,\\dots,t_{1}\\), \\(X_{t}\\) is a normally distributed random variable with mean \\(\\mu_{1}\\) and standard deviation \\(\\sigma\\). In year \\(t_{1}\\) a _transition_ takes place and, for the years \\(t=t_{1}+1,t_{1}+2,\\dots,t_{2}\\), \\(X_{t}\\) is normally distributed with mean \\(\\mu_{2}\\) and standard deviation \\(\\sigma\\). This process continues with transitions taking place in years \\(t_{2}\\), \\(t_{3}\\), \\(\\dots\\), \\(t_{K-1}\\). This process is illustrated in Figure 1. We indicate the _states_ of the river flow by circles and the possible transitions from state to state by arrows; note that the states are _unobservable_. We indicate the observable time series by the double arrows emanating from the states.

**Figure 1 to appear here**

The above mechanism can be modeled by a pair of stochastic processes \\((Z_{t},X_{t})\\) (with \\(t=0,1,2,\\dots\\)) defined as follows.

1. \\(Z_{t}\\), which is the _state process_, is a finite state Markov chain with \\(K\\) states; it has initial probability vector \\(\\pi\\) and transition probability matrix \\(P\\). Hence, for any \\(T\\), the joint probability function of \\(Z_{0},Z_{1},\\dots,Z_{T}\\) is \\[\\Pr(Z_{0}=z_{0},Z_{1}=z_{1},\\dots,Z_{T}=z_{T})=\\pi_{z_{0}}\\cdot P_{z_{0},z_{1}}\\cdot P_{z_{1},z_{2}}\\cdots P_{z_{T-1},z_{T}}. \\tag{4}\\] For the specific example discussed above, it will also be true that: (a) \\(\\pi_{1}=1\\), \\(\\pi_{k}=0\\) for \\(k=2,3,\\dots,K\\), (b) \\(P_{k,j}=0\\) for \\(k=1,2,\\dots,K\\) and all \\(j\\) other than \\(k\\), \\(k+1\\). The parameters of this process are \\(K\\) and \\(P\\).

2. \\(X_{t}\\), which is the _observation process_, is a sequence of _conditionally independent_, normally distributed random variables with mean \\(\\mu_{Z_{t}}\\) and standard deviation \\(\\sigma\\).
More precisely, for every \\(t\\), the joint probability density of \\(X_{1},X_{2},\\dots,X_{t}\\) conditioned on \\(Z_{1},Z_{2},\\dots,Z_{t}\\) is \\[f_{X_{1},\\dots,X_{t}|Z_{1},\\dots,Z_{t}}(x_{1},\\dots,x_{t}|z_{1},\\dots,z_{t})=\\prod_{s=1}^{t}e^{-\\left(x_{s}-\\mu_{z_{s}}\\right)^{2}/2\\sigma^{2}}. \\tag{5}\\] The parameters of this process are \\(\\mu_{1}\\), \\(\\mu_{2}\\), \\(\\dots\\), \\(\\mu_{K}\\) and \\(\\sigma\\). We will often use the notation \\(\\mathbf{M}=[\\mu_{1},\\mu_{2},\\dots,\\mu_{K}]\\). The \\((Z_{t},X_{t})\\) pair is a HMM, in particular a _left-to-right continuous HMM_ [31]. "Left-to-right" refers to the structure of state transitions (as depicted in Figure 1) and "continuous" refers to the fact that the observation process is continuous valued. The model parameters are \\(K\\), \\(P\\), \\(\\mathbf{M}\\), \\(\\sigma\\). There is a one-to-one correspondence between state sequences \\(\\mathbf{z}=(z_{1},z_{2},\\dots,z_{T})\\) and segmentations \\(\\mathbf{t}=(t_{0},t_{1},\\dots,t_{K^{\\prime}})\\). For example, given a particular \\(\\mathbf{z}\\), we obtain the corresponding \\(\\mathbf{t}\\) by locating the times \\(t_{k}\\) such that \\(z_{t_{k}}\\neq z_{t_{k}+1}\\), for \\(k=1,2,\\dots,K^{\\prime}-1\\) (and setting \\(t_{0}=0\\) and \\(t_{K^{\\prime}}=T\\)). The postulated Markov chain only allows left-to-right transitions, hence \\(K^{\\prime}\\leq K\\), i.e. there will be _at most_ \\(K\\) segments, and every segment will be uniquely associated with a state. The _conditional likelihood_ of a state sequence \\(\\mathbf{z}\\) (_given_ an observation sequence \\(\\mathbf{x}\\)) is denoted by \\[L^{1}_{K,T}(\\mathbf{z}|\\mathbf{x};P,\\mathbf{M},\\sigma)=L^{1}_{K,T}(z_{1},\\dots,z_{T}|x_{1},\\dots,x_{T};P,\\mathbf{M},\\sigma) \\tag{6}\\] and the _joint likelihood_ of a state sequence \\(\\mathbf{z}\\) _and_ an observation sequence \\(\\mathbf{x}\\) is denoted by \\[L^{2}_{K,T}(\\mathbf{z},\\mathbf{x};P,\\mathbf{M},\\sigma)=L^{2}_{K,T}(z_{1},\\dots,z_{T},x_{1},\\dots,x_{T};P,\\mathbf{M},\\sigma). \\tag{7}\\] \\(L^{1}_{K,T}\\) and \\(L^{2}_{K,T}\\) are understood as functions of \\(\\mathbf{z}=(z_{1},z_{2},\\dots,z_{T})\\); the observations \\(\\mathbf{x}=(x_{1},x_{2},\\dots,x_{T})\\), the number of segments \\(K\\), and the length of the time series \\(T\\), as well as the parameters \\(P\\), \\(\\mathbf{M}\\), \\(\\sigma\\), are assumed _fixed_. In place of \\(T\\) any \\(t\\) can be used, to indicate the likelihood of the subsequence \\((z_{1},z_{2},\\dots,z_{t})\\) given \\((x_{1},x_{2},\\dots,x_{t})\\) etc. For example, we can write \\[L^{2}_{K,t}(z_{1},\\dots,z_{t},x_{1},\\dots,x_{t};P,\\mathbf{M},\\sigma). \\tag{8}\\] Note also that \\[L^{1}_{K,T}=A\\cdot L^{2}_{K,T} \\tag{9}\\] where \\(A\\) is a quantity independent of \\((z_{1},z_{2},\\dots,z_{T})\\). Finally, from (4), (5) we have \\[L^{2}_{K,T}(\\mathbf{z},\\mathbf{x};P,\\mathbf{M},\\sigma)=\\prod_{t=1}^{T}\\left(P_{z_{t-1},z_{t}}\\cdot e^{-\\left(x_{t}-\\mu_{z_{t}}\\right)^{2}/2\\sigma^{2}}\\right), \\tag{10}\\] where \\(z_{0}=1\\), according to the previously stated assumption.

### The Segmentation Algorithm

The ML segmentation \\(\\widehat{\\mathbf{t}}\\) can be obtained from the ML state sequence \\(\\widehat{\\mathbf{z}}=(\\widehat{z}_{1},\\widehat{z}_{2},\\dots,\\widehat{z}_{T})\\).
Since state sequences are unobservable, we will estimate \\(\\widehat{\\mathbf{z}}\\) in terms of the observable sequence \\(\\mathbf{x}=(x_{1},x_{2},\\dots,x_{T})\\) and the parameters \\(K\\), \\(P\\), \\(\\mathbf{M}\\), \\(\\sigma\\). Note that in practice \\(K\\), \\(P\\), \\(\\mathbf{M}\\), \\(\\sigma\\) will also be unknown. Hence the computation of the maximum likelihood HMM segmentation must be divided into two subtasks: (a) estimating the HMM parameters and (b) computing the actual segmentation. We follow the standard approach used in HMM problems: a parameter estimation phase is followed by a time series segmentation phase and the process is repeated until convergence. This is the Expectation / Maximization (EM) approach. First we discuss estimation and segmentation in more detail; then we will return to a discussion of the EM approach.

#### 3.2.1 Parameter Estimation

Suppose, for the time being, that a segmentation \\(\\mathbf{t}=(t_{0},t_{1},\\dots,t_{K})\\) is given. A reasonable estimate of \\(\\mathbf{M}=[\\mu_{1},\\mu_{2},\\dots,\\mu_{K}]\\), _dependent on the given segmentation_, is (for \\(k=1,2,\\dots,K\\)) \\[\\widehat{\\mu}_{k}=\\frac{\\sum_{t=t_{k-1}+1}^{t_{k}}x_{t}}{T_{k}}. \\tag{11}\\] Similarly we could use the following _segmentation-dependent_ estimates of \\(\\sigma\\) (for \\(k=1,2,\\dots,K\\)) \\[\\widehat{\\sigma}_{k}=\\sqrt{\\frac{\\sum_{t=t_{k-1}+1}^{t_{k}}\\left(x_{t}-\\widehat{\\mu}_{k}\\right)^{2}}{T_{k}-1}}. \\tag{12}\\] However, to maintain compatibility with Hubert's approach, we will use the _segmentation-independent_ estimate \\[\\widehat{\\sigma}=\\sqrt{\\frac{\\sum_{t=1}^{T}\\left(x_{t}-\\widehat{\\mu}\\right)^{2}}{T-1}}=\\sqrt{\\frac{\\sum_{k=1}^{K}\\sum_{t=t_{k-1}+1}^{t_{k}}\\left(x_{t}-\\widehat{\\mu}\\right)^{2}}{T-1}}, \\tag{13}\\] where \\[\\widehat{\\mu}=\\frac{\\sum_{t=1}^{T}x_{t}}{T}.\\] Let us now turn to the transition probability matrix \\(P\\). In a left-to-right HMM, for \\(k=1,2,\\dots,K\\) and all \\(j\\) different from \\(k\\) and \\(k+1\\), we will have \\(P_{k,j}=0\\). Also, for \\(k=1,2,\\dots,K-1\\) we will have \\(P_{k,k+1}=1-P_{k,k}\\). Hence \\(P\\) only has \\(K-1\\) free parameters, namely \\(P_{1,1}\\), \\(P_{2,2}\\), \\(\\dots\\), \\(P_{K-1,K-1}\\). These could be estimated from the given segmentation. However, in this paper we use a simpler approach. Namely, we assume \\[P=\\left[\\begin{array}{cccccc}p&1-p&0&\\cdots&0&0\\\\ 0&p&1-p&\\cdots&0&0\\\\ \\vdots&&&\\ddots&&\\vdots\\\\ 0&0&0&\\cdots&p&1-p\\\\ 0&0&0&\\cdots&0&1\\end{array}\\right]. \\tag{14}\\] Hence \\(P\\) is determined in terms of a single parameter \\(p\\), which will be chosen a priori, rather than estimated. We have found by numerical experimentation that the exact value of \\(p\\) is not critical; in all the examples of Section 4, the segmentation algorithm performs very well using \\(p\\) in the range [0.85, 0.95]. Finally, we must make a choice regarding the number of segments \\(K\\). We will use Hubert's approach, and take a sequence of increasing values \\(K=2,3,\\dots\\) until a value of \\(K\\) is reached which yields statistically nonsignificant segmentations (statistical significance is evaluated by Scheffe's contrast criterion [16, 33]).
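These reestimation formulas are equally direct to implement; a minimal sketch (names are ours, continuing the conventions of the previous snippet):

```python
import numpy as np

def estimate_means(x, t):
    """Segmentation-dependent means of Eq. (11)."""
    return np.array([x[t[k - 1]:t[k]].mean() for k in range(1, len(t))])

def estimate_sigma(x):
    """Segmentation-independent scale estimate of Eq. (13)."""
    return np.asarray(x).std(ddof=1)

def transition_matrix(K, p=0.9):
    """Left-to-right transition matrix of Eq. (14)."""
    P = np.zeros((K, K))
    for k in range(K - 1):
        P[k, k], P[k, k + 1] = p, 1.0 - p
    P[K - 1, K - 1] = 1.0     # the final state is absorbing
    return P
```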
#### 3.2.2 Segmentation

Given observations \\(\\mathbf{x}=(x_{1},x_{2},\\dots,x_{T})\\) and assuming the parameters \\(K\\), \\(P\\), \\(\\mathbf{M}\\), \\(\\sigma\\) to be known, the _Maximum Likelihood (ML)_ state sequence is the \\(\\widehat{\\mathbf{z}}=(\\widehat{z}_{1},\\widehat{z}_{2},\\dots,\\widehat{z}_{T})\\) which maximizes \\(L_{K,T}^{1}(\\mathbf{z}|\\mathbf{x};P,\\mathbf{M},\\sigma)\\) as a function of \\(\\mathbf{z}\\). The ML segmentation \\(\\widehat{\\mathbf{t}}=(\\widehat{t}_{0},\\widehat{t}_{1},\\dots,\\widehat{t}_{K^{\\prime}})\\) is obtained from \\(\\widehat{\\mathbf{z}}\\). It will be seen in Section 3.2.4 that, under certain circumstances, \\(\\widehat{\\bf z}\\) also minimizes the segmentation cost \\(D_{K}\\) defined in Section 2. \\(\\widehat{\\bf z}=(\\widehat{z}_{1},\\widehat{z}_{2},\\dots,\\widehat{z}_{T})\\) can be found by the _Viterbi algorithm_ [11], a computationally efficient dynamic programming approach. In view of (9) we have \\[(\\widehat{z}_{1},\\widehat{z}_{2},\\dots,\\widehat{z}_{T})=\\arg\\max_{z_{1},z_{2},\\dots,z_{T}}L_{K,T}^{2}(z_{1},\\dots,z_{T},x_{1},\\dots,x_{T};P,{\\bf M},\\sigma). \\tag{15}\\] Now, for \\(t=1,2,\\dots,T\\) and \\(k=1,2,\\dots,K\\) define \\[q_{k,t}=\\max_{z_{1},z_{2},\\dots,z_{t-1}}L_{K,t}^{2}(z_{1},\\dots,z_{t-1},k,x_{1},\\dots,x_{t};P,{\\bf M},\\sigma). \\tag{16}\\] It can be shown by standard dynamic programming arguments [5] that both \\(\\widehat{\\bf z}=(\\widehat{z}_{1},\\widehat{z}_{2},\\dots,\\widehat{z}_{T})\\) and the \\(q_{k,t}\\)'s of (16) can be computed recursively as follows.

**Viterbi Algorithm**

**Input**: The time series \\(x_{1},x_{2},\\dots,x_{T}\\); the parameters \\(K\\), \\(P\\), \\({\\bf M}\\) and \\(\\sigma\\).

**Forward Recursion**

Set \\(q_{1,0}=1\\), \\(q_{2,0}=q_{3,0}=\\dots=q_{K,0}=0\\).
For \\(t=1,2,\\dots,T\\)
  For \\(k=1,2,\\dots,K\\)
    \\(q_{k,t}=\\max_{1\\leq j\\leq K}\\left(q_{j,t-1}\\cdot P_{j,k}\\cdot e^{-(x_{t}-\\mu_{k})^{2}/2\\sigma^{2}}\\right)\\)
    \\(r_{k,t}=\\arg\\max_{1\\leq j\\leq K}\\left(q_{j,t-1}\\cdot P_{j,k}\\cdot e^{-(x_{t}-\\mu_{k})^{2}/2\\sigma^{2}}\\right)\\)
  End
End

**Backtracking**

\\(\\widehat{L}_{K,T}^{2}=\\max_{1\\leq k\\leq K}\\left(q_{k,T}\\right)\\)
\\(\\widehat{z}_{T}=\\arg\\max_{1\\leq k\\leq K}\\left(q_{k,T}\\right)\\)
For \\(t=T,T-1,\\dots,2\\)
  \\(\\widehat{z}_{t-1}=r_{\\widehat{z}_{t},t}\\)
End

Upon completion of the forward recursion, \\(\\widehat{L}_{K,T}^{2}\\), the maximum value of \\(L_{K,T}^{2}\\), is obtained. The backtracking phase produces the state sequence which maximizes \\(L_{K,T}^{2}\\) (and hence also \\(L_{K,T}^{1}\\)). Execution time is of order O\\((T\\cdot K^{2})\\), which is _linear_ (rather than exponential) in the length of the time series \\(T\\). This makes the algorithm computationally feasible even for long time series. For more details on the Viterbi algorithm see [11].
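In code, the recursion is most safely run in the log domain (as the Computational Issues paragraph of Section 3.2.4 recommends). The sketch below is ours; it also includes the driver loop that the next subsection formalizes, with the likelihood-based stopping rule replaced by the simpler check that the state sequence has stopped changing, and a deterministic (equal-segment) initialization in place of a random one.

```python
import numpy as np

def viterbi(x, P, mu, sigma):
    """Log-domain Viterbi recursion, Eqs. (15)-(16); returns the ML state
    sequence z_1..z_T as 0-based state indices (state 0 is state no. 1)."""
    T, K = len(x), len(mu)
    logP = np.where(P > 0, np.log(np.maximum(P, 1e-300)), -np.inf)
    loglik = -((np.asarray(x)[:, None] - np.asarray(mu)[None, :]) ** 2
               ) / (2 * sigma**2)
    q = np.empty((T, K))
    r = np.zeros((T, K), dtype=int)
    q[0] = logP[0] + loglik[0]                # z_0 = state no. 1
    for t in range(1, T):
        scores = q[t - 1][:, None] + logP     # scores[j, k]: from j to k
        r[t] = scores.argmax(axis=0)
        q[t] = scores.max(axis=0) + loglik[t]
    z = np.empty(T, dtype=int)
    z[-1] = q[-1].argmax()
    for t in range(T - 1, 0, -1):             # backtracking
        z[t - 1] = r[t, z[t]]
    return z

def hmm_segment(x, K, p=0.9, max_iter=50):
    """EM-style loop of Section 3.2.3: alternate mean reestimation (Eq. (11))
    with Viterbi segmentation, using the helpers sketched in Section 3.2.1."""
    x = np.asarray(x, dtype=float)
    P = transition_matrix(K, p)
    sigma = estimate_sigma(x)                 # Eq. (13), computed once
    z = (np.arange(len(x)) * K) // len(x)     # equal segments as initial guess
    for _ in range(max_iter):
        mu = np.array([x[z == k].mean() if np.any(z == k) else x.mean()
                       for k in range(K)])
        z_new = viterbi(x, P, mu, sigma)
        if np.array_equal(z_new, z):
            break
        z = z_new
    return z
```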
#### 3.2.3 Combined Parameter Estimation and Segmentation

Parameter estimation and segmentation can be combined in an algorithm which maximizes the likelihood viewed as a function of _both_ the state sequence \\(\\mathbf{z}=(z_{1},z_{2},\\dots,z_{T})\\) and the parameters \\(\\mathbf{M}\\). The algorithm presented below is an iterative _Expectation / Maximization_ (EM) algorithm [9] which produces a converging sequence of segmentations.

**HMM Segmentation Algorithm**

**Input:** The time series \\(\\mathbf{x}=(x_{1},x_{2},\\dots,x_{T})\\); the parameters \\(K\\), \\(P\\); a termination variable \\(\\varepsilon\\).

Choose randomly a state sequence \\(\\widehat{\\mathbf{z}}^{(0)}=(z_{1}^{(0)},z_{2}^{(0)},\\dots,z_{T}^{(0)})\\).
Compute \\(\\widehat{\\sigma}\\) from (13).
For \\(i=1,2,\\dots\\)
  Compute \\(\\mathbf{t}^{(i)}\\) from \\(\\widehat{\\mathbf{z}}^{(i-1)}\\).
  Compute \\(\\widehat{\\mathbf{M}}^{(i)}\\) from \\(\\mathbf{t}^{(i)}\\) and (11).
  Compute \\(\\widehat{\\mathbf{z}}^{(i)}\\) by the Viterbi algorithm using \\(\\mathbf{x}\\), \\(K\\), \\(P\\), \\(\\widehat{\\mathbf{M}}^{(i)}\\) and \\(\\widehat{\\sigma}\\).
  If \\(|L^{2}_{K,T}(\\widehat{\\mathbf{z}}^{(i)},\\mathbf{x};P,\\widehat{\\mathbf{M}}^{(i)},\\widehat{\\sigma})-L^{2}_{K,T}(\\widehat{\\mathbf{z}}^{(i-1)},\\mathbf{x};P,\\widehat{\\mathbf{M}}^{(i-1)},\\widehat{\\sigma})|<\\varepsilon\\)
    Set \\(\\widehat{\\mathbf{z}}=\\widehat{\\mathbf{z}}^{(i)}\\) and exit the loop.
  EndIf
End

In Section 3.2.4 we will show that the above algorithm is a very close approximation to an EM algorithm and that, under certain conditions, every iteration increases the likelihood function. In all the examples presented in Section 4 the algorithm converges to the _global_ maximum with very few iterations (typically 3 or 4). In other words, the outer loop of the algorithm is executed only a few times; in each execution we perform a parameter reestimation according to (11) (with execution time O(\\(T\\))) and a segmentation by the Viterbi algorithm (with execution time O(\\(T\\cdot K^{2}\\))). Hence the total execution time for a fixed \\(K\\) value is O(\\(T\\cdot K^{2}\\)). For a complete segmentation procedure the above algorithm is run for a sequence of increasing values \\(K=2,3,\\dots\\). First the algorithm is used to obtain the ML segmentation of order \\(K=2\\); the difference of the means of the two segments is tested for statistical significance by the Scheffe criterion (for details see [16] and [33]). If the difference is not significant, then it is concluded that the entire time series consists of a single segment. If the difference is significant, the algorithm is run with \\(K=3\\) and the Scheffe test is applied to the resulting segments. The process is continued until, for some value of \\(K\\), a segmentation is obtained which fails the Scheffe test (or until we reach \\(K=T\\), an unlikely case). The use of Scheffe's contrast criterion to determine the true value of \\(K\\) is somewhat problematic. This point is discussed in some detail in [16]. Many methods for the determination of \\(K\\) have been proposed in the literature, but none of these completely resolves the problem. In cases of doubt, a pragmatic approach would be to use human judgement to evaluate segmentations with different \\(K\\)'s. In the case of hydrological and environmental time series, which involve a rather small number of segments, this is relatively easy. The short execution time of the segmentation algorithm favors this approach, since experimentation in an "interactive" mode is feasible.

#### 3.2.4 Convergence

The goal of this section is to show that, for a fixed \\(K\\), every iteration of the HMM segmentation algorithm increases the likelihood; since the likelihood is bounded above by one, this also implies that the algorithm converges. Two approaches can be used. The first approach is based upon the probabilistic interpretation of the algorithm; since this is a routinely applied analysis of EM algorithms, it will be presented only in outline.
In the second approach, the segmentation algorithm is viewed from a numerical optimization point of view and convergence is proved without using any probabilistic assumptions; furthermore this approach shows clearly the connection of our segmentation algorithm to Hubert's procedure.

**Probabilistic Approach**. As explained in [9], the basic ingredient of the EM family of algorithms is the iterative application of an expectation step followed by a likelihood maximization step. In our case the expectation step consists in estimating \\(\\mathbf{M}^{(i)}\\) by (11) and the maximization step consists in finding \\(\\mathbf{z}^{(i)}\\) by the Viterbi algorithm. While the Viterbi algorithm computes exactly the global maximum of the likelihood (viewed as a function of \\(\\mathbf{z}\\) only!), the estimation step used in this paper is approximate. The exact step would involve computing estimates of \\(\\widehat{\\mu}_{1}\\), \\(\\widehat{\\mu}_{2}\\), \\(\\dots\\), \\(\\widehat{\\mu}_{K}\\) for every possible segmentation and then combining these estimates in a sum weighted by the respective probability of each segmentation (a similar approach should be used for \\(\\sigma\\), using the estimates of (12)). This approach is used in [10] and elsewhere; while it is computationally more expensive than the approach used here, it is still viable. At any rate, in most cases the two approaches yield very similar results. If it is assumed that the estimate of (11) is a close approximation to the maximum likelihood estimate of \\(\\mathbf{M}\\), then convergence can be established by a standard EM argument presented in [9, 24] and several other places. This argument shows that a certain cross entropy \\(Q(\\mathbf{z}^{(i)},\\mathbf{z}^{(i-1)})\\) is decreased by every iteration of an EM algorithm. Since \\(Q\\) is always nonnegative, it must converge to a nonnegative number, and this suffices for the algorithm to terminate. Furthermore, by relating \\(Q(\\mathbf{z}^{(i)},\\mathbf{z}^{(i-1)})\\) to the likelihood, it can be shown that the sequence \\(L_{K,T}(\\mathbf{z}^{(i)})\\) is monotonically increasing.

**Numerical Approach**. In what follows we will consider \\(K\\), \\(P\\), \\(\\mathbf{x}\\), \\(\\sigma\\) to be fixed. We will denote the set of all possible state sequences by \\(\\Phi\\) and the set of all state sequences with \\(K\\) transitions by \\(\\Phi_{K}\\); we will also use the standard notation \\(\\mathbf{R}^{K}\\) for the set of all \\(K\\)-dimensional real vectors. Taking the negative logarithm of (10) we obtain \\[-\\log\\left[L_{K,T}^{2}(\\mathbf{z},\\mathbf{x};P,\\mathbf{M},\\sigma)\\right]=-\\sum_{t=1}^{T}\\log\\left(P_{z_{t-1},z_{t}}\\right)+\\sum_{t=1}^{T}\\frac{\\left(x_{t}-\\mu_{z_{t}}\\right)^{2}}{2\\sigma^{2}}. \\tag{17}\\] We define \\(\\phi(\\mathbf{z})=\\) "number of times \\(z_{t-1}\\neq z_{t}\\)"; in other words, \\(\\phi(\\mathbf{z})\\) is the number of transitions in the state sequence \\(\\mathbf{z}\\). If we limit ourselves to state sequences \\(\\mathbf{z}\\in\\Phi_{K}\\), then obviously \\(\\phi(\\mathbf{z})=K\\).
Now, for all \\(\\mathbf{z}\\in\\Phi_{K}\\), (17) becomes \\[-\\log\\left[L_{K,T}^{2}(\\mathbf{z},\\mathbf{x};P,\\mathbf{M},\\sigma)\\right] =-\\left((T-\\phi(\\mathbf{z}))\\cdot\\log\\left(p\\right)+\\phi(\\mathbf{z})\\cdot\\log\\left(1-p\\right)\\right)+\\sum_{t=1}^{T}\\frac{\\left(x_{t}-\\mu_{z_{t}}\\right)^{2}}{2\\sigma^{2}} \\tag{18}\\] \\[=-\\left((T-K)\\cdot\\log\\left(p\\right)+K\\cdot\\log\\left(1-p\\right)\\right)+\\sum_{t=1}^{T}\\frac{\\left(x_{t}-\\mu_{z_{t}}\\right)^{2}}{2\\sigma^{2}} \\tag{19}\\] \\[=C(T,K,P)+\\sum_{t=1}^{T}\\frac{\\left(x_{t}-\\mu_{z_{t}}\\right)^{2}}{2\\sigma^{2}}, \\tag{20}\\] where \\(C(T,K,P)=-\\left[(T-K)\\cdot\\log\\left(p\\right)+K\\cdot\\log\\left(1-p\\right)\\right]\\). Now we define the function \\[J(\\mathbf{z},\\mathbf{M})=\\sum_{t=1}^{T}\\left(x_{t}-\\mu_{z_{t}}\\right)^{2} \\tag{21}\\] and note that \\[J(\\mathbf{z},\\mathbf{M})=-2\\sigma^{2}\\cdot\\left(\\log\\left[L_{K,T}^{2}(\\mathbf{z},\\mathbf{x};P,\\mathbf{M},\\sigma)\\right]+C(T,K,P)\\right). \\tag{22}\\] Note that, for simplicity of notation, we write \\(J(\\mathbf{z},\\mathbf{M})\\) as a function only of \\(\\mathbf{z}\\), \\(\\mathbf{M}\\); the quantities \\(T\\), \\(K\\), \\(P\\), \\(\\mathbf{x}\\), \\(\\sigma\\) can be considered fixed. Now consider a run of the segmentation algorithm which produces a sequence \\(\\mathbf{z}^{(0)}\\), \\(\\mathbf{z}^{(1)}\\), \\(\\mathbf{z}^{(2)}\\), \\(\\dots\\), \\(\\mathbf{z}^{(i)}\\), \\(\\dots\\). _Suppose that for every \\(i\\) we have \\(\\mathbf{z}^{(i)}\\in\\Phi_{K}\\)._ By the reestimation formula for \\(\\mathbf{M}^{(i)}\\) we will have for every \\(i\\): \\[\\forall\\mathbf{M}\\in\\mathbf{R}^{K}:J(\\mathbf{z}^{(i-1)};\\mathbf{M}^{(i)})\\leq J(\\mathbf{z}^{(i-1)};\\mathbf{M}). \\tag{23}\\] Furthermore, note that the Viterbi algorithm yields the global maximum of the likelihood _as a function of_ \\(\\mathbf{z}\\). Hence, from (22) and the reestimation formula for \\(\\mathbf{z}^{(i)}\\) we will have for every \\(i\\): \\[\\forall\\mathbf{z}\\in\\Phi_{K}:J(\\mathbf{z}^{(i)};\\mathbf{M}^{(i)})\\leq J(\\mathbf{z};\\mathbf{M}^{(i)}). \\tag{24}\\] Now, using first (24) and then (23), we obtain \\[J(\\mathbf{z}^{(i)};\\mathbf{M}^{(i)})\\leq J(\\mathbf{z}^{(i-1)};\\mathbf{M}^{(i)})\\leq J(\\mathbf{z}^{(i-1)};\\mathbf{M}^{(i-1)}) \\tag{25}\\] and, from (25) and (22), \\[L_{K,T}^{2}(\\mathbf{z}^{(i)},\\mathbf{x};P,\\mathbf{M}^{(i)},\\sigma)\\geq L_{K,T}^{2}(\\mathbf{z}^{(i-1)},\\mathbf{x};P,\\mathbf{M}^{(i-1)},\\sigma). \\tag{26}\\] Hence, _if for every \\(i\\) we have \\(\\mathbf{z}^{(i)}\\in\\Phi_{K}\\)_, then the sequence \\(\\left\\{L_{K,T}^{2}(\\mathbf{z}^{(i)},\\mathbf{x};P,\\mathbf{M}^{(i)},\\sigma)\\right\\}_{i=0}^{\\infty}\\) is increasing; since it is also bounded from above by one, it must converge. It follows that the HMM segmentation algorithm produces a sequence of segmentations with increasing and convergent likelihood; from convergence of the likelihood we also conclude that the algorithm will eventually terminate. Furthermore, if \\(\\mathbf{t}^{(i)}\\) is the segmentation obtained from \\(\\mathbf{z}^{(i)}\\), it is easy to check that \\[D_{K}(\\mathbf{t}^{(i)})=J(\\mathbf{z}^{(i)};\\mathbf{M}^{(i)}). \\tag{27}\\] From (23), (27) it follows that _Hubert's segmentation cost is decreased in every iteration_ of the HMM segmentation algorithm. For the above analysis to hold, we have required that \\(\\mathbf{z}^{(i)}\\in\\Phi_{K}\\) for every \\(i\\).
This condition is easy to check; it is usually satisfied in practice; and it can be _enforced_ by choosing the parameter \\(p\\) to be not too close to \\(1\\) (if \\(p\\simeq 1\\), then the cost of state transitions is very high and transitions are avoided). One way to interpret the above analysis is the following: using an appropriate value of \\(p\\), the segmentation algorithm presented here becomes an iterative, approximate way to find Hubert's optimal segmentation. The approximation is usually very good, as will be seen in Section 4. This interpretation is completely nonprobabilistic and does not depend on the use of the hidden Markov model.

**Computational Issues**. We must also mention that successful implementation of the Viterbi algorithm requires a normalization of the \\(q_{k,t}\\)'s to avoid numerical underflow; alternatively one can work with the logarithms of the \\(q_{k,t}\\)'s and perform additions rather than multiplications.

### Discussion and Extensions

An extensive mathematical, statistical and engineering literature covers both the theoretical and applied aspects of HMM's. The reader can use [10, 31] as starting points for a broader overview of the subject. EM-like algorithms for HMM's were introduced in [4, 3, 2, 24]. The EM family of algorithms was introduced in great generality in [9]; work on HMM's also appears in the econometrics [13, 23], as well as in the biological [22] literature. These references are merely starting points; the literature is very extensive. As already mentioned, the EM segmentation algorithm used here is a variation of algorithms which are well-established in the field of speech recognition; for example see [18, 19]. Taking into account the extensive HMM literature, as well as various ideas reported in the hydrological literature, the algorithm of Section 3.2 can be extended in several directions.

1. The assumption that the observations are normally distributed is not essential. Other forms of probability density can be used in (10). Similarly, by a simple modification of (10) the algorithm can handle vector valued observations.

2. A basic idea of the algorithm is that each segment must be _homogeneous_. Assuming that the observations within a segment are generated independently and normally, segment homogeneity is evaluated by the deviation of \\(x_{t_{k-1}+1},x_{t_{k-1}+2},\\dots,x_{t_{k}}\\) from the segment mean \\(\\widehat{\\mu}_{k}\\). But alternative assumptions can be used. For example, assume that the observations are generated by an autoregressive mechanism, i.e. that, for \\(t=t_{k-1}+1,t_{k-1}+2,\\dots,t_{k}\\) and \\(k=1,2,\\dots,K\\), we have \\[x_{t}=a_{0,k}+a_{1,k}x_{t-1}+a_{2,k}x_{t-2}+\\dots+a_{l,k}x_{t-l}+\\epsilon_{t} \\tag{28}\\] (where \\(\\epsilon_{t}\\) is a white noise term). The segmentation algorithm can be used within this framework. In this case the reestimation phase computes the AR coefficients \\(a_{1,k}\\), \\(a_{2,k}\\), \\(\\dots\\), \\(a_{l,k}\\), which can be estimated from \\(x_{t_{k-1}+1}\\), \\(x_{t_{k-1}+2}\\), \\(\\dots\\), \\(x_{t_{k}}\\) using a least squares fitting algorithm. This approach is used in Section 4.3 to fit a HMM autoregressive model to global temperature data.

3. Similarly, it may be assumed that the observations are generated by a polynomial regression of the form (for \\(t=t_{k-1}+1,t_{k-1}+2,\\dots,t_{k}\\) and \\(k=1,2,\\dots,K\\)) \\[x_{t}=a_{0,k}+a_{1,k}\\cdot(t-t_{k-1})+\\dots+a_{l,k}\\cdot(t-t_{k-1})^{l}+\\epsilon_{t} \\tag{29}\\] where \\(\\epsilon_{t}\\) is a noise term.
Again, the coefficients \\(a_{0,k}\\), \\(a_{1,k}\\), \\(\\dots\\), \\(a_{l,k}\\) can be computed at every reestimation phase by a least squares fitting algorithm. Additional constraints can be used to enforce continuity across segments. In the case of 1st order polynomials there are only two coefficients, \\(a_{0,k}\\), \\(a_{1,k}\\), which are determined by the continuity assumptions; the iterative reestimation of the change points can still be performed. This case may be of interest for detection of trends.

4. It has been mentioned in Section 3.2.1 that \\(P\\) can also be reestimated in every iteration of the EM algorithm. Preserving the left-to-right structure implies that for \\(k=1,2,\\dots,K\\) and for all \\(j\\) different from \\(k\\) and \\(k+1\\), we have \\(P_{k,j}=0\\); furthermore, for \\(k=1,2,\\dots,K-1\\) we have \\(P_{k,k+1}=1-P_{k,k}\\). The \\(P_{k,k}\\) parameters can be estimated by \\(\\widehat{P}_{k,k}=\\frac{T_{k}}{T_{k}+1}\\). However, some preliminary experiments indicate that this approach does not yield improved segmentations.

5. On the other hand, the treatment of the state transitions can be modified in a more substantial manner by dropping the left-to-right assumption. In the current model each state of the Markov chain corresponds to a single segment and, because of the left-to-right structure, it is visited at most once. An alternate approach would be to assign some physical significance to the states. For instance, states could be chosen to correspond to climate regimes such as "dry", "wet" etc. In this case a state could be visited more than once. This approach allows the choice of models which incorporate expert knowledge about the evolution of climate regimes. On the other hand, if the left-to-right structure is dropped, the number of free parameters in the \\(P\\) matrix increases. These parameters could be estimated (conditional on a particular state sequence) by \\[\\widehat{P}_{kj}=\\frac{\\text{no. of times that }z_{t}=k\\text{ and }z_{t+1}=j}{\\text{no. of times that }z_{t}=k}. \\tag{30}\\] The enhancements of arbitrary transition structure and transition probability estimation are easily accommodated by our algorithm.

## 4 Experiments

In this section we evaluate the segmentation algorithm by numerical experiments. The first experiment involves an annual river discharge time series which contains 86 points. The second example involves the reconstructed annual mean global temperature time series and contains 282 points. Both of these examples involve segmentation by minimization of total deviation from segment means. The third example again involves the annual mean global temperature time series, but performs segmentation by minimization of autoregressive prediction error. The fourth example involves artificially generated time series with up to 1500 points.

### Annual Discharge of the Senegal River

In this experiment we use the time series of the Senegal river annual discharge data, measured at the Bakel station for the years 1903-1988. The length of the time series is 86. The same data set has been used by Hubert [16, 17]. The goal is to find the segmentation which is optimal with respect to total deviation from the segment means, has the highest possible order and is statistically significant according to Scheffe's criterion. We run the segmentation algorithm for increasing values of \\(K\\). In the experiments reported here we have always used \\(p=0.9\\) (similar results are obtained for other values of \\(p\\) in the interval [0.85, 0.95]).
For every value of \\(K\\), convergence is achieved by the 3rd or 4th iteration of the algorithm. The optimal segmentations are presented in Table 1. The segmentations which were validated by the Scheffe criterion appear in bold letters.

| \\(K\\) | Segment Boundaries (Change Points) |
| --- | --- |
| 1 | **1902, 1988** |
| 2 | **1902, 1967, 1988** |
| 3 | **1902, 1949, 1967, 1988** |
| 4 | **1902, 1917, 1953, 1967, 1988** |
| 5 | **1902, 1921, 1936, 1949, 1967, 1988** |
| 6 | 1902, 1921, 1936, 1949, 1967, 1971, 1988 |

**Table 1**

Hence it can be seen that the optimal and statistically significant segmentation is that of order 5, i.e. the segments are [1903,1921], [1922,1936], [1937,1949], [1950,1967], [1968,1988]. That this is the globally optimal segmentation has been shown by Hubert in [16, 17] using his exact segmentation procedure. A plot of the time series, indicating the 5 segments and the respective means, appears in Figure 2.

**Figure 2 to appear here**

We have verified that the HMM algorithm finds the globally optimal segmentation for all values of \\(K\\) (as listed in Table 1). We performed this verification by use of the exact dynamic programming algorithm presented in the Appendix. The conclusion is that, in this experiment, the HMM segmentation algorithm finds the optimal segmentations considerably faster than the exact algorithm. Specifically, running the entire experiment (i.e. obtaining the HMM segmentations of _all_ orders) with a MATLAB implementation of the HMM segmentation algorithm took 1.1 sec on a Pentium III 1 GHz personal computer; we expect that a FORTRAN or C implementation would take about 10% to 20% of this time.
obtaining the HMM segmentations of _all_ orders) with a MATLAB implementation of the HMM segmentation algorithm took 1.1 sec on a Pentium III 1 GHz personal computer; we expect that a FORTRAN or C implementation would take about 10% to 20% of this time. ### Annual Mean Global Temperature In this experiment we use the time series of annual mean global temperature for the years 1700 - 1981. Only the temperatures for the period 1902 - 1981 come from actual measurements; the remaining temperatures were _reconstructed_ according to a procedure described in [26] and also at the Internet address [http://www.ngdc.noaa.gov/paleo/ei/ei_intro.html](http://www.ngdc.noaa.gov/paleo/ei/ei_intro.html). The length of the time series is 282. The goal is again to find the segmentation which is optimal with respect to total deviation from the segment-means, has the highest possible order and is statistically significant according to Scheffe's criterion. We run the segmentation algorithm for \\(K=2,3, ,6\\), using \\(p=0.9\\). Convergence takes place in 4 iterations or less. The optimal segmentations are presented in Table 2. The segmentations which were validated by Scheffe's criterion appear in bold letters. \\begin{tabular}{|l|l|l|l|l|l|l|l|} \\hline \\(K\\) & \\multicolumn{4}{|c|}{Segment Boundaries (Change Points)} \\\\ \\hline 1 & **1700** & **1981** & & & & & \\\\ \\hline 2 & **1700** & **1930** & **1981** & & & & \\\\ \\hline 3 & **1700** & **1812** & **1930** & **1981** & & & \\\\ \\hline 4 & **1700** & **1720** & **1812** & **1930** & **1981** & & \\\\ \\hline 5 & 1700 & 1720 & 1812 & 1926 & 1935 & 1981 & \\\\ \\hline 6 & 1700 & 1720 & 1812 & 1926 & 1934 & 1977 & 1981 \\\\ \\hline \\end{tabular} **Table 2**Hence it can be seen that the optimal and statistically significant segmentation is of order 4, i.e. the segments are [1700,1720], [1721,1812], [1813,1930], [1931,1981]. A plot of the time series, indicating the 4 segments and the respective means appears in Figure 3. **Figure 3 to appear here** The _total_ execution time for the experiment (i.e. to obtain optimal segmentations of all orders) is 2.97 sec. The segmentations of Table 2 are the globally optimal ones, as we have verified using the dynamic programming segmentation algorithm. ### Annual Mean Global Temperature with AR model In this experiment we again use the annual mean global temperature time series, but now we assume that it is generated by a _switching regression_ HMM. Specifically, we assume a model of the form \\[x_{t}=a_{0,k}+a_{1,k}x_{t-1}+a_{2,k}x_{t-2}+a_{3,k}x_{t-3}+\\epsilon_{t} \\tag{31}\\] where the parameters \\(a_{0,k}\\), \\(a_{1,k}\\), \\(a_{2,k}\\), \\(a_{3,k}\\) are specific to the \\(k\\)-th state of the underlying Markovian process. Given a particular segmentation, these parameters can be estimated by a least squares fitting algorithm. Hence the segmentation algorithm can be modified to obtain the optimal segmentation with respect to the model of (31). Once again we run the segmentation algorithm for \\(K=2,3, ,6\\), using \\(p=0.9\\). The optimal segmentations thus obtained are presented in Table 3. 
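To illustrate the least squares step, here is a minimal Python sketch of fitting the model (31) on a single segment. The function name and the 0-based indexing are our own conventions, and NumPy's `lstsq` stands in for whatever least squares routine is actually used:

```python
import numpy as np

def fit_segment_ar(x, s, t, order=3):
    """Least squares fit of eq. (31) on the segment x_s, ..., x_t (0-based).

    Returns [a0, a1, ..., a_order] for
    x_tau = a0 + a1*x_{tau-1} + ... + a_order*x_{tau-order} + e_tau.
    """
    rows, targets = [], []
    for tau in range(max(s, order), t + 1):   # need `order` lagged values
        rows.append([1.0] + [x[tau - j] for j in range(1, order + 1)])
        targets.append(x[tau])
    coeffs, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return coeffs

# Example: coefficients of an AR(3) fit on the first 100 points of a toy series.
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=200)) * 0.1
print(fit_segment_ar(x, s=0, t=99))
```

Given a candidate segmentation, this fit is applied to every segment; the residual sums of squares then play the role of the prediction error that the reestimation iteration minimizes.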
Once again we run the segmentation algorithm for \(K=2,3,\ldots,6\), using \(p=0.9\). The optimal segmentations thus obtained are presented in Table 3.

\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\(K\) & \multicolumn{7}{c|}{Segment Boundaries (Change Points)} \\
\hline
1 & **1700** & **1981** & & & & & \\
\hline
2 & **1700** & **1926** & **1981** & & & & \\
\hline
3 & **1700** & **1833** & **1926** & **1981** & & & \\
\hline
4 & **1700** & **1769** & **1833** & **1926** & **1981** & & \\
\hline
5 & 1700 & 1769 & 1833 & 1895 & 1926 & 1981 & \\
\hline
6 & 1700 & 1769 & 1825 & 1877 & 1904 & 1926 & 1981 \\
\hline
\end{tabular}

**Table 3**

In this case segment validation is not performed by the Scheffe criterion; instead we use a prediction error correlation criterion. This indicates that the maximum statistically significant number of segments is \(K=4\) and the segments are [1700,1769], [1770,1833], [1834,1926], [1927,1981]. A plot of the time series, indicating the 4 segments and the respective autoregressions, appears in Figure 4.

**Figure 4 to appear here**

Recall that the segments obtained by means-based segmentation are [1700,1720], [1721,1812], [1813,1930], [1931,1981]. This is in reasonable agreement with the AR-based segmentation, except for the discrepancy between the change points 1720 and 1769. From a numerical point of view, there is no a priori reason to expect that the AR-based segmentation and the means-based segmentation should give the same results. The fact that the two segmentations are in reasonable agreement supports the hypothesis that actual climate changes have occurred approximately at the transition times indicated by both segmentation methods. Finally, let us note that the _total_ execution time for the experiment (i.e. to obtain optimal segmentations of every order) is 3.07 sec and that the segmentations of Table 3 are the globally optimal ones, as we have verified using the dynamic programming segmentation algorithm.

### Artificial Time Series

The goal of the final experiment is to investigate the scaling properties of the algorithm, specifically the scaling of execution time with respect to the time series length \(T\) and the scaling of accuracy with respect to the noise in the observations. To obtain better control over these factors, artificial time series are used, which have been generated by the following mechanism. Every time series is generated by a 5-th order HMM, run from state no. 1 until state no. 5. Hence every time series involves 5 state transitions and, for the purposes of this experiment, this is assumed to be known a priori. On the other hand, the length of the time series is variable. With a slight change of notation, in this section \(T\) will denote the _expected_ length of the time series, which can be controlled by the choice of the probability \(p\). The values of \(p\) were chosen to generate time series of average lengths 200, 250, 500, 750, 1000, 1250 and 1500. The observations are generated by a normal distribution with mean \(\mu_{k}\) (\(k=1,2,\ldots,5\)) and standard deviation \(\sigma\). In all experiments the values \(\mu_{1}=\mu_{3}=\mu_{5}=1\), \(\mu_{2}=\mu_{4}=-1\) were used. Several values of \(\sigma\) were used, namely \(\sigma=0.00\), 0.10, 0.20, 0.30, 0.50, 0.75, 1.00, 1.25, 1.50, 1.75, 2.00. For each combination of \(T\) and \(\sigma\), 20 time series were generated and the HMM segmentation algorithm was run on each one. For each run two quantities were computed: \(c\), the accuracy of segmentation, and \(T_{e}\), the execution time.
Segmentation accuracy is computed by the formula
\[c=\frac{\sum_{t=1}^{T}\mathbf{1}(z_{t}=\widehat{z}_{t})}{T}\]
where the indicator function \(\mathbf{1}(z_{t}=\widehat{z}_{t})\) is equal to 1 when \(z_{t}=\widehat{z}_{t}\) and equal to 0 otherwise. From these data two tables are compiled. Table 4 lists \(T_{e}\) (in seconds) as a function of \(T\) (i.e. \(T_{e}\) is averaged over all time series of the same \(T\)). Table 5 lists the average segmentation accuracy \(c\) as a function of \(T\) and \(\sigma\) (i.e. \(c\) is averaged over the 20 time series with the same \(T\) and \(\sigma\), for \(T\) = 200, 250, 500, 750, 1000, 1250, 1500). As expected, segmentation accuracy is generally a decreasing function of \(\sigma\).

**Tables 4 and 5 to appear here**

## 5 Conclusion

In this paper we have used hidden Markov models to represent hydrological and environmental time series with multiple change points. Inspired by Hubert's pioneering work and by methods of speech recognition, we have presented a fast iterative segmentation algorithm which belongs to the EM family. The quality of a particular segmentation is evaluated by the deviation from the segment means, but extensions involving autoregressive HMM's, trend-generating HMM's etc. can also be used. Because execution time is \(\mathrm{O}(T\cdot K^{2})\), our algorithm can be used to explore various possible segmentations in an interactive manner. We have presented a convergence analysis which shows that under appropriate conditions every iteration of our algorithm increases the likelihood of the resulting segmentation. Furthermore, numerical experiments (involving river flow and global temperature time series) indicate that the algorithm can be expected to converge to the _globally_ optimal segmentation.

## Appendix A: A Dynamic Programming Segmentation Algorithm

In this appendix we present an alternative time series segmentation algorithm which, unlike the HMM algorithm, is _guaranteed_ to produce the _globally optimal_ segmentation of a time series. This superior performance, however, is obtained at the price of a longer execution time. Still, the algorithm is computationally viable for time series of several hundred terms. We describe the algorithm briefly here; a more detailed report appears in [20].

### A General Segmentation Cost

A _generalization_ of the time series segmentation problem discussed in the previous sections is the following. Given a time series \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{T})\) and a fixed \(K\), find a sequence of times \(\mathbf{t}=(t_{0},t_{1},\ldots,t_{K})\) which satisfies \(0=t_{0}<t_{1}<\cdots<t_{K-1}<t_{K}=T\) and minimizes
\[J_{K}(\mathbf{t})=\sum_{k=1}^{K}f_{k}(t_{k-1},t_{k};\mathbf{x}). \tag{32}\]
\(J_{K}(\mathbf{t})\) consists of a sum of terms \(f_{k}(t_{k-1},t_{k};\mathbf{x})\). For example, Hubert's cost function can be obtained by setting
\[f_{k}(s,t;\mathbf{x})=\sum_{\tau=s+1}^{t}\left(x_{\tau}-\frac{\sum_{\tau^{\prime}=s+1}^{t}x_{\tau^{\prime}}}{t-s}\right)^{2}. \tag{33}\]
Hence Hubert's segmentation cost (3) is a special case of (32).
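The following Python sketch (our own naming and 0-based indexing) evaluates the generic cost (32) with the deviation-from-means term (33), i.e. Hubert's cost:

```python
import numpy as np

def hubert_term(x, s, t):
    """f(s, t; x) of eq. (33): squared deviation of x_{s+1}, ..., x_t
    from the segment mean (0-based slice x[s:t])."""
    seg = np.asarray(x[s:t], dtype=float)
    return float(np.sum((seg - seg.mean()) ** 2))

def segmentation_cost(x, t_bounds):
    """J_K(t) of eq. (32) for boundaries 0 = t_0 < t_1 < ... < t_K = T."""
    return sum(hubert_term(x, s, t) for s, t in zip(t_bounds[:-1], t_bounds[1:]))

# Example: a step signal is split exactly at its change point for zero cost.
x = np.concatenate([np.ones(50), -np.ones(50)])
print(segmentation_cost(x, [0, 50, 100]))   # 0.0 (correct change point)
print(segmentation_cost(x, [0, 30, 100]))   # > 0  (misplaced change point)
```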
Similarly, consider _autoregressive_ models of the form
\[x_{t}=u_{t}A_{k}+\epsilon_{t}, \tag{34}\]
where \(t=t_{k-1}+1,t_{k-1}+2,\ldots,t_{k}\), \(u_{t}=[1,x_{t-1},x_{t-2},\ldots,x_{t-l}]\) and \(A_{k}=[a_{k,0},a_{k,1},\ldots,a_{k,l}]^{\prime}\) (the \({}^{\prime}\) denotes the transpose of a matrix). Then we can set
\[f_{k}(s,t;\mathbf{x})=\sum_{\tau=s+1}^{t}\left(x_{\tau}-u_{\tau}A_{k}\right)^{2} \tag{35}\]
and the segmentation cost becomes
\[J_{K}(\mathbf{t})=\sum_{t=1}^{T}\epsilon_{t}^{2}=\sum_{k=1}^{K}\sum_{t=t_{k-1}+1}^{t_{k}}\left(x_{t}-u_{t}A_{k}\right)^{2}. \tag{36}\]
The \(a_{k,0},a_{k,1},\ldots,a_{k,l}\) (elements of \(A_{k}\)) are unknown, but can be determined by least squares fitting on \(x_{t_{k-1}+1},x_{t_{k-1}+2},\ldots,x_{t_{k}}\). A similar formulation can be used for regressive models of the form \(x_{t}=u_{t}A_{k}+\epsilon_{t}\) where \(A_{k}=[a_{k,0},a_{k,1},\ldots,a_{k,l}]^{\prime}\) and \(u_{t}=[1,(t-t_{k-1}),(t-t_{k-1})^{2},\ldots,(t-t_{k-1})^{l}]\). Hence we see that (32) is sufficiently general to subsume many cost functions of practical interest.

### Dynamic Programming Segmentation Algorithm

The following dynamic programming algorithm can be used to minimize (32); it has been presented in [1] and applies to very general versions of the time series segmentation problem.

**Dynamic Programming Segmentation Algorithm**

**Input:** The time series \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{T})\); a termination number \(K\).

**Initialization**

For \(t=1,2,\ldots,T\)
  For \(s=1,2,\ldots,t\)
    \(d_{s,t}=f_{k}(s-1,t;\mathbf{x})\)
  End
  \(c_{t,0}=d_{1,t}\)
End

**Minimization**

For \(k=1,2,\ldots,K\)
  For \(t=k,k+1,\ldots,T\)
    For \(s=0,1,\ldots,t-1\)
      \(e_{s}=c_{s,k-1}+d_{s+1,t}\)
    End
    \(c_{t,k}=\min_{0\leq s\leq t-1}\left(e_{s}\right)\)
    \(z_{t,k}=\arg\min_{0\leq s\leq t-1}\left(e_{s}\right)\)
  End
End

**Backtracking**

For \(k=1,2,\ldots,K\)
  \(\widehat{t}_{k,k}=T\)
  For \(n=k-1,k-2,\ldots,1\)
    \(\widehat{t}_{n,k}=z_{\widehat{t}_{n+1,k},n}\)
  End
  \(\widehat{t}_{0,k}=0\)
End

On termination, the dynamic programming segmentation algorithm has computed
\[c_{T,k}=\min_{\mathbf{t}=(t_{0},t_{1},\ldots,t_{k})}J_{k}(\mathbf{t}) \tag{37}\]
for \(k=1,2,\ldots,K\); in other words, it has recursively solved a _sequence_ of minimization problems. For \(k=1,2,\ldots,K\), the optimal segmentation \(\widehat{\mathbf{t}}_{k}=(\widehat{t}_{0,k},\widehat{t}_{1,k},\ldots,\widehat{t}_{k,k})\) has been obtained by backtracking. The recursive minimization is performed in the second part of the algorithm; it is seen that its computation time is \(\text{O}(K\cdot T^{2})\). This is not as good as the \(\text{O}(K^{2}\cdot T)\) obtained by the HMM algorithm (note that usually \(K\) is significantly less than \(T\)), but it is still computationally viable for \(T\) in the order of a few hundreds. The backtracking part of the algorithm has execution time \(\text{O}(K^{2})\). However, in many cases the computationally most expensive part of the algorithm is the initialization phase, i.e. the computation of \(d_{s,t}\). This involves \(\text{O}(T^{2})\) evaluations of \(d_{s,t}=f_{k}(s-1,t;\mathbf{x})\) and can increase the computation cost by one or more orders of magnitude.
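Before turning to fast initializations, here is a direct Python transcription of the algorithm, as a sketch under our own naming and 0-based indexing. It reuses `hubert_term` from the earlier sketch, and it fills the \(d_{s,t}\) table naively, which is exactly the expensive initialization step discussed above:

```python
import numpy as np

def dp_segmentation(x, K, seg_cost):
    """Dynamic programming minimization of eq. (32).

    seg_cost(s, t) is the cost of one segment covering x[s:t] (0-based),
    e.g. `hubert_term`.  Returns {k: (boundaries, cost)} for k = 1, ..., K.
    """
    T = len(x)
    d = [[seg_cost(s, t) if s < t else 0.0 for t in range(T + 1)]
         for s in range(T + 1)]                 # naive O(T^3) initialization
    INF = float("inf")
    c = np.full((T + 1, K + 1), INF)            # c[t, k]: best k-segment cost
    z = np.zeros((T + 1, K + 1), dtype=int)     # argmin table for backtracking
    c[0, 0] = 0.0
    for k in range(1, K + 1):
        for t in range(k, T + 1):
            costs = [c[s, k - 1] + d[s][t] for s in range(k - 1, t)]
            best = int(np.argmin(costs))
            c[t, k] = costs[best]
            z[t, k] = best + (k - 1)
    results = {}
    for k in range(1, K + 1):                   # backtracking
        bounds = [T]
        for n in range(k, 0, -1):
            bounds.append(int(z[bounds[-1], n]))
        results[k] = (bounds[::-1], float(c[T, k]))
    return results

# Example: the step signal from the previous sketch is segmented at t = 50.
x = np.concatenate([np.ones(50), -np.ones(50)])
bounds, cost = dp_segmentation(x, 2, lambda s, t: hubert_term(x, s, t))[2]
print(bounds, cost)   # [0, 50, 100], cost 0.0
```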
For example, if we apply the algorithm to detect changes in the mean, then
\[d_{s,t}=f_{k}(s-1,t;\mathbf{x})=\sum_{\tau=s}^{t}\left(x_{\tau}-\frac{\sum_{\tau^{\prime}=s}^{t}x_{\tau^{\prime}}}{t-s+1}\right)^{2} \tag{38}\]
which involves \(t-s+1\) additions; if (38) is used directly in the initialization phase, then this phase requires \(\text{O}(T^{3})\) computations, and this severely limits computational viability to relatively short time series. Hence, to enhance the computational viability of the dynamic programming segmentation algorithm, it is necessary to find efficient ways to perform the initialization phase. In the next two sections we deal with this question for two specific forms of \(f_{k}(s,t;\mathbf{x})\): the first form pertains to the computation of means and the second to the computation of regressions and autoregressions.

### Fast Computation of Means

The computation of means can be performed recursively, as will now be shown. For \(t=1,2,\ldots,T\), \(s=1,2,\ldots,t-1\), we must compute
\[M_{s,t}=\sum_{\tau=s}^{t}x_{\tau},\qquad d_{s,t}=f_{k}(s-1,t;\mathbf{x})=\sum_{\tau=s}^{t}\left(x_{\tau}-\frac{M_{s,t}}{t-s+1}\right)^{2}. \tag{39}\]
For \(t=1,2,\ldots,T\), \(s=1,2,\ldots,t\), define the following additional quantities:
\[p_{s,t}=\frac{M_{s,t}}{t-s+1},\qquad q_{s,t}=p_{s+1,t}-p_{s,t}. \tag{40}\]
Then we have
\[d_{s,t}=\sum_{\tau=s}^{t}(x_{\tau}-p_{s,t})^{2}=(x_{s}-p_{s,t})^{2}+\sum_{\tau=s+1}^{t}(x_{\tau}-p_{s,t})^{2} \tag{41}\]
and
\[\begin{aligned}\sum_{\tau=s+1}^{t}(x_{\tau}-p_{s,t})^{2}&=\sum_{\tau=s+1}^{t}\left((x_{\tau}-p_{s+1,t})+(p_{s+1,t}-p_{s,t})\right)^{2}\\ &=\sum_{\tau=s+1}^{t}(x_{\tau}-p_{s+1,t})^{2}+\sum_{\tau=s+1}^{t}(p_{s+1,t}-p_{s,t})^{2}+2\sum_{\tau=s+1}^{t}(x_{\tau}-p_{s+1,t})(p_{s+1,t}-p_{s,t})\\ &=d_{s+1,t}+(t-s)\,q_{s,t}^{2}+2\,(p_{s+1,t}-p_{s,t})\left(\sum_{\tau=s+1}^{t}x_{\tau}-(t-s)\,p_{s+1,t}\right).\end{aligned}\]
Since \(\sum_{\tau=s+1}^{t}x_{\tau}=(t-s)\,p_{s+1,t}\), the last term vanishes and
\[\sum_{\tau=s+1}^{t}(x_{\tau}-p_{s,t})^{2}=d_{s+1,t}+(t-s)\,q_{s,t}^{2}. \tag{42}\]
From (41) and (42) it follows that (for \(t=1,2,\ldots,T\), \(s=1,2,\ldots,t-1\))
\[d_{s,t}=d_{s+1,t}+(t-s)\,q_{s,t}^{2}+(x_{s}-p_{s,t})^{2}. \tag{43}\]
The above computations can be implemented in time \(\text{O}(T^{2})\) by the following algorithm.

**Recursive Computation of \(d_{s,t}\)**

For \(t=1,2,\ldots,T\)
  \(M_{t,t}=x_{t}\)
  \(p_{t,t}=M_{t,t}\)
  For \(s=t-1,t-2,\ldots,1\)
    \(M_{s,t}=x_{s}+M_{s+1,t}\)
    \(p_{s,t}=\frac{M_{s,t}}{t-s+1}\)
  End
End
For \(t=1,2,\ldots,T\)
  For \(s=1,2,\ldots,t-1\)
    \(q_{s,t}=p_{s+1,t}-p_{s,t}\)
  End
End
For \(t=1,2,\ldots,T\)
  \(d_{t,t}=0\)
  For \(s=t-1,t-2,\ldots,1\)
    \(d_{s,t}=d_{s+1,t}+(t-s)\cdot(q_{s,t})^{2}+(x_{s}-p_{s,t})^{2}\)
  End
End

Hence, if the above code replaces the initialization phase of the dynamic programming algorithm in Section A.2, we obtain an \(\mathrm{O}(K\cdot T^{2})\) implementation of the entire algorithm. In other words, we obtain an algorithm which, given a time series of length \(T\), computes the global minimum of Hubert's segmentation cost (for all segmentations of orders \(k=1,2,\ldots,K\)) in time \(\mathrm{O}(K\cdot T^{2})\).
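In Python the recursion (43) can be written as follows; this is our own sketch, with arrays indexed 1-based to mirror the pseudocode, and the result is checked against the direct formula:

```python
import numpy as np

def all_mean_costs(x):
    """O(T^2) table of d_{s,t} via eq. (43); x holds the series x_1, ..., x_T."""
    T = len(x)
    xv = np.concatenate([[0.0], np.asarray(x, dtype=float)])   # 1-based access
    M = np.zeros((T + 2, T + 1))       # partial sums M_{s,t}
    p = np.zeros((T + 2, T + 1))       # segment means p_{s,t}
    d = np.zeros((T + 2, T + 1))       # segment costs d_{s,t}
    for t in range(1, T + 1):
        M[t, t] = p[t, t] = xv[t]
        for s in range(t - 1, 0, -1):
            M[s, t] = xv[s] + M[s + 1, t]
            p[s, t] = M[s, t] / (t - s + 1)
            q = p[s + 1, t] - p[s, t]
            d[s, t] = d[s + 1, t] + (t - s) * q * q + (xv[s] - p[s, t]) ** 2
    return d

# Sanity check against the direct O(T) evaluation of one entry.
rng = np.random.default_rng(1)
x = rng.normal(size=30)
d = all_mean_costs(x)
direct = np.sum((x[4:12] - x[4:12].mean()) ** 2)   # d_{5,12} in 1-based notation
assert abs(d[5, 12] - direct) < 1e-9
```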
### Fast Computation of Regression Coefficients

Consider now autoregressive models described by (34). As already mentioned, in this case we have
\[f_{k}(t_{k-1},t_{k};\mathbf{x})=\sum_{t=t_{k-1}+1}^{t_{k}}\left(x_{t}-u_{t}A_{k}\right)^{2}. \tag{44}\]
Hence \(d_{s,t}=f_{k}(s-1,t;\mathbf{x})\) is given by
\[d_{s,t}=\sum_{\tau=s}^{t}\left(x_{\tau}-u_{\tau}A(s,t)\right)^{2}, \tag{45}\]
where \(u_{t}=[1,x_{t-1},x_{t-2},\ldots,x_{t-l}]\) and \(A(s,t)\) is obtained by solving the least squares equation
\[A(s,t)=\left(U(s,t)^{\prime}\cdot U(s,t)\right)^{-1}\cdot U(s,t)^{\prime}\cdot X(s,t) \tag{46}\]
with
\[X(s,t)=\left[\begin{array}{c}x_{s}\\ x_{s+1}\\ \vdots\\ x_{t}\end{array}\right]\qquad\text{and}\qquad U(s,t)=\left[\begin{array}{c}u_{s}\\ u_{s+1}\\ \vdots\\ u_{t}\end{array}\right]. \tag{47}\]
Note that to solve (46) the matrix multiplications \(U(s,t)^{\prime}\cdot U(s,t)\) and \(U(s,t)^{\prime}\cdot X(s,t)\) must be performed. For \(t=1,2,\ldots,T\), \(s=1,2,\ldots,t\), these multiplications require \(\mathrm{O}(T^{5})\) time. However, the solution of (46) can be approximated by a fast recursive algorithm reported in [12]. Choose some small number \(\delta\) and set
\[P_{0}=\frac{1}{\delta}\cdot I \tag{48}\]
(where \(I\) is the \((l+1)\times(l+1)\) unit matrix). Then, consider the following recursion for \(s=1,2,\ldots,T\) and \(t=s+1,\ldots,T\):
\[u_{t}=[1,x_{t-1},x_{t-2},\ldots,x_{t-l}], \tag{49}\]
\[n=t-s, \tag{50}\]
\[P_{n}=P_{n-1}-P_{n-1}\cdot u_{t}^{\prime}\cdot u_{t}\cdot P_{n-1}\cdot\frac{1}{1+u_{t}\cdot P_{n-1}\cdot u_{t}^{\prime}}, \tag{51}\]
\[\widehat{A}(s,t)=\widehat{A}(s,t-1)+P_{n}\cdot u_{t}^{\prime}\cdot\left(x_{t}-u_{t}\cdot\widehat{A}(s,t-1)\right). \tag{52}\]
Using the arguments of [12], for a fixed \(s\) and increasing \(t\), it can be shown that \(\widehat{A}(s,t)\) converges _very quickly_ to \(A(s,t)\), the true solution of (46). Furthermore, the computations of (49)-(52) can be implemented in time \(\mathrm{O}(T^{2})\). Hence, for the case of autoregressive models, the \(d_{s,t}\) computation can be programmed as follows.

**Recursive Computation of \(d_{s,t}\) (autoregressive case)**

For \(s=1,2,\ldots,T\)
  \(P_{0}=\frac{1}{\delta}\cdot I\)
  Initialize \(\widehat{A}(s,s)\) randomly
  \(d_{s,s}=0\)
  For \(t=s+1,s+2,\ldots,T\)
    \(u_{t}=[1,x_{t-1},x_{t-2},\ldots,x_{t-l}]\)
    \(n=t-s\)
    \(P_{n}=P_{n-1}-P_{n-1}\cdot u_{t}^{\prime}\cdot u_{t}\cdot P_{n-1}\cdot\frac{1}{1+u_{t}\cdot P_{n-1}\cdot u_{t}^{\prime}}\)
    \(\widehat{A}(s,t)=\widehat{A}(s,t-1)+P_{n}\cdot u_{t}^{\prime}\cdot\left(x_{t}-u_{t}\cdot\widehat{A}(s,t-1)\right)\)
    \(d_{s,t}=d_{s,t-1}+\left(x_{t}-u_{t}\cdot\widehat{A}(s,t)\right)^{2}\)
  End
End

Hence, if the above code replaces the initialization phase of the dynamic programming segmentation algorithm in Section A.2, we have an \(\mathrm{O}(K\cdot T^{2})\) implementation of the entire algorithm for autoregressive models. A similar modification is possible for regressive models of the form (34).
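A Python sketch of this recursion is given below. Naming and 0-based indexing are our own, and \(\widehat{A}(s,s)\) is initialized to zero rather than randomly, which is an arbitrary choice:

```python
import numpy as np

def rls_segment_costs(x, order=3, delta=1e-6):
    """Approximate d_{s,t} for autoregressive segments via eqs. (49)-(52).

    Returns a dict mapping (s, t) to the accumulated squared residuals,
    built in O(T^2) total time instead of solving (46) for every pair.
    """
    x = np.asarray(x, dtype=float)
    T = len(x)
    m = order + 1
    d = {}
    for s in range(order, T):              # need `order` lagged values at t = s+1
        P = np.eye(m) / delta              # P_0 = (1/delta) * I, eq. (48)
        A = np.zeros(m)                    # initial guess for A_hat(s, s)
        d[(s, s)] = 0.0
        for t in range(s + 1, T):
            u = np.concatenate([[1.0], x[t - order:t][::-1]])   # eq. (49)
            Pu = P @ u
            P = P - np.outer(Pu, Pu) / (1.0 + u @ Pu)           # eq. (51)
            A = A + P @ u * (x[t] - u @ A)                      # eq. (52)
            d[(s, t)] = d[(s, t - 1)] + (x[t] - u @ A) ** 2
    return d

# Example: the accumulated residual cost of one segment of a toy series.
rng = np.random.default_rng(2)
x = np.cumsum(rng.normal(size=100)) * 0.1
d = rls_segment_costs(x)
print(d[(3, 50)])
```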
\"A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains\". _Ann. of Math. Stat._, vol.41, pp.164-171, 1970. * [5] D. Bertsekas. _Dynamic Programming: Deterministic and Stochastic Models_. Prentice Hall, Englewood Cliffs, New Jersey, 1987. * [6] T.A. Buishand. \"Some methods for testing the homogeneity of rainfall records\". _J. Hydrol._, vol.58, pp.11-27, 1982. * [7] T.A. Buishand. \"Tests for detecting a shift in the mean of hydrological time series\". _J. Hydrol._, vol.75, pp.51-69, 1984. * [8] G. W. Cobb. \"The problem of the Nile: Conditional solution to a changepoint problem\". _Biometrika_, vol. 65, pp. 243- 252, 1978. * [9] A. P. Dempster, N. M. Laird and D. B. Rubin. \"Maximum likelihood from incomplete data via the EM algorithm\". _J. Roy. Statist. Soc. B_, vol.39, pp.1-38, 1977. * [10] R.J. Elliot, L. Aggoun and J.B. Moore. _Hidden Markov Models_. Springer, New York, 1995. * [11] G. Forney. \"The Viterbi algorithm\". _Proceedings of the IEEE_, vol. 61, pp.268-278, 1973. * [12] D. Graupe. _Identification of systems_. Van Nostrand, Reinhold, New York, 1972. * [13] J.D. Hamilton. \"Analysis of time series subject to changes in regime.\" _J. of Econometrics_, vol.45, pp.39-70, 1990. * [14] A.I. Hipel and K.W. McLeod. _Time Series Modelling of Water Resources and Environmental Systems_. Elsevier, 1994. * Observed changes since 1940\". _Phys. Chem. Earth (B)_, vol.24, pp.91-96, 1999. * [16] P. Hubert. \"Change points in meteorological analysis\". In _Applications of Time Series Analysis in Astronomy and Meteorology_, T.Subba Rao, M.B. Priestley and O. Lessi (eds.). Chapman and Hall, London, 1997. * [17] P. Hubert. \"The segmentation procedure as a tool for discrete modeling of hydrometeorogical regimes\". _Stoch. Env. Res. and Risk Ass._, vol. 14, pp.297-304, 2000. * [18] B.H. Juang. \"Maximum likelihood estimation for mixture multivariate stochastic observations of Markov chains\". _ATT Tech. J._, vol.64, pp.1235-1249, 1985. * [19] B.H. Juang and L.R. Rabiner. \"Mixture autoregressive hidden Markov models for speech signals\". _IEEE Trans. on Acoustics, Speech and Signal Processing_, vol. 33, pp.1404-1412, 1985. * [20] Ath. Kehagias, A. Nicolaou, V. Petridis and P. Fragou. \"Some dynamic programming algorithms for time series segmentation\". _In preparation_. * [21] G. Kiely, J.D. Albertson and M.B. Parlange. \"Recent trends in diurnal variation of precipitation at Valentia on the west coast of Ireland\". _J. Hydrol._, vol.207, pp.270-279, 1998. * [22] A. Krogh et al. \"Hidden Markov models in computational biology: applications to protein modeling\". _J. Mol. Biol._, vol.235, pp.150-1531, 1994. * [23] H.M. Krolzig. _Markov Switching Vector Autoregressions_. Springer, 1997. * [24] S.E. Levinson, L. R. Rabiner and M.M. Sondhi. \"An introduction to the application of the theory of probabilistic functions of a Markov chain\", _The Bell Sys. Tech. J._, vol. 62, pp.1035-1074, 1983. * [25] Z.Q. Lu and L.M. Berliner. \"Markov switching time series models with application to a daily runoff series\". _Water Resour. Res._, vol. 35, pp.523-534, 1999. * [26] M.E. Mann, R.S. Bradley and M.K. Hughes. \"Northern hemisphere temperatures during the past millennium: inferences, uncertainties, and limitations\". _Geophys. Res. Lett._, vol.26, pp.759-762, 1999. * [27] L. Perreault, M. Hache, M. Slivitzky and B. Bobee. \"Detection of changes in precipitation and runoff over eastern Canada and U.S. using a Bayesian approach\". _Stoch. Env. Res. 
* [28] L. Perreault, E. Parent, J. Bernier, B. Bobee and M. Slivitzky. "Retrospective multivariate Bayesian change-point analysis: a simultaneous single change in the mean of several hydrological sequences". _Stoch. Env. Res. and Risk Ass._, vol. 14, pp. 243-261, 2000.
* [29] L. Perreault, J. Bernier, B. Bobee and E. Parent. "Bayesian change-point analysis in hydrometeorological time series. Part 1. The normal model revisited". _J. Hydrol._, vol. 235, pp. 221-241, 2000.
* [30] L. Perreault, J. Bernier, B. Bobee and E. Parent. "Bayesian change-point analysis in hydrometeorological time series. Part 2. Comparison of change-point models and forecasting". _J. Hydrol._, vol. 235, pp. 242-263, 2000.
* [31] L.R. Rabiner. "A tutorial on hidden Markov models and selected applications in speech recognition". _Proc. IEEE_, vol. 77, pp. 257-286, 1989.
* [32] A. Ramachandra Rao and W. Tirttojandro. "Investigation of changes in characteristics of hydrological time series by Bayesian methods". _Stoch. Hydrol. and Hydraulics_, vol. 10, pp. 295-317, 1996.
* [33] H. Scheffe. _The Analysis of Variance_. Wiley, New York, 1959.
* [34] E. Servat et al. "Climatic variability in humid Africa along the Gulf of Guinea. Part I: detailed analysis of the phenomenon in Cote d'Ivoire". _J. Hydrol._, vol. 191, pp. 1-15, 1997.

Figure 1: A diagrammatic representation of a hidden Markov model.

Figure 2: Plot of the Senegal river annual discharge and the segment means. This figure corresponds to the optimal 5-th order segmentation.

Figure 3: Plot of the annual mean global temperature and the segment means. This figure corresponds to the optimal fourth order segmentation.

Figure 4: Plot of the annual mean global temperature and the AR estimate. This figure corresponds to the optimal fourth order segmentation.
Motivated by Hubert's segmentation procedure [16, 17], we discuss the application of hidden Markov models (HMM) to the segmentation of hydrological and environmental time series. We use an HMM algorithm which segments time series of several hundred terms in a few seconds and is computationally feasible for even longer time series. The segmentation algorithm computes the maximum likelihood segmentation by use of an expectation-maximization (EM) iteration. We rigorously prove algorithm convergence and use numerical experiments, involving temperature and river discharge time series, to show that the algorithm usually converges to the globally optimal segmentation. The relation of the proposed algorithm to Hubert's segmentation procedure is also discussed.
# Statistical stability in time reversal

George Papanicolaou, Department of Mathematics, Stanford University, Stanford CA, 94305. Leonid Ryzhik, Department of Mathematics, University of Chicago, Chicago IL, 60637. Knut Solna, Department of Mathematics, University of California, Irvine CA, 92697.

## 1 Introduction

In time reversal experiments a signal emitted by a localized source is recorded by an array and then re-emitted into the medium time-reversed, that is, the tail of the recorded signal is sent back first. In the absence of absorption the re-emitted signal propagates back toward the source and focuses approximately on it. This phenomenon has numerous applications in medicine, underwater acoustics and elsewhere and has been extensively studied in the literature, both from the experimental and theoretical points of view [12, 13, 14, 15, 16, 20, 24, 25, 31]. Recently time reversal has also been the subject of active mathematical research in the context of wave propagation and imaging in random media [2, 3, 4, 7, 8, 9, 32]. A schematic description of a time reversal experiment is presented in Figure 1.

Figure 1: A pulse propagates toward a time reversal array of size \(a\). The propagation distance \(L\) is large compared to \(a\). The ambient medium has a randomly varying index of refraction with a typical correlation length that is small compared to \(a\). The signal is time reversed at the array and sent back into the medium. The back propagated signal refocuses with spot size \(\lambda L/a_{e}\), where \(a_{e}\) is the effective aperture of the array (Section 3.3).

For a point source in a homogeneous medium, the size of the refocused spot is approximately \(\lambda L/a\), where \(\lambda\) is the central wavelength of the emitted signal, \(L\) is the distance between the source and the transducer array and \(a\) is the size of the array. We assume here that the array is operating in the remote-sensing regime \(a\ll L\). Multiple scattering in a randomly inhomogeneous medium creates **multipathing**, which means that the transducer array can capture waves that were initially moving away from it but get scattered onto it by the inhomogeneities. As a result, the array captures a wider aperture of rays emanating from the original source and appears to be larger than its physical size. Therefore, somewhat contrary to intuition, the inhomogeneities of the medium do not destroy the refocusing but enhance its resolution. The refocused spot is now \(\lambda L/a_{e}\), where \(a_{e}>a\) is the **effective** size of the array in the randomly scattering medium, and depends on \(L\). The enhancement of refocusing resolution by multipathing is called **super-resolution** [7]. The time reversed pulse is also **self averaging** and refocusing near the source is therefore **statistically stable**, which means that it does not depend on the particular realization of the random medium. There is some loss of energy in the refocused signal because of scattering away from the array, but this can be overcome by amplification, up to a point. The purpose of this paper is to explore in detail the mathematical basis of pulse stabilization, beyond what was done in [7]. In particular, we want to determine in what regime of parameters statistical stability is observed in time reversal.
We show here that for high frequency waves in a remote sensing regime, spatially localized sources lead to statistically stable super-resolution in time reversal, even for narrow-band signals. We also show that when the source is spatially distributed, only for broad-band signals do we have statistical stability in time reversal. The regime where our analysis holds is a high frequency one, more appropriate to optical or infrared time reversal than to ultrasound, sonar or microwave radar. In this regime we can make precise what spatially localized or distributed means (see Section 3.1). The numerical simulations in [7] and [8], which are set in ultrasound or underwater sound regime, indicate that time reversal is not statistically stable for narrow-band signals even for localized sources. Only for broad-band signals is time reversal statistically stable in the regime of ultrasound experiments or sonar. If the aperture of the transducer array is small \\(a/L\\ll 1\\), the Fresnel number \\(L/(ka^{2})\\) is of order one, and the random inhomogeneities are weak, which is often the case, we may analyze wave propagation in the paraxial or parabolic approximation [29]. The wave field is then given approximately by \\[u(t,{\\bf x},z)=\\frac{1}{2\\pi}\\int e^{i\\omega(z/c_{0}-t)}\\psi(z,{\\bf x};\\omega/c _{0})d\\omega \\tag{1}\\] where the complex amplitude \\(\\psi\\) satisfies the parabolic or Schrodinger equation \\[2ik\\psi_{z}+\\Delta_{\\bf x}\\psi+k^{2}(n^{2}-1)\\psi=0. \\tag{2}\\] Here \\({\\bf x}=(x,y)\\) are the coordinates transverse to the direction of propagation \\(z\\), the wave number \\(k=\\omega/c_{0}\\) and \\(n({\\bf x},z)=c_{0}/c({\\bf x},z)\\) is the random index of refraction relative to a reference speed \\(c_{0}\\). The fluctuations of the refraction index \\[\\sigma\\mu(\\frac{{\\bf x}}{l},\\frac{z}{l})=n^{2}({\\bf x},z)-1 \\tag{3}\\] are assumed to be a stationary random field with mean zero, variance \\(\\sigma^{2}\\), correlation length \\(l\\) and normalized covariance with dimensionless arguments \\[R({\\bf x},z)=E\\{\\mu({\\bf x}+{\\bf x}^{\\prime},z+z^{\\prime})\\mu({\\bf x}^{\\prime },z^{\\prime})\\}. \\tag{4}\\]A convenient tool for the analysis of wave propagation in a random medium is the Wigner distribution [19, 28] defined by \\[W(z,{\\bf x},{\\bf p})=\\frac{1}{(2\\pi)^{d}}\\int_{\\mathbb{R}^{d}}e^{i{\\bf p}\\cdot{ \\bf y}}\\psi({\\bf x}-\\frac{{\\bf y}}{2},z)\\overline{\\psi({\\bf x}+\\frac{{\\bf y}}{2 },z)}d{\\bf y} \\tag{5}\\] where \\(d=1\\) or \\(2\\) is the transverse dimension and the bar denotes complex conjugate. The Wigner distribution may be interpreted as phase space wave energy and is particularly well suited for high frequency asymptotics and random media [28]. The quantity of principal interest in time reversal, the time-reversed and back-propagated wave field, can be also expressed in terms of the Wigner distribution (see Section 3.1). The self-averaging properties of the back-propagated field are related to the self-averaging properties of functionals of the Wigner distribution in the form of integrals of \\(W\\) over the wave numbers \\({\\bf p}\\). In the next Section we introduce a precise scaling that corresponds to (a) high frequency, (b) long propagation distance, (c) narrow beam propagation, and (d) weak random fluctuations. 
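Before turning to that scaling, a small self-contained numerical sketch may help fix ideas about the Wigner distribution. The discretization below (grid, momentum normalization, zero padding at the boundary) is our own illustrative choice and is not part of the analysis:

```python
import numpy as np

def wigner(psi, dx):
    """Discrete Wigner transform of a 1-D complex field (cf. the definition above).

    For each grid point x_n the correlation g_j = psi[n+j] * conj(psi[n-j])
    is Fourier transformed over the signed half-offset j; the momentum grid
    is p_k = pi * fftfreq(N)[k] / dx.  Boundary terms are zero padded.
    """
    N = len(psi)
    W = np.zeros((N, N))
    for n in range(N):
        g = np.zeros(N, dtype=complex)
        for j in range(N):
            jj = j if j < N // 2 else j - N        # signed half-offset
            a, b = n + jj, n - jj
            if 0 <= a < N and 0 <= b < N:
                g[j] = psi[a] * np.conj(psi[b])
        W[n] = np.fft.fft(g).real * dx / np.pi
    return W

# A Gaussian beam with linear phase exp(2ix) concentrates near x = 0, p = 2.
x = np.linspace(-10, 10, 256)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2) * np.exp(2j * x)
W = wigner(psi, dx)
p = np.pi * np.fft.fftfreq(len(x)) / dx
n0 = int(np.argmin(np.abs(x)))
print("peak wave number near x=0:", p[int(np.argmax(W[n0]))])   # ~2
```

The phase-space localization seen here, position from the envelope and wave vector from the phase, is what makes functionals of the Wigner distribution the natural objects for the high frequency and white noise limits that follow.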
In the asymptotic limit where the small parameters go to zero the Wigner distribution satisfies a stochastic partial differential equation (SPDE), a Liouville-Ito equation, that has the form \\[dW(z,{\\bf x},{\\bf p};k)=\\left(-\\frac{{\\bf p}}{k}\\cdot\ abla_{{\\bf x}}W+\\frac{k ^{2}D}{2}\\Delta_{{\\bf p}}W\\right)dz-\\frac{k}{2}\ abla_{{\\bf p}}W\\cdot d{\\bf B }({\\bf x},z) \\tag{6}\\] where \\({\\bf B}({\\bf x},z)\\) is a vector-valued Brownian field with covariance \\[E\\{B_{i}({\\bf x}_{1},z_{1})B_{j}({\\bf x}_{2},z_{2})\\}=-\\left(\\frac{\\partial^{2 }R_{0}(({\\bf x}_{1}-{\\bf x}_{2}))}{\\partial x_{i}\\partial x_{j}}\\right)z_{1} \\wedge z_{2}, \\tag{7}\\] where \\(z_{1}\\wedge z_{2}=\\min\\{z_{1},z_{2}\\}\\), and in the isotropic case \\[D=-\\frac{R_{0}^{{}^{\\prime\\prime}}(0)}{4},\\quad R_{0}({\\bf x})=\\int_{-\\infty} ^{\\infty}R({\\bf x},s)ds. \\tag{8}\\] In Section 2.5 we analyze this SPDE in the asymptotic limit of small correlation length for \\({\\bf B}({\\bf x},z)\\) in the transverse variables \\({\\bf x}\\), and show that \\(W(z,{\\bf x},{\\bf p};k)\\)'s with different wave vectors \\({\\bf p}\\) are uncorrelated. From this decorrelation property we deduce that for localized sources the time-reversed, back-propagated field is self-averaging, even for narrow-band signals. For distributed sources it is self-averaging only for broad-band signals. We show in detail in Section 3 how the asymptotic theory is used in time reversal. In Appendix A we introduce other scalings which lead to the same averaged SPDE but we do not analyze them in detail. Throughout the paper we define the Fourier transform by \\[\\hat{f}({\\bf k})=\\int d{\\bf x}e^{-i{\\bf k}\\cdot{\\bf x}}f({\\bf x})\\] so that \\[f({\\bf x})=\\int\\frac{d{\\bf k}}{(2\\pi)^{d}}e^{i{\\bf k}\\cdot{\\bf x}}\\hat{f}({ \\bf k}).\\] G. Papanicolaou was supported in part by grants AFOSR F49620-01-1-0465, NSF DMS-9971972 and ONR N00014-02-1-0088, L. Ryzhik by NSF grant DMS-9971742, an Alfred P. Sloan Fellowship and ONR grant N00014-02-1-0089. K. Solna by NSF grant DMS-0093992 and ONR grant N00014-02-1-0090. ## 2 Scaling and asymptotics ### The rescaled problem To carry out the asymptotic analysis we begin by rewriting the Schrodinger equation (2) in dimensionless form. Let \\(L_{z}\\) and \\(L_{\\bf x}\\) be characteristic length scales in the propagation direction, as, for example, the distance \\(L\\) between the source and the transducer array for \\(L_{z}\\) and the array size \\(a\\) for \\(L_{x}\\). We introduce a dimensionless wave number \\(k^{\\prime}=k/k_{0}\\) with \\(k_{0}=\\omega_{0}/c_{0}\\) and \\(\\omega_{0}\\) a central frequency. We rescale \\({\\bf x}\\) and \\(z\\) by \\({\\bf x}=L_{\\bf x}{\\bf x}^{\\prime}\\), \\(z=L_{z}z^{\\prime}\\) and rewrite (2) in the new coordinates dropping primes: \\[2ik\\frac{\\partial\\psi}{\\partial z}+\\frac{L_{z}}{k_{0}L_{\\bf x}^{2}}\\Delta\\psi+ k^{2}k_{0}L_{z}\\sigma\\mu\\left(\\frac{{\\bf x}L_{\\bf x}}{l},\\frac{zL_{z}}{l} \\right)\\psi=0. \\tag{3}\\] The physical parameters that characterize the propagation problem are: (a) the central wave number \\(k_{0}\\), (b) the strength of the fluctuations \\(\\sigma\\), and (c) the correlation length \\(l\\). 
We introduce now three dimensionless variables
\[\delta=\frac{l}{L_{\bf x}},\ \ \varepsilon=\frac{l}{L_{z}},\ \ \gamma=\frac{1}{k_{0}l} \tag{4}\]
which are, respectively, the reciprocal of the **transverse scale** relative to the correlation length, the reciprocal of the **propagation distance** relative to the correlation length, and the central **wave length** relative to the correlation length. We will assume that the dimensionless parameters \(\gamma\), \(\sigma\), \(\varepsilon\) and \(\delta\) are small:
\[\gamma\ \ll\ 1;\ \ \ \sigma\ \ll\ 1;\ \ \ \delta\ \ll\ 1;\ \ \ \varepsilon\ \ll\ 1. \tag{5}\]
This is a regime of parameters where super-resolution phenomena can be observed. To make the scaling more precise we introduce the Fresnel number
\[\theta=\frac{L_{z}}{k_{0}L_{\bf x}^{2}}=\gamma\frac{\delta^{2}}{\varepsilon}. \tag{6}\]
We can then rewrite the Schrodinger equation (3) in the form
\[2ik\theta\psi_{z}+\theta^{2}\Delta_{\bf x}\psi+\frac{k^{2}\delta}{\varepsilon^{1/2}}\mu(\frac{{\bf x}}{\delta},\frac{z}{\varepsilon})\psi=0, \tag{7}\]
provided that we relate \(\varepsilon\) to \(\sigma\) and \(\delta\) by
\[\varepsilon=\sigma^{2/3}\delta^{2/3}. \tag{8}\]
One way that the asymptotic regime (5) can be realized is with the ordering
\[\theta\ \ll\ \varepsilon\ \ll\ \delta\ \ll 1 \tag{9}\]
and \(\gamma\ \ll\ \sigma^{4/3}\delta^{-2/3}\), corresponding to the high-frequency limit. We see from the scaled Schrodinger equation (7) that this regime can be given the following interpretation. We have first a **high frequency** limit \(\theta\to 0\), then a **white noise** limit \(\varepsilon\to 0\), and then a **broad beam** limit \(\delta\to 0\). We will analyze in detail and interpret these limits in the following Sections. Another scaling in which (5) is realized is \(\varepsilon\ \ll\ \theta\ \ll\delta\ \ll 1\). This is a regime in which the white noise limit is carried out first, then the high frequency limit and then the broad beam limit. We do not analyze this case here. Additional comments on scaling are provided in Appendix A.

It is instructive to express the constraints (8) and (9) in terms of the dimensional parameters of the problem. First, both the size of the transverse scale \(L_{\mathbf{x}}\) and the propagation distance \(L_{z}\) should be much larger than the correlation length \(l\) of the medium. Moreover, (8) implies that the longitudinal and transverse scales should be related by
\[\frac{L_{z}}{L_{\mathbf{x}}}=\left(\frac{\delta}{\sigma^{2}}\right)^{1/3}\gg 1\]
so that we are indeed in the beam approximation. The first inequality in (9) implies that
\[\frac{L_{z}}{L_{\mathbf{x}}}\ll\sqrt{k_{0}l}=\frac{1}{\sqrt{\gamma}},\]
and with the above choice of \(L_{z}\) this implies that
\[\frac{\gamma^{3/2}}{\sigma^{2}}\ll\frac{L_{\mathbf{x}}}{l}\ll\frac{1}{\sigma^{2}}.\]

### The high frequency limit

A convenient tool for the study of the high frequency limit, especially in random media, is the Wigner distribution. It is often used in the context of energy propagation [19, 28] but it is also useful in analyzing time reversal phenomena [2, 3, 7]. Let \(\phi_{\theta}(\mathbf{x})\) be a family of functions oscillating on a small scale \(\theta\).
The Wigner distribution is a function of the physical space coordinate \(\mathbf{x}\) and the wave vector \(\mathbf{p}\), defined as
\[W_{\theta}(\mathbf{x},\mathbf{p})=\int\limits_{\mathbb{R}^{d}}\frac{d\mathbf{y}}{(2\pi)^{d}}e^{i\mathbf{p}\cdot\mathbf{y}}\phi_{\theta}(\mathbf{x}-\frac{\theta\mathbf{y}}{2})\overline{\phi_{\theta}(\mathbf{x}+\frac{\theta\mathbf{y}}{2})}. \tag{8}\]
The family \(W_{\theta}\) is bounded in the space of Schwartz distributions \(\mathcal{S}^{\prime}(\mathbb{R}^{d}\times\mathbb{R}^{d})\) if the functions \(\phi_{\theta}\) are uniformly bounded in \(L^{2}(\mathbb{R}^{d})\). Therefore there exists a subsequence \(\theta_{k}\to 0\) such that \(W_{\theta_{k}}\) converges weakly as \(k\to\infty\) to a limit measure \(W(\mathbf{x},\mathbf{p})\). This limit \(W(\mathbf{x},\mathbf{p})\) is non-negative and is customarily interpreted as the limit phase space energy density because
\[|\phi_{\theta_{k}}(\mathbf{x})|^{2}\to\int\limits_{\mathbb{R}^{d}}W(\mathbf{x},\mathbf{p})d\mathbf{p}\quad\text{as }\theta\to 0 \tag{9}\]
in the weak sense. This allows one to think of \(W(\mathbf{x},\mathbf{p})\) as a local energy density. Let \(W_{\theta}(z,\mathbf{x},\mathbf{p})\) be the Wigner distribution of the solution \(\psi\) of the Schrodinger equation (7), in the transversal space-variable \(\mathbf{x}\). A straightforward calculation shows that \(W_{\theta}(z,\mathbf{x},\mathbf{p})\) satisfies in a weak sense the linear evolution equation
\[\frac{\partial W_{\theta}}{\partial z}+\frac{\mathbf{p}}{k}\cdot\nabla_{\mathbf{x}}W_{\theta}=\frac{ik\delta}{2\sqrt{\varepsilon}}\int e^{i\mathbf{q}\cdot\mathbf{x}/\delta}\hat{\mu}\left(\mathbf{q},\frac{z}{\varepsilon}\right)\frac{W_{\theta}\left(\mathbf{p}-\frac{\theta\mathbf{q}}{2\delta}\right)-W_{\theta}\left(\mathbf{p}+\frac{\theta\mathbf{q}}{2\delta}\right)}{\theta}\frac{d\mathbf{q}}{(2\pi)^{d}}. \tag{10}\]
In the limit \(\theta\to 0\) the solution converges weakly in \(\mathcal{S}^{\prime}\), for each realization, to the (weak) solution of the random Liouville equation
\[\frac{\partial W}{\partial z}+\frac{\mathbf{p}}{k}\cdot\nabla_{\mathbf{x}}W+\frac{k}{2\sqrt{\varepsilon}}\nabla_{\mathbf{x}}\mu\left(\frac{\mathbf{x}}{\delta},\frac{z}{\varepsilon}\right)\cdot\nabla_{\mathbf{p}}W=0. \tag{11}\]
The initial condition at \(z=0\) is \(W(0,\mathbf{x},\mathbf{p})=W_{I}(\mathbf{x},\mathbf{p})\), the limit Wigner distribution of the initial wave function.

### The white noise limit

In this Section we take the white noise limit \(\varepsilon\to 0\) in the random Liouville equation (11), whose solution we now denote by \(W_{\varepsilon}\). We can do this using the asymptotic theory of stochastic differential equations and flows [22, 6, 21, 26] as follows.
Using the method of characteristics, the solution of the Liouville equation (11) may be written in the form
\[W_{\varepsilon}(z,\mathbf{x},\mathbf{p})=W_{I}(\mathbf{X}_{\varepsilon}(z;\mathbf{x},\mathbf{p}),\mathbf{P}_{\varepsilon}(z;\mathbf{x},\mathbf{p})),\]
where the processes \(\mathbf{X}_{\varepsilon}(z;\mathbf{x},\mathbf{p})\) and \(\mathbf{P}_{\varepsilon}(z;\mathbf{x},\mathbf{p})\) are solutions of the characteristic equations
\[\frac{d\mathbf{X}_{\varepsilon}}{dz}=-\frac{1}{k}\mathbf{P}_{\varepsilon};\quad\frac{d\mathbf{P}_{\varepsilon}}{dz}=-\frac{k}{2\sqrt{\varepsilon}}\nabla_{\mathbf{x}}\mu\left(\frac{\mathbf{X}_{\varepsilon}}{\delta},\frac{z}{\varepsilon}\right)\]
with the initial conditions \(\mathbf{X}_{\varepsilon}(0)=\mathbf{x}\) and \(\mathbf{P}_{\varepsilon}(0)=\mathbf{p}\). The asymptotic theory of random differential equations with rapidly oscillating coefficients implies that, under suitable conditions on \(\mu\), in the limit \(\varepsilon\to 0\) the processes \(\mathbf{X}_{\varepsilon}\), \(\mathbf{P}_{\varepsilon}\) converge weakly (in the probabilistic sense), and uniformly on compact sets in \(\mathbf{x},\mathbf{p}\), to the limit processes \(\mathbf{X}(z)\), \(\mathbf{P}(z)\) that satisfy a system of stochastic differential equations
\[d\mathbf{P}=-\frac{k}{2}d\mathbf{B}(z),\quad d\mathbf{X}=-\frac{1}{k}\mathbf{P}\,dz,\quad\mathbf{X}(0)=\mathbf{x},\ \ \mathbf{P}(0)=\mathbf{p}. \tag{12}\]
The random process \(\mathbf{B}(z)\) is a Brownian motion with the covariance function
\[E\left\{B_{i}(z_{1})B_{j}(z_{2})\right\}=-\frac{\partial^{2}R_{0}(0)}{\partial x_{i}\partial x_{j}}\,z_{1}\wedge z_{2}=\delta_{ij}\left(-R_{0}^{{}^{\prime\prime}}(0)\right)z_{1}\wedge z_{2}\]
in the isotropic case, where
\[R_{0}(\mathbf{x})=\int_{-\infty}^{\infty}R(\mathbf{x},s)ds \tag{13}\]
is a function of \(|\mathbf{x}|\). This implies that the average Wigner distribution \(W_{\varepsilon}^{(1)}(z,\mathbf{x},\mathbf{p})=E\left\{W_{\varepsilon}(z,\mathbf{x},\mathbf{p})\right\}\) converges as \(\varepsilon\to 0\), uniformly on compact sets, to the solution of the advection-diffusion equation in phase space
\[\frac{\partial W^{(1)}}{\partial z}+\frac{\mathbf{p}}{k}\cdot\nabla_{\mathbf{x}}W^{(1)}=\frac{k^{2}D}{2}\Delta_{\mathbf{p}}W^{(1)} \tag{14}\]
with the initial data \(W^{(1)}(0,\mathbf{x},\mathbf{p})=W_{I}(\mathbf{x},\mathbf{p})\). Here the diffusion coefficient \(D\) is given by
\[D=-\frac{R_{0}^{{}^{\prime\prime}}(0)}{4}. \tag{15}\]
The one-point moments \(E\left\{[W_{\varepsilon}(z,\mathbf{x},\mathbf{p})]^{N}\right\}\) converge as \(\varepsilon\to 0\) to the functions \(W^{(N)}(z,\mathbf{x},\mathbf{p})\) that satisfy the same equation (14) but with the initial data \(W^{(N)}(0,\mathbf{x},\mathbf{p})=[W_{I}(\mathbf{x},\mathbf{p})]^{N}\). This is similar to the spot dancing phenomenon [11], where all one-point moments are governed by the same Brownian motion. In particular we have that
\[W^{(2)}(z,\mathbf{x},\mathbf{p})\neq\left[W^{(1)}(z,\mathbf{x},\mathbf{p})\right]^{2}\]
so that the process \(W_{\varepsilon}\) does not converge to a deterministic one, in the strong sense, pointwise.
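The limit dynamics are easy to simulate directly. The following Euler-Maruyama sketch (function name and parameter values are our own illustrative assumptions) checks the one-point wave vector spread against the diffusion coefficient in (14):

```python
import numpy as np

def simulate_characteristics(z_max, dz, k=1.0, D=0.5, n_paths=5000, seed=0):
    """Monte Carlo for dP = -(k/2) dB(z), dX = -(P/k) dz, started at x = p = 0.

    Each component of B has variance rate -R0''(0) = 4D, cf. eq. (15),
    so Var{P(z)} should equal k^2 * D * z, matching eq. (14).
    """
    rng = np.random.default_rng(seed)
    X = np.zeros(n_paths)
    P = np.zeros(n_paths)
    for _ in range(int(z_max / dz)):
        dB = rng.normal(scale=np.sqrt(4.0 * D * dz), size=n_paths)
        P -= 0.5 * k * dB
        X -= P / k * dz
    return X, P

X, P = simulate_characteristics(z_max=1.0, dz=1e-3)
print(P.var(), "vs k^2 * D * z =", 0.5)
```

Independent paths here sample independent realizations of the single driving Brownian motion, which is all that the one-point statistics depend on; the coupling of nearby points through the same Brownian field is what the multi-point analysis below captures.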
### Multi-point moment equations

As in the previous Section we may also study the white noise limit \(\varepsilon\to 0\) of the higher moments of \(W_{\varepsilon}(z,\mathbf{x},\mathbf{p})\) at different points,
\[W_{\varepsilon}^{(N)}(z,\mathbf{x}^{1},\ldots,\mathbf{x}^{N},\mathbf{p}^{1},\ldots,\mathbf{p}^{N})=E\left\{[W_{\varepsilon}(z,\mathbf{x}^{1},\mathbf{p}^{1})]^{r_{1}}\cdot\ldots\cdot[W_{\varepsilon}(z,\mathbf{x}^{N},\mathbf{p}^{N})]^{r_{N}}\right\}.\]
Here the points \((\mathbf{x}^{m},\mathbf{p}^{m})\) are all distinct: \((\mathbf{x}^{n},\mathbf{p}^{n})\neq(\mathbf{x}^{m},\mathbf{p}^{m})\) for \(n\neq m\). We may account for moments that have different powers of \(W_{\varepsilon}\) at different points by taking different powers \(r_{j}\) of \(W_{\varepsilon}(\mathbf{x}^{j},\mathbf{p}^{j})\). We now consider the joint process \((\mathbf{X}_{\varepsilon}(z;\mathbf{x}^{m},\mathbf{p}^{m}),\mathbf{P}_{\varepsilon}(z;\mathbf{x}^{m},\mathbf{p}^{m}))\), \(m=1,\ldots,N\). As \(\varepsilon\to 0\) it converges to the solution of the system of stochastic differential equations
\[dP_{i}^{m}=-\frac{k}{2}\sum_{n=1}^{N}\sum_{j=1}^{d}\sigma_{ij}\left(\frac{\mathbf{X}^{m}-\mathbf{X}^{n}}{\delta}\right)dB_{j}^{n}(z),\quad d\mathbf{X}^{m}=-\frac{1}{k}\mathbf{P}^{m}dz, \tag{16}\]
with the initial conditions
\[\mathbf{X}^{m}(0)=\mathbf{x}^{m},\ \ \mathbf{P}^{m}(0)=\mathbf{p}^{m}.\]
The \(d\)-dimensional Brownian motions \(\mathbf{B}^{m}\), \(m=1,\ldots,N\), have the standard covariance tensor
\[E\left\{B_{i}^{m}(z_{1})B_{j}^{n}(z_{2})\right\}=\delta_{mn}\delta_{ij}z_{1}\wedge z_{2},\ \ i,j=1,\ldots,d,\ m,n=1,\ldots,N.\]
The symmetric tensor \(\sigma_{ij}(\mathbf{x})\) is determined from
\[\sum_{r=1}^{d}\sigma_{ir}(\mathbf{x})\sigma_{jr}(\mathbf{x})=-\left(\frac{\partial^{2}R_{0}(\mathbf{x})}{\partial x_{i}\partial x_{j}}\right). \tag{17}\]
We assume that equation (17) has a solution that is differentiable in \(\mathbf{x}\), which is compatible with the fact that the matrix on the right is, by Bochner's theorem, non-negative definite.
The moments \(W_{\varepsilon}^{(N)}\) converge as \(\varepsilon\to 0\) to the solution of the advection-diffusion equation
\[\frac{\partial W^{(N)}}{\partial z}+\sum_{m=1}^{N}\frac{\mathbf{p}^{m}}{k}\cdot\nabla_{\mathbf{x}^{m}}W^{(N)}=\frac{k^{2}D}{2}\sum_{m=1}^{N}\Delta_{\mathbf{p}^{m}}W^{(N)}-\frac{k^{2}}{4}\sum_{\substack{n,m=1\\ n>m}}^{N}\sum_{i,j=1}^{d}\frac{\partial^{2}R_{0}((\mathbf{x}^{n}-\mathbf{x}^{m})/\delta)}{\partial x_{i}\partial x_{j}}\frac{\partial^{2}W^{(N)}}{\partial p_{i}^{n}\partial p_{j}^{m}} \tag{18}\]
with the initial data
\[W^{(N)}(0,\mathbf{x}^{1},\ldots,\mathbf{x}^{N},\mathbf{p}^{1},\ldots,\mathbf{p}^{N})=[W_{I}(\mathbf{x}^{1},\mathbf{p}^{1})]^{r_{1}}\cdot\ldots\cdot[W_{I}(\mathbf{x}^{N},\mathbf{p}^{N})]^{r_{N}}.\]
From (18) we can calculate moments of functionals of \(W_{\varepsilon}\) of the form
\[W_{\varepsilon,\phi}(z)=\int W_{\varepsilon}(z,\mathbf{x},\mathbf{p})\phi(\mathbf{x},\mathbf{p})d\mathbf{x}d\mathbf{p}.\]
For example, as \(\varepsilon\to 0\) we have that
\[E\left\{[W_{\varepsilon,\phi}(z)]^{2}\right\}\to\int W^{(2)}(z,\mathbf{x}_{1},\mathbf{p}_{1},\mathbf{x}_{2},\mathbf{p}_{2})\phi(\mathbf{x}_{1},\mathbf{p}_{1})\phi(\mathbf{x}_{2},\mathbf{p}_{2})d\mathbf{x}_{1}d\mathbf{p}_{1}d\mathbf{x}_{2}d\mathbf{p}_{2}.\]
A convenient way to deal not only with the limit of \(N\)-point moments but with the full limit process \(W(z,\mathbf{x},\mathbf{p})\), at all points \(\mathbf{x},\mathbf{p}\) simultaneously, is provided by the theory of stochastic flows [23]. For this we need to show that \(W_{\varepsilon}(z,\mathbf{x},\mathbf{p})\) converges weakly (in the probabilistic sense) as \(\varepsilon\to 0\) to the process \(W(z,\mathbf{x},\mathbf{p})\) that satisfies the stochastic partial differential equation
\[dW_{\delta}=\left[-\frac{\mathbf{p}}{k}\cdot\nabla_{\mathbf{x}}W_{\delta}+\frac{k^{2}D}{2}\Delta_{\mathbf{p}}W_{\delta}\right]dz-\frac{k}{2}\nabla_{\mathbf{p}}W_{\delta}\cdot d\mathbf{B}(\frac{\mathbf{x}}{\delta},z). \tag{19}\]
Here the Gaussian random field \(\mathbf{B}(\mathbf{x},z)\) has the covariance
\[E\{B_{i}(\mathbf{x}_{1},z_{1})B_{j}(\mathbf{x}_{2},z_{2})\}=-\left(\frac{\partial^{2}R_{0}(\mathbf{x}_{1}-\mathbf{x}_{2})}{\partial x_{i}\partial x_{j}}\right)z_{1}\wedge z_{2}.\]
We call equation (19) the Liouville-Ito equation. It allows us to treat all equations of the form (18) simultaneously and is a convenient tool for simulation and analysis. The dimensionless wave number \(k\) can be scaled out of (19) by writing \(W(z,\mathbf{x},\mathbf{p};k)=W(z,\mathbf{x},\frac{\mathbf{p}}{k};1)\) so that we need only consider (19) with \(k=1\). We will use this scaling in Section 3.1. Note that unlike the single Brownian motion (12) that governs the evolution of the one-point moments, the Brownian field that enters the SPDE (19) depends explicitly on the dimensionless correlation length \(\delta\) in the transverse direction. Therefore the limit process also depends on \(\delta\) and we denote it by \(W_{\delta}\).

### Statistical stability in the broad beam limit

We will now consider the limit \(\delta\to 0\) of the process \(W_{\delta}(z,\mathbf{x},\mathbf{p})\) when the transverse dimension \(d\geq 2\). We are particularly interested in the behavior of functionals of \(W_{\delta}\) as \(\delta\to 0\). The analysis of the one-point moments in Section 2.3 showed that they do not depend on \(\delta\) and are governed by a standard Brownian motion.
Therefore the process \\(W_{\\delta}\\) does not have a pointwise deterministic limit. However, we will show that functionals of \\(W_{\\delta}\\) become deterministic in the limit \\(\\delta\\to 0\\). We refer to this phenomenon as **statistical stabilization** and give conditions for it to happen. Stabilization plays an important role in time reversal, imaging and other applications, as discussed in the Introduction. **Theorem 2.1**: _Assume that \\(\\phi({\\bf p})\\) is a smooth test function of rapid decay, the transverse correlation function \\(R_{0}({\\bf x})\\) has compact support, the initial Wigner distribution \\(W_{I}({\\bf x},{\\bf p})\\) is uniformly bounded and Lipschitz continuous, and the transverse dimension \\(d\\geq 2\\). Define_ \\[I_{\\delta,\\phi}(z,{\\bf x})=\\int W_{\\delta}(z,{\\bf x},{\\bf p})\\phi({\\bf p})d{ \\bf p}. \\tag{20}\\] _Then_ \\[\\lim_{\\delta\\to 0}E\\left\\{I_{\\delta,\\phi}^{2}(z,{\\bf x})\\right\\}=E^{2}\\left\\{I_{ \\delta,\\phi}(z,{\\bf x})\\right\\} \\tag{21}\\] _where \\(E\\left\\{I_{\\delta,\\phi}(z,{\\bf x})\\right\\}\\) is independent of \\(\\delta\\)._ The assumption of compact support for \\(R_{0}({\\bf x})\\) is not essential but simplifies the proof. We have already noted that the Wigner distribution \\(W_{\\delta}\\) itself does not stabilize. However, (21) implies that \\[\\lim_{\\delta\\to 0}Var\\left\\{I_{\\delta,\\phi}\\right\\}=\\lim_{\\delta\\to 0}E \\left\\{I_{\\delta,\\phi}^{2}(z)\\right\\}-E^{2}\\left\\{I_{\\delta,\\phi}\\right\\}=0. \\tag{22}\\] Therefore, any smooth functional of the form (20) stabilizes in the limit \\(\\delta\\to 0\\), that is, \\[I_{\\delta,\\phi}\\approx E\\{I_{\\delta,\\phi}\\}, \\tag{23}\\]in mean square, and the expectation of \\(I_{\\delta,\\phi}\\) does not depend on \\(\\delta\\). We prove Theorem 2.1 in Appendix B. In the applications of the asymptotic theory to time reversal we need not only functionals \\(I_{\\delta,\\phi}\\) of the form (20) but also of the form \\[J_{\\delta}(z,\\mathbf{x})=\\int W_{\\delta}(z,\\mathbf{x},\\mathbf{p})d\\mathbf{p}. \\tag{24}\\] We need to show that such functionals are well defined with probability one and to analyze their behavior as \\(\\delta\\to 0\\). This is done in the following theorem. Under the same hypotheses of Theorem 2.1 and with a non-negative initial Wigner distribution \\(W_{I}\\geq 0\\), the functional \\(J_{\\delta}\\) is bounded, continuous and non-negative with probability one. In the limit \\(\\delta\\to 0\\) we have \\[\\lim_{\\delta\\to 0}E\\left\\{J_{\\delta}^{2}(z,\\mathbf{x})\\right\\}=E^{2}\\left\\{J_{ \\delta}(z,\\mathbf{x})\\right\\} \\tag{25}\\] where \\(E\\left\\{J_{\\delta}(z,\\mathbf{x})\\right\\}\\) does not depend on \\(\\delta\\). The proof of this theorem is given in Appendix B. What is important in both Theorems 2.1 and 2.2 is that we do integrate over the wave numbers \\(\\mathbf{p}\\) because there is no pointwise stabilization. In time reversal applications, as in section 3.1, we actually need Theorem 2.2 when the integration is only over a line segment in \\(\\mathbf{p}\\) space, and the dimension of the latter is \\(d\\geq 2\\). Its proof follows from the one of Theorem 2.2. ## 3 Application to time reversal in a random medium We will now apply these results to the time reversal problem [7] described in the Introduction. A wave emitted from the plane \\(z=0\\) propagates through the random medium and is recorded on the time reversal mirror at \\(L\\). It is then _reversed_ in time and re-emitted into the medium. 
The back-propagated signal refocuses approximately at the source, as shown in Figure 1. There are two striking features of this refocusing in random media. One is that it is statistically stable, that is, it does not depend on the particular realization. The other is super-resolution, that is, the refocused spot is tighter than in the deterministic case. We discuss these two issues in this section. ### The time-reversed and back-propagated field We assume that the wave source at \\(z=0\\) is distributed on a scale \\(\\sigma_{s}\\) around a point \\(\\mathbf{x}_{0}\\), that is, \\[\\psi_{\\theta}(z=0,\\mathbf{x};k)=e^{i\\mathbf{p}_{0}\\cdot(\\mathbf{x}-\\mathbf{x}_ {0})/\\theta}\\psi_{0}(\\frac{\\mathbf{x}-\\mathbf{x}_{0}}{\\sigma_{s}};k),\\] where \\(\\psi_{0}\\) is a rapidly decaying and smooth function of \\(\\mathbf{x}\\) and \\(k\\). The width of the source \\(\\sigma_{s}\\) could be large or small compared to the Fresnel number \\(\\theta\\), and this affects the statistical stability of the time-reversed, back-propagated field, as we explain in this Section. The Green's function, \\(G_{\\theta}(z,\\mathbf{x};\\xi)\\), solves the parabolic wave equation (5) with a point source at \\((\\mathbf{x},z)=(\\xi,0)\\). Using its symmetry properties and the fact that time reversal \\(t\\to-t\\) is equivalent to \\(\\omega\\to-\\omega\\) or \\(k\\to-k\\), the back-propagated, time-reversed field on the plane of the source has the form \\[\\psi_{\\theta}^{B}(L,\\mathbf{x}_{0},\\xi;k)=\\] \\[\\iint G_{\\theta}(L,\\mathbf{x};\\mathbf{x}_{0}+\\theta\\xi;k) \\overline{G_{\\theta}(L,\\mathbf{x}_{0}+\\eta;\\eta;k)}e^{i\\mathbf{p}_{0}\\cdot \\eta/\\theta}\\psi_{0}(\\frac{\\eta}{\\sigma_{s}};-k)\\chi_{A}(\\mathbf{x})d \\mathbf{x}d\\eta.\\] The complex field amplitude \\(\\psi_{\\theta}^{B}\\) is evaluated at \\(\\mathbf{x}_{0}+\\theta\\xi\\), in the plane \\(z=0\\). We scale the observation point off \\(\\mathbf{x}_{0}\\) by \\(\\theta\\) because we expect that the spot size of the refocused signal will be comparable to the lateral spread of the initial wave function. We denote with \\(\\chi_{A}\\) the aperture function of the time reversal mirror. It could be its characteristic function, occupying the region \\(A\\) in the plane \\(z=L\\) \\[\\chi_{A}({\\bf x})=\\cases{1,&${\\bf x}\\in A$\\cr 0,&${\\bf x}\ otin A$},\\] or a more general aperture function like a Gaussian. The time reversal mirror is located in the plane \\(z=L\\). 
After changing variables, the back-propagated field is given by

\[\psi^{B}_{\theta}(L,{\bf x}_{0},\xi;k) =\theta^{d}\int G_{\theta}(L,{\bf x};{\bf x}_{0}+\theta\xi;k)\overline{G_{\theta}(L,{\bf x};{\bf x}_{0}+\theta\eta;k)}e^{i{\bf p}_{0}\cdot\eta}\psi_{0}(\frac{\theta\eta}{\sigma_{s}};-k)\chi_{A}({\bf x})d{\bf x}d\eta\]
\[=\theta^{d}\int G_{\theta}(L,{\bf x}_{0}+\theta\xi,{\bf x};k)\overline{G_{\theta}(L,{\bf x}_{0}+\theta\eta,{\bf x};k)}e^{i{\bf p}_{0}\cdot\eta}\psi_{0}(\frac{\theta\eta}{\sigma_{s}};-k)\chi_{A}({\bf x})d{\bf x}d\eta.\]

It is now convenient to introduce the Wigner distribution

\[W_{\theta}(z,{\bf x}_{0},{\bf p};k)=\int\frac{\theta^{d}e^{i{\bf p}\cdot{\bf y}}}{(2\pi)^{d}}G_{\theta}(z,{\bf x}_{0}-{\bf y}\theta/2,{\bf x};k)\overline{G_{\theta}(z,{\bf x}_{0}+{\bf y}\theta/2,{\bf x};k)}\chi_{A}({\bf x})d{\bf x}d{\bf y}, \tag{10}\]

and express the back-propagated field as

\[\psi^{B}_{\theta}(L,{\bf x}_{0},\xi;k)=\int e^{i{\bf p}\cdot(\xi-\eta)}W_{\theta}(L,{\bf x}_{0}+\frac{\theta(\xi+\eta)}{2},{\bf p};k)e^{i{\bf p}_{0}\cdot\eta}\psi_{0}(\frac{\theta\eta}{\sigma_{s}};-k)d{\bf p}d\eta. \tag{11}\]

The Wigner distribution is scaled here differently from (8) because of the way we have scaled the source function. In the high frequency limit \(\theta\to 0\), \(W_{\theta}(z,{\bf x},{\bf p};k)\) tends to \(W(z,{\bf x},{\bf p};k)\), which solves the random Liouville equation. Then, in the white noise limit, it solves the Liouville-Ito equation (19). The mean of \(W\) solves (14), in the high-frequency and white noise limit, with initial data

\[W(0,{\bf x},{\bf p};k)=\frac{\chi_{A}({\bf x})}{(2\pi)^{d}}. \tag{12}\]

Let

\[\beta=\frac{\sigma_{s}}{\theta} \tag{13}\]

be the ratio of the width of the source to the Fresnel number and assume that it remains fixed as \(\theta\to 0\). In this limit, the time-reversed and back-propagated field is given by

\[\psi^{B}(L,{\bf x}_{0},\xi;k) =\int e^{i{\bf p}\cdot(\xi-\eta)}W(L,{\bf x}_{0},\frac{{\bf p}}{k})e^{i{\bf p}_{0}\cdot\eta}\psi_{0}(\eta/\beta;-k)d{\bf p}d\eta\]
\[=\int e^{i{\bf p}\cdot\xi}W(L,{\bf x}_{0},\frac{{\bf p}}{k})\beta^{d}\hat{\psi}_{0}(\beta({\bf p}-{\bf p}_{0});-k)d{\bf p}.\]

Here we have used the scaling \(W(z,{\bf x},{\bf p};k)=W(z,{\bf x},\frac{{\bf p}}{k};1)\) in (19) and we have dropped the last argument \(k=1\).

### Statistical stability

From this form of the back-propagated and time-reversed field we see that when \(\beta=O(1)\) (or small), which means that \(\sigma_{s}\) is comparable to the Fresnel number \(\theta\) (or smaller), we can apply the results of Section 2.5 and conclude that it is statistically stable or self-averaging in the broad beam limit \(\delta\to 0\). Theorems 2.1 and 2.2 are exactly what is needed for this. The fact that the initial function (12) may be discontinuous at the boundary of the set \(A\) is not a problem. This is because we may approximate the function \(\chi_{A}\) from above and below by two smooth positive functions, to which we may apply Theorems 2.1 and 2.2, and then use the maximum principle to deduce the decorrelation property when the initial data is \(\chi_{A}\).
We have, therefore,

\[\psi^{B}(L,{\bf x}_{0},\xi;k)\approx\langle\psi^{B}(L,{\bf x}_{0},\xi;k)\rangle\]

in the sense of convergence in probability or in mean square, in the broad beam limit \(\delta\to 0\), for each fixed frequency \(\omega=kc_{0}\). Statistical stability of time reversal does not depend on having a broad-band signal if the source is localized in space. This is true in the regime of parameters reflected by the scaling \(\theta\ll\varepsilon\ll\delta\) considered here, which is a high frequency regime encountered in optical or infrared applications like ladar. The numerical experiments in [7] and [8] are closer to the regime of ultrasound experiments [16] and of underwater sound propagation, which is different from the high frequency regime analyzed here.

For distributed sources the parameter \(\beta\) is large and we cannot apply Theorems 2.1 and 2.2 to the fixed-frequency expression for \(\psi^{B}\) above. It is necessary for statistical stability in this case to have broad-band signals. For \(\beta\) large the time-reversed and back-propagated signal in the time domain has the form

\[\psi^{B}(L,{\bf x}_{0},\xi,t) =(2\pi)^{d}e^{i({\bf p}_{0}\cdot\xi-k_{0}c_{0}t)}\psi_{0}(\xi/\beta)\int W(L,{\bf x}_{0},\frac{{\bf p}_{0}}{k_{0}+k})e^{-ikc_{0}t}\hat{g}(-c_{0}k)\frac{c_{0}dk}{2\pi} \tag{14}\]
\[=(2\pi)^{d}e^{i({\bf p}_{0}\cdot\xi-\omega_{0}t)}\psi_{0}(\xi/\beta)\int W(L,{\bf x}_{0},\frac{c_{0}{\bf p}_{0}}{\omega_{0}+\omega})e^{-i\omega t}\hat{g}(-\omega)\frac{d\omega}{2\pi}\]

with \(\hat{g}(c_{0}k)\) the Fourier transform of the initial pulse relative to the central frequency \(\omega_{0}=c_{0}k_{0}\). This means that we have replaced the actual wave number \(k\) by \(k_{0}+k\), or \(\omega\) by \(\omega_{0}+\omega\), with the new \(\omega\), the baseband frequency, bounded by the bandwidth \(\Omega\): \(|\omega|\leq\Omega<\omega_{0}\). The integration is over the bandwidth \([-\Omega,\Omega]\). This integral is well defined with probability one and is self-averaging in the broad beam limit \(\delta\to 0\) by Theorem 2.2 and the remark following it. We will compute its average in Section 3.4.
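The statistical stability just invoked can be made concrete with a toy Monte Carlo computation. The sketch below is our own illustration, not the model of this paper: it replaces \(W_{\delta}\) along a one-dimensional cut in \({\bf p}\) by a surrogate random field whose fluctuations have unit variance and correlation length \(\delta\) in \({\bf p}\), and it shows that the variance of the smoothed functional \(I_{\delta,\phi}\) of Theorem 2.1 decays as \(\delta\to 0\) while the mean does not depend on \(\delta\).

```python
import numpy as np

rng = np.random.default_rng(0)

def smoothed_functional(delta, n_mc=200, n_p=1024):
    """Monte Carlo samples of I = integral W(p) phi(p) dp for a surrogate
    W(p) = W_mean(p) * (1 + fluctuation with correlation length delta)."""
    p = np.linspace(-5.0, 5.0, n_p)
    dp = p[1] - p[0]
    w_mean = np.exp(-p**2)            # stand-in for E{W_delta}; delta-independent
    phi = np.exp(-p**2 / 2.0)         # smooth test function of rapid decay
    kern = np.exp(-0.5 * (p / delta)**2)
    kern /= np.linalg.norm(kern)      # unit-variance fluctuations after smoothing
    samples = np.empty(n_mc)
    for i in range(n_mc):
        fluct = np.convolve(rng.standard_normal(n_p), kern, mode="same")
        samples[i] = np.sum(w_mean * (1.0 + fluct) * phi) * dp
    return samples

for delta in (1.0, 0.2, 0.04):
    s = smoothed_functional(delta)
    print(f"delta={delta:4.2f}  mean={s.mean():.4f}  var={s.var():.2e}")
```

In this one-dimensional toy the variance drops roughly in proportion to \(\delta\), which is the smoothing mechanism behind (21): the wave-number integral averages over many decorrelated fluctuations.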
### The effective aperture of the array

From the explicit expression for the Green's function of (14), with \(k=1\),

\[U(z,{\bf x},{\bf p};{\bf x}^{0},{\bf p}^{0})=\int\frac{d{\bf w}d{\bf r}}{(2\pi)^{2d}}\exp\left(i{\bf w}\cdot({\bf x}-{\bf x}^{0})+i{\bf r}\cdot({\bf p}-{\bf p}^{0})-iz{\bf w}\cdot{\bf p}^{0}\right)\times\exp\left(-\frac{Dz}{2}\left[r^{2}+z{\bf r}\cdot{\bf w}+\frac{{\bf w}^{2}z^{2}}{3}\right]\right),\]

and with the time reversal mirror a distance \(L\) from the source and \({\bf x}_{0}=0\), it follows from the formula for the back-propagated field in Section 3.1 that

\[\langle\psi^{B}(L,\xi;k)\rangle=\int\frac{d{\bf p}d{\bf y}d{\bf w}}{(2\pi)^{2d}}e^{i{\bf p}\cdot\xi}\beta^{d}\hat{\psi}_{0}(\beta({\bf p}-{\bf p}_{0});-k)\chi_{A}({\bf y})\exp\left[-i{\bf w}\cdot{\bf y}-iL{\bf w}\cdot\frac{{\bf p}}{k}-\frac{DL^{3}w^{2}}{6}\right].\]

The high-frequency, white-noise limit of the _self-averaging_ time-reversed and back-propagated field is therefore given by a convolution

\[\langle\psi^{B}(L,\xi;k)\rangle=\psi^{\beta}_{0}(\cdot,-k)*{\mathcal{W}}(\cdot)(\xi) \tag{10}\]

with

\[{\mathcal{W}}(\eta)={\mathcal{W}}(\eta;L,k)=\frac{k^{d}}{(2\pi L)^{d}}\hat{\chi}_{A}(\eta k/L)\ e^{-\eta^{2}/(2\sigma_{M}^{2})}, \tag{11}\]

the **point spread function**, and

\[\psi^{\beta}_{0}(\eta,-k)=e^{i{\bf p}_{0}\cdot\eta}\psi_{0}(\eta/\beta)\hat{g}(-kc_{0}) \tag{12}\]

with \(\psi_{0}(\eta/\beta)\) the spatial source distribution function and \(\hat{g}\) the Fourier transform of the pulse shape function \(g(t)\). This notation is consistent with the time-domain formula (14) of Section 3.2, with the time factor \(e^{-ik_{0}c_{0}t}\) omitted, along with the horizontal phase \(e^{ikz}\) which cancels in time reversal. We have also introduced the refocused **spot size** with multipathing

\[\sigma_{M}^{2}=\frac{3}{DLk^{2}}=\frac{L^{2}}{k^{2}a_{e}^{2}} \tag{13}\]

and the **effective aperture** \(a_{e}=a_{e}(L)\),

\[a_{e}=\sqrt{\frac{DL^{3}}{3}}, \tag{14}\]

which we now interpret. If the time reversal mirror is the whole plane \(z=L\), then \(\chi_{A}\equiv 1\) and

\[\left\langle\psi^{B}(L,\xi;k)\right\rangle=\psi^{\beta}_{0}(\xi,-k).\]

In this case the back-propagated field is the source field reversed in time, both in the random and in the deterministic case. The point spread function \({\mathcal{W}}\) determines the resolution of the refocused signal for a time reversal mirror of finite aperture. Multipathing in a random medium gives rise to the Gaussian factor in (11), whose variance is \(\sigma_{M}^{2}\) of (13). We can give an interpretation of this variance, or spot size, as follows.
For a square time reversal mirror of size \(a\), the Fourier transform of \(\chi_{A}\) is a product of sinc functions, so that in \(d=2\)

\[{\mathcal{W}}(\eta_{1},\eta_{2};L,k)=\frac{\sin(\eta_{1}ka/(2L))}{\pi\eta_{1}}\,\frac{\sin(\eta_{2}ka/(2L))}{\pi\eta_{2}}\,e^{-(\eta_{1}^{2}+\eta_{2}^{2})/(2\sigma_{M}^{2})}.\]

For a deterministic medium (\(D=0\)) the Rayleigh resolution is the distance \(\eta_{F}\) to the first zero of the sine, the first Fresnel zone in either direction,

\[\eta_{F}=\frac{2\pi L}{ka}=\frac{\lambda L}{a}.\]

In general, if \(\chi_{A}\) is supported by a region of size \(a\) we may define the Fresnel resolution, or the Fresnel **spot size**, by

\[\sigma_{F}=\frac{L}{ka}.\]

For **weak multipathing** we have \(\sigma_{M}\gg\sigma_{F}\) and

\[{\cal W}(\eta;L,k)\sim\left(\frac{k}{2\pi L}\right)^{d}\hat{\chi}_{A}(\eta k/L),\]

which is the diffractive point spread function whose integral over \(\eta\in R^{d}\) is one. If, however, we have **strong multipathing**, \(\sigma_{M}\ll\sigma_{F}\), then we may approximate \(\hat{\chi}_{A}(\eta k/L)\) by \(\hat{\chi}_{A}(0)=a^{d}\) in (11), and the point spread function becomes

\[{\cal W}(\eta;L,k)\sim\left(\frac{ka}{2\pi L}\right)^{d}e^{-|\eta|^{2}/(2\sigma_{M}^{2})}.\]

By writing the variance (spot size) \(\sigma_{M}^{2}\) in the form (13) we can interpret \(a_{e}\) as an effective aperture of the time reversal mirror. We can rewrite the point spread function in terms of a normalized Gaussian as

\[{\cal W}(\eta;L,k)\sim\left(\frac{\sigma_{M}}{\sqrt{2\pi}\sigma_{F}}\right)^{d}\frac{e^{-|\eta|^{2}/(2\sigma_{M}^{2})}}{(2\pi\sigma_{M}^{2})^{d/2}}\]

with the factor in front of the normalized Gaussian also equal to

\[\left(\frac{a}{\sqrt{2\pi}a_{e}}\right)^{d}.\]

This means that when there is strong multipathing the integral of the point spread function over \(R^{d}\) is not equal to one but to this ratio, which can be much smaller than one if \(a_{e}\gg a\). Multipathing produces a tighter point spread function but there is also loss of energy, as of course we should expect. A more direct interpretation of the effective aperture can be given if the time reversal mirror has a Gaussian aperture function

\[\chi_{A}(\eta)=e^{-|\eta|^{2}/(2a^{2})}.\]

The point spread function \({\cal W}\) now has the form

\[{\cal W}(\eta;L,k)=\left(\frac{ka}{\sqrt{2\pi}L}\right)^{d}e^{-|\eta|^{2}/(2\sigma_{g}^{2})},\]

with

\[\sigma_{g}=\frac{L}{ka_{g}}\]

and the effective aperture \(a_{g}\) given by

\[a_{g}=\sqrt{a^{2}+\frac{DL^{3}}{3}}=\sqrt{a^{2}+a_{e}^{2}}.\]

Clearly, \(a_{g}\approx a_{e}\) when there is strong multipathing and \(a_{e}\gg a\). Written with a normalized Gaussian, the point spread function for a Gaussian aperture has the form

\[{\cal W}(\eta)=\left(\frac{a}{a_{g}}\right)^{d}\frac{e^{-|\eta|^{2}/(2\sigma_{g}^{2})}}{(2\pi\sigma_{g}^{2})^{d/2}}.\]
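To get a feeling for the magnitudes involved, the short script below evaluates the spot-size and aperture formulas for invented parameter values; none of the numbers are taken from the paper.

```python
import numpy as np

# Illustrative numbers only: wavelength 1 cm, range L = 1 km, mirror size
# a = 1 m, and three values of the diffusion coefficient D (D = 0 is the
# deterministic medium). None of these values come from the paper.
lam, L, a = 1.0e-2, 1.0e3, 1.0
k = 2.0 * np.pi / lam

for D in (0.0, 1.0e-9, 1.0e-7):
    a_e = np.sqrt(D * L**3 / 3.0)                    # effective aperture (14)
    a_g = np.sqrt(a**2 + a_e**2)                     # Gaussian-mirror version
    sigma_F = L / (k * a)                            # Fresnel spot size
    sigma_M = L / (k * a_e) if a_e > 0 else np.inf   # multipathing spot size (13)
    sigma_g = L / (k * a_g)                          # refocused spot, Gaussian mirror
    regime = "strong" if sigma_M < sigma_F else "weak"
    print(f"D={D:7.1e}  a_e={a_e:6.2f} m  sigma_M={sigma_M:6.3f} m  "
          f"sigma_g={sigma_g:6.3f} m  ({regime} multipathing; sigma_F={sigma_F:.3f} m)")
```

With \(D=10^{-7}\) the effective aperture \(a_{e}\approx 5.8\) m exceeds the physical aperture, and the refocused spot falls well below the Fresnel spot size \(\sigma_{F}\approx 1.6\) m: super-resolution, paid for by the amplitude factor \((a/a_{e})^{d}\).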
### Broad-band time reversal for distributed sources

For a distributed source, its support \(\sigma_{s}\) is large compared to the Fresnel number \(\theta\), so the ratio \(\beta=\sigma_{s}/\theta\) is large. In this case we can compute the average of the time-domain signal of Section 3.2 in the same way as we computed the frequency-domain average in Section 3.3, and we find that

\[\langle\Psi^{B}(L,{\bf x}_{0},\xi,t)\rangle =(2\pi)^{d}e^{i({\bf p}_{0}\cdot\xi-k_{0}c_{0}t)}\psi_{0}(\xi/\beta)\int\langle W(L,{\bf x}_{0},\frac{{\bf p}_{0}}{k_{0}+k})\rangle e^{-ikc_{0}t}\hat{g}(-c_{0}k)\frac{c_{0}dk}{2\pi}\]
\[=e^{i({\bf p}_{0}\cdot\xi-k_{0}c_{0}t)}\psi_{0}(\xi/\beta)\int\frac{d{\bf y}d{\bf w}c_{0}dk}{(2\pi)^{d+1}}\chi_{A}({\bf y})e^{i(\frac{L{\bf w}\cdot{\bf p}_{0}}{k_{0}+k}-{\bf w}\cdot{\bf y}-kc_{0}t)}e^{-\frac{DL^{3}w^{2}}{6}}\hat{g}(-c_{0}k). \tag{13}\]

The \({\bf y}\) integral on the right gives the Fourier transform of the aperture function \(\chi_{A}({\bf y})\), so with \(\omega_{0}=c_{0}k_{0}\) and a change of variable from \(k\) to \(\omega=c_{0}k\) we have

\[\langle\Psi^{B}(L,{\bf x}_{0},\xi,t)\rangle=e^{i({\bf p}_{0}\cdot\xi-\omega_{0}t)}\psi_{0}(\xi/\beta)\int\frac{d\omega}{2\pi}e^{-i\omega t}\hat{g}(-\omega)\ \chi_{A}*\left(\frac{e^{-x^{2}/(2a_{e}^{2})}}{(2\pi a_{e}^{2})^{d/2}}\right)\left(\frac{Lc_{0}{\bf p}_{0}}{\omega_{0}+\omega}\right).\]

Here the star denotes convolution with respect to the spatial variables \({\bf x}\), and \(a_{e}\) is the effective aperture defined by (14) of Section 3.3. When multipathing is weak we can ignore the Gaussian factor in the convolution and we have

\[\langle\Psi^{B}(L,{\bf x}_{0},\xi,t)\rangle=e^{i({\bf p}_{0}\cdot\xi-\omega_{0}t)}\psi_{0}(\xi/\beta)\int\frac{d\omega}{2\pi}e^{-i\omega t}\hat{g}(-\omega)\ \chi_{A}\left(\frac{Lc_{0}{\bf p}_{0}}{\omega_{0}+\omega}\right).\]

In the opposite case, when there is strong multipathing and the effective aperture is much larger than the physical one, \(a_{e}\gg a\), we have

\[\langle\Psi^{B}(L,{\bf x}_{0},\xi,t)\rangle=e^{i({\bf p}_{0}\cdot\xi-\omega_{0}t)}\psi_{0}(\xi/\beta)\left(\frac{a}{\sqrt{2\pi}a_{e}}\right)^{d}\int\frac{d\omega}{2\pi}e^{-i\omega t}\hat{g}(-\omega)e^{-\frac{1}{2}\left(\frac{Lc_{0}{\bf p}_{0}}{a_{e}(\omega_{0}+\omega)}\right)^{2}}. \tag{14}\]

Figure 1: A directed field propagates from a distributed source of size \(\sigma_{s}\) toward the time reversal mirror of size \(a\). The time-reversed, back-propagated field depends on the location of the mirror relative to the direction of the propagating beam.

To interpret these results we note first that a distributed source function of the form introduced in Section 3.1 can be considered as a phased array emitting an inhomogeneous plane wave, a beam, in the direction \((k,{\bf p}_{0})\), within the paraxial or parabolic approximation. The ratio \(|{\bf p}_{0}|/k\) is the tangent of the angle the direction vector makes with the \(z\) axis, and \(L|{\bf p}_{0}|/k\) is the transverse distance of the beam center to the center of the phased array (see Figure 1). If for each \(\omega\) the beam displacement vector \(Lc_{0}{\bf p}_{0}/(\omega_{0}+\omega)\) is inside the set \(A\) occupied by the time reversal array, then we recover at the source the full pulse, time-reversed,

\[\langle\Psi^{B}(L,{\bf x}_{0},\xi,t)\rangle=e^{i({\bf p}_{0}\cdot\xi-\omega_{0}t)}\psi_{0}(\xi/\beta)g(-t).\]

If, however, for some frequencies the transverse displacement vector is outside the time reversal array, these frequencies will be nulled in the integration and a distorted time pulse will be received at the source.
Depending on the position of the time reversal mirror relative to the beam, high or low frequencies may be nulled. In a strongly multipathing medium the situation is quite different, because the strong-multipathing expression (14), or more generally the convolution formula preceding it, holds now. Even if the beam from the phased array does not intercept the time reversal mirror at all, we will still get a time-reversed signal at the source, but with a much diminished amplitude. If the beam falls entirely within the time reversal mirror, then the time-reversed pulse will be a distorted form of \(g(-t)\), with its amplitude reduced by the factor \((a/a_{e})^{d}\). An interesting and important application of the time reversal of a beam in a random medium is the possibility of **estimating** the effective aperture \(a_{e}\): one points the beam in different directions toward the time reversal mirror, measures the time-reversed signal that back-propagates to the source, that is, to the phased array, and infers \(a_{e}\) by fitting the measurements to (14).

## 4 Summary and conclusions

We have analyzed and explained two important phenomena associated with time reversal in a random medium:

* Super-resolution of the back-propagated signal due to multipathing
* Self-averaging that gives a statistically stable refocusing

Our analysis is based on a specific asymptotic limit (see Section 2.1) where the longitudinal distance of propagation is much larger than the size of the time-reversal mirror, which in turn is much larger than the correlation length of the medium, fluctuations in the index of refraction are weak, and the wavelength is short compared to the correlation length. This asymptotic regime is more relevant to optical or infrared time reversal than it is to sonar or ultrasound. We have related the self-averaging properties of the back-propagated signal to those of functionals of the Wigner distribution. Self-averaging of these functionals implies the statistical stability of the time-reversed and back-propagated signal in the frequency domain, provided that the source function is not too broad compared to the Fresnel number (4). Time reversal refocusing of waves emitted from a distributed source is self-averaging only in the time domain. We apply our theoretical results about stochastic Wigner distributions to time reversal and discuss super-resolution and statistical stability in detail in Section 3.

## Appendix A The white noise limit and the parabolic approximation

We collect here some comments on the scaling analysis of Section 2.1 and refer to [1, 27, 33] for additional comments and results on scaling and asymptotics in the high-frequency and white-noise regime. The dimensionless parameters \(\delta,\varepsilon,\gamma\) introduced by (2) in Section 2.1, along with the Fresnel number \(\theta\) defined by (4), lead to the scaled parabolic wave equation (5). If we do not make the parabolic approximation and keep the \(\psi_{zz}\) term, we have the scaled Helmholtz equation, with the phase \(e^{ikz}\) removed,

\[\frac{\varepsilon^{2}\theta^{2}}{\delta^{2}}\psi_{zz}+2ik\theta\psi_{z}+\theta^{2}\Delta_{\mathbf{x}}\psi+\frac{k^{2}\delta}{\varepsilon^{1/2}}\mu(\frac{\mathbf{x}}{\delta},\frac{z}{\varepsilon})\psi=0. \tag{10}\]

Here, as in (5), we relate the strength of the fluctuations \(\sigma\) to \(\varepsilon\) and \(\delta\) by (6). Is the parabolic approximation valid in the ordering (7), \(\theta\ll\varepsilon\ll\delta\ll 1\), that we have analyzed?
The answer is yes, but not before both the \(\theta\) **and** \(\varepsilon\) limits have been taken, in which case the scaled Wigner distribution (8) converges to the Liouville-Ito process that is defined by the stochastic partial differential equation (19). It is in the white noise limit \(\varepsilon\to 0\), with the Fresnel number \(\theta\) and \(\delta\) fixed, that the parabolic approximation is valid for (10), as was pointed out in [1]. This is easily seen if the random fluctuations \(\mu\) are differentiable in \(z\). The parabolic approximation is clearly not valid in the high frequency limit \(\theta\to 0\) before the white noise limit \(\varepsilon\to 0\) is also taken. In the white noise limit, the wave function \(\psi(z,\mathbf{x})\) satisfies an Ito-Schrodinger equation

\[2ik\theta d_{z}\psi+\theta^{2}\Delta_{\mathbf{x}}\psi dz+\frac{ik^{3}\delta^{2}}{4\theta}R_{0}(0)\psi dz+k^{2}\delta\psi d_{z}B(\frac{\mathbf{x}}{\delta},z)=0. \tag{11}\]

Here \(R_{0}\) is the integrated covariance of the fluctuations \(\mu\) given by (15) and (13), and the Brownian field \(B(\mathbf{x},z)\) has covariance

\[\langle B(\mathbf{x},z_{1})B(\mathbf{y},z_{2})\rangle=R_{0}(\mathbf{x}-\mathbf{y})z_{1}\wedge z_{2}.\]

This Ito-Schrodinger equation is the result of the central limit theorem applied to (10). Let

\[B^{\varepsilon}(\mathbf{x},z)=\frac{1}{\sqrt{\varepsilon}}\int_{0}^{z}\mu(\mathbf{x},\frac{s}{\varepsilon})ds.\]

Then, as \(\varepsilon\to 0\), this process converges weakly, under suitable hypotheses, to the Brownian field \(B(\mathbf{x},z)\) with the above covariance. The extra term in (11) is the Stratonovich correction. The white noise limit for stochastic partial differential equations is analyzed in [10], and a rigorous theory of the Ito-Schrodinger equation is given in [11]. The ergodic theory of the Ito-Schrodinger equation is explored in [17]. Wave propagation in the parabolic approximation with white-noise fluctuations is considered in detail in [18, 30]. The scaled Wigner distribution for the process \(\psi\), defined by (8), satisfies the stochastic transport equation

\[d_{z}W_{\theta}(z,\mathbf{x},\mathbf{p})+\frac{\mathbf{p}}{k}\cdot\nabla_{x}W_{\theta}(z,\mathbf{x},\mathbf{p})dz=\frac{k^{2}\delta^{2}}{4\theta^{2}}\int\frac{d\mathbf{q}}{(2\pi)^{d}}\hat{R}_{0}(\mathbf{q})\left(W_{\theta}(z,\mathbf{x},\mathbf{p}+\frac{\theta\mathbf{q}}{\delta})-W_{\theta}(z,\mathbf{x},\mathbf{p})\right)dz\]
\[+\frac{ik\delta}{2\theta}\int\frac{d\mathbf{q}}{(2\pi)^{d}}e^{i\mathbf{q}\cdot\mathbf{x}/\delta}\left(W_{\theta}(z,\mathbf{x},\mathbf{p}-\frac{\theta\mathbf{q}}{2\delta})-W_{\theta}(z,\mathbf{x},\mathbf{p}+\frac{\theta\mathbf{q}}{2\delta})\right)d_{z}\hat{B}(\mathbf{q},z),\]

which is derived from (11) using the Ito calculus. The Wigner process \(W_{\theta}\) converges in the limit \(\theta\to 0\) to the Liouville-Ito process defined by the stochastic partial differential equation (19).
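A standard way to explore this regime numerically is split-step Fourier propagation with random phase screens. The minimal sketch below, in one transverse dimension, is not taken from the references cited above; the parameter values and the Gaussian screen statistics (with the normalization \(R_{0}(0)=1\)) are our illustrative assumptions. Advancing the noise as a pure phase factor per step solves the multiplicative part in Stratonovich form and therefore reproduces the Ito equation (11) including its drift correction term.

```python
import numpy as np

# Minimal split-step sketch of the Ito-Schrodinger equation (11) in one
# transverse dimension. All parameter values, and the Gaussian screen
# statistics with R_0(0) = 1, are illustrative assumptions.
k, theta, delta = 1.0, 0.1, 0.05
nx, nz, z_max = 512, 400, 1.0
dz = z_max / nz
x = np.linspace(-5.0, 5.0, nx, endpoint=False)
dx = x[1] - x[0]
kx = 2.0 * np.pi * np.fft.fftfreq(nx, d=dx)
rng = np.random.default_rng(1)

psi = np.exp(-x**2).astype(complex)                  # initial beam
half = np.exp(-1j * theta * kx**2 * dz / (4.0 * k))  # free propagation, half step

kern = np.exp(-0.5 * (x / delta)**2)
kern /= np.linalg.norm(kern)                         # unit pointwise variance

for _ in range(nz):
    psi = np.fft.ifft(half * np.fft.fft(psi))        # half free step
    dB = np.convolve(rng.standard_normal(nx), kern, mode="same") * np.sqrt(dz)
    psi *= np.exp(1j * k * delta * dB / (2.0 * theta))  # random phase kick
    psi = np.fft.ifft(half * np.fft.fft(psi))        # half free step

w = np.sqrt(np.sum(x**2 * np.abs(psi)**2) / np.sum(np.abs(psi)**2))
print(f"rms beam width after z = {z_max}: {w:.3f}")
```

Averaging many such runs yields moments of \(\psi\), and pairs of runs give realizations of the scaled Wigner distribution \(W_{\theta}\).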
## Appendix B Decorrelation of the Wigner process

### B.1 Proof of Theorem 2.1

We give here the proof of Theorems 2.1 and 2.2. We consider Theorem 2.1 first. It will follow from the Lebesgue dominated convergence theorem if we show that for \({\bf p}_{1}\neq{\bf p}_{2}\):

\[E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})W_{\delta}(z,{\bf x},{\bf p}_{2})\right\}-E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})\right\}E\left\{W_{\delta}(z,{\bf x},{\bf p}_{2})\right\}\to 0 \tag{B.1}\]

as \(\delta\to 0\), because the function \(W_{\delta}\) is uniformly bounded and \(E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})\right\}\) does not depend on \(\delta\). Furthermore, the correlation function at the same spatial point but for two different values of the wave vector, \(U_{\delta}^{(2)}(z,{\bf x},{\bf p}_{1},{\bf p}_{2})=E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})W_{\delta}(z,{\bf x},{\bf p}_{2})\right\}\), is the solution of (2.18) with \(N=2\) and the initial data

\[W_{\delta}^{(2)}(0,{\bf x}_{1},{\bf p}_{1},{\bf x}_{2},{\bf p}_{2})=W_{I}({\bf x}_{1},{\bf p}_{1})W_{I}({\bf x}_{2},{\bf p}_{2}),\]

evaluated at \({\bf x}_{1}={\bf x}_{2}={\bf x}\). Therefore \(U_{\delta}^{(2)}\) may be represented as

\[U_{\delta}^{(2)}(z,{\bf x}_{1},{\bf p}_{1},{\bf x}_{2},{\bf p}_{2})=E\left\{W_{I}({\bf X}_{\delta}^{1}(z),{\bf P}_{\delta}^{1}(z))W_{I}({\bf X}_{\delta}^{2}(z),{\bf P}_{\delta}^{2}(z))\right\}.\]

The processes \({\bf X}_{\delta}^{1,2}\) and \({\bf P}_{\delta}^{1,2}\) satisfy the system of SDE's (2.16), which may be written more explicitly as

\[d{\bf P}_{\delta}^{1} = -\left[\sigma(0)d{\bf B}^{1}(z)+\frac{1}{2}\sigma\left(\frac{{\bf X}_{\delta}^{1}-{\bf X}_{\delta}^{2}}{\delta}\right)d{\bf B}^{2}(z)\right]\]
\[d{\bf P}_{\delta}^{2} = -\left[\sigma(0)d{\bf B}^{2}(z)+\frac{1}{2}\sigma\left(\frac{{\bf X}_{\delta}^{2}-{\bf X}_{\delta}^{1}}{\delta}\right)d{\bf B}^{1}(z)\right]\]
\[d{\bf X}_{\delta}^{1} = -{\bf P}_{\delta}^{1}dz,\ \ d{\bf X}_{\delta}^{2}=-{\bf P}_{\delta}^{2}dz \tag{B.2}\]

with the initial conditions \({\bf X}_{\delta}^{1,2}(0)={\bf x},\ \ {\bf P}_{\delta}^{m}(0)={\bf p}_{m}\), \(m=1,2\). Here \(\sigma^{2}(0)=D\) is the diffusion coefficient (2.15), and the coupling matrix \(\sigma({\bf x})\) is given by (2.17). Recall that \(W_{\delta}(z,{\bf x},{\bf p};k)=W_{\delta}(z,{\bf x},{\bf p}/k;1)\), so we need only consider the case \(k=1\). It is convenient to introduce the processes \({\bf X}^{1,2}\) and \({\bf P}^{1,2}\) that are solutions of (2.16) with no coupling:

\[d{\bf P}^{m}=-\sigma(0)d{\bf B}^{m}(z),\ \ d{\bf X}^{m}=-{\bf P}^{m}dz,\qquad {\bf X}^{1,2}(0)={\bf x},\ \ {\bf P}^{m}(0)={\bf p}_{m},\ \ m=1,2, \tag{B.3}\]

and to define the deviations of the solutions of the coupled system of SDE's (B.2) from those of (B.3): \({\bf Z}_{\delta}^{m}={\bf X}_{\delta}^{m}-{\bf X}^{m}\), \({\bf S}_{\delta}^{m}={\bf P}_{\delta}^{m}-{\bf P}^{m}\). Then we have

\[d{\bf S}_{\delta}^{1} = -\frac{1}{2}\sigma\left(\frac{{\bf X}_{\delta}^{1}-{\bf X}_{\delta}^{2}}{\delta}\right)d{\bf B}^{2}(z),\ \ d{\bf S}_{\delta}^{2}=-\frac{1}{2}\sigma\left(\frac{{\bf X}_{\delta}^{2}-{\bf X}_{\delta}^{1}}{\delta}\right)d{\bf B}^{1}(z)\]
\[d{\bf Z}_{\delta}^{1} = -{\bf S}_{\delta}^{1}dz,\ \ d{\bf Z}_{\delta}^{2}=-{\bf S}_{\delta}^{2}dz \tag{B.4}\]

with the initial data \({\bf S}_{\delta}^{m}(0)={\bf Z}_{\delta}^{m}(0)=0\).
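The mechanism exploited below in the proof, trajectories that are driven apart by their distinct initial momenta and that afterwards rarely feel the coupling, can also be watched numerically. The Euler-Maruyama sketch that follows is our own illustration, not part of the proof: it uses a scalar stand-in \(\sigma({\bf x})=\sigma(0)e^{-|{\bf x}|^{2}/2}\) for the coupling matrix of (2.17) and invented parameters, and it reports how the mean deviation \(E\{|{\bf Z}_{\delta}^{1}|+|{\bf Z}_{\delta}^{2}|+|{\bf S}_{\delta}^{1}|+|{\bf S}_{\delta}^{2}|\}\) shrinks with \(\delta\).

```python
import numpy as np

# Euler-Maruyama sketch of the coupled system (B.2) against the uncoupled
# system (B.3), both driven by the same Brownian increments. The scalar
# coupling sigma(x) = sigma(0)*exp(-|x|^2/2) is an illustrative stand-in for
# the matrix sigma(x) of (2.17); all parameter values are assumptions.
rng = np.random.default_rng(2)
d, sigma0 = 2, 1.0
nz, z_max = 400, 1.0
dz = z_max / nz
p_init = [np.array([1.0, 0.0]), np.array([-1.0, 0.0])]   # p_1 and p_2

def mean_deviation(delta, n_mc=100):
    total = 0.0
    for _ in range(n_mc):
        P = [p.copy() for p in p_init]          # coupled momenta
        X = [np.zeros(d), np.zeros(d)]          # coupled positions
        Pu = [p.copy() for p in p_init]         # uncoupled momenta
        Xu = [np.zeros(d), np.zeros(d)]         # uncoupled positions
        for _ in range(nz):
            dB = [rng.standard_normal(d) * np.sqrt(dz) for _ in range(2)]
            c = sigma0 * np.exp(-0.5 * np.sum(((X[0] - X[1]) / delta) ** 2))
            for m in range(2):
                P[m] = P[m] - sigma0 * dB[m] - 0.5 * c * dB[1 - m]
                Pu[m] = Pu[m] - sigma0 * dB[m]
            for m in range(2):
                X[m] = X[m] - P[m] * dz
                Xu[m] = Xu[m] - Pu[m] * dz
        total += sum(np.linalg.norm(X[m] - Xu[m]) + np.linalg.norm(P[m] - Pu[m])
                     for m in range(2))
    return total / n_mc

for delta in (0.5, 0.1, 0.02):
    print(f"delta={delta:4.2f}  E(|Z1|+|Z2|+|S1|+|S2|) ~ {mean_deviation(delta):.4f}")
```

Driving the coupled and uncoupled systems with the same Brownian increments is exactly the coupling of realizations used in the estimates below.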
Define

\[{\cal V}({\bf X}^{1},{\bf X}^{2},{\bf P}^{1},{\bf P}^{2},{\bf Z}_{\delta}^{1},{\bf Z}_{\delta}^{2},{\bf S}_{\delta}^{1},{\bf S}_{\delta}^{2})=W_{I}({\bf X}^{1}+{\bf Z}_{\delta}^{1},{\bf P}^{1}+{\bf S}_{\delta}^{1})W_{I}({\bf X}^{2}+{\bf Z}_{\delta}^{2},{\bf P}^{2}+{\bf S}_{\delta}^{2})-W_{I}({\bf X}^{1},{\bf P}^{1})W_{I}({\bf X}^{2},{\bf P}^{2}).\]

Then, with the above notation,

\[E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})W_{\delta}(z,{\bf x},{\bf p}_{2})\right\}-E\left\{W_{\delta}(z,{\bf x},{\bf p}_{1})\right\}E\left\{W_{\delta}(z,{\bf x},{\bf p}_{2})\right\}=E\left\{{\cal V}({\bf X}^{1}(z),{\bf X}^{2}(z),{\bf P}^{1}(z),{\bf P}^{2}(z),{\bf Z}_{\delta}^{1}(z),{\bf Z}_{\delta}^{2}(z),{\bf S}_{\delta}^{1}(z),{\bf S}_{\delta}^{2}(z))\right\}\]
\[\leq CE\left\{|{\bf Z}_{\delta}^{1}(z)|+|{\bf Z}_{\delta}^{2}(z)|+|{\bf S}_{\delta}^{1}(z)|+|{\bf S}_{\delta}^{2}(z)|\right\} \tag{B.6}\]

since \(W_{I}\) is a Lipschitz function. Let us assume for simplicity that the correlation function \(R_{0}({\bf x})\) has compact support inside the set \(|{\bf x}|\leq M\). Then the coupling term in (B.2) is non-zero only when \(|{\bf X}_{\delta}^{1}-{\bf X}_{\delta}^{2}|\leq M\delta\). We introduce the processes \({\bf Q}_{\delta}={\bf P}_{\delta}^{1}-{\bf P}_{\delta}^{2}\) and \({\bf Y}_{\delta}={\bf X}_{\delta}^{1}-{\bf X}_{\delta}^{2}\) that control this coupling. They satisfy the SDE's

\[d{\bf Q}_{\delta}=-\left[\sigma(0)-\frac{1}{2}\sigma\left(\frac{{\bf Y}_{\delta}}{\delta}\right)\right]d\tilde{\bf B},\ \ d{\bf Y}_{\delta}=-{\bf Q}_{\delta}dz,\qquad{\bf Q}_{\delta}(0)={\bf p}_{1}-{\bf p}_{2},\ {\bf Y}_{\delta}(0)=0 \tag{B.7}\]

with \(\tilde{\bf B}={\bf B}^{1}-{\bf B}^{2}\) being a Brownian motion. In order to prove the theorem we show that the coupling term \(\sigma(\cdot)\) in (B.2) introduces only lower order correction terms, that is, \({\bf S}_{\delta}^{m}\) and \({\bf Z}_{\delta}^{m}\) are small. We show first that after a small 'time' \(\tau\) the points \({\bf X}_{\delta}^{m}\) are driven apart, since \({\bf Q}_{\delta}(0)={\bf P}_{\delta}^{1}(0)-{\bf P}_{\delta}^{2}(0)\neq 0\). Then we show that after the points have separated, the probability that they come close again, so that the coupling term \(\sigma(\cdot)\) becomes non-zero, is small. This "non-recurrence" condition requires that the spatial dimension \(d\geq 2\). It follows that to leading order the points \({\bf X}_{\delta}^{m}\) are uncorrelated when \(d\geq 2\) and that the coupling term introduces only lower order corrections. A similar argument for \(d=1\) would require an estimate of the time that points which are originally separated in the spatial variable spend near each other, where the coupling term in (B.2) is not zero. We need the following two lemmas. The first one shows that particles that start at the same point \({\bf x}\) with different initial directions \({\bf p}_{1}\) and \({\bf p}_{2}\) get separated with large probability:

**Lemma B.1**: _Let \({\bf Y}_{\delta}\), \({\bf Q}_{\delta}\) solve (B.7) with \({\bf Y}_{\delta}(0)=0\), \({\bf Q}_{\delta}(0)={\bf q}\neq 0\)._
_Then for any \(\varepsilon>0\) there exists \(\tau_{0}(\varepsilon)>0\) that depends only on \({\bf q}={\bf p}_{1}-{\bf p}_{2}\) but not on \(\delta\), such that_

\[P\left(|{\bf Y}_{\delta}(\tau)|\geq\frac{|{\bf q}|\tau}{2}\right)\geq 1-\varepsilon\]

_for all \(\tau\leq\tau_{0}(\varepsilon)\)._

The second lemma shows that after the particles are separated, the probability that they come close to each other is small:

**Lemma B.2**: _Given any fixed \(r>0\) and \(z>0\), if \({\bf Y}_{\delta}\), \({\bf Q}_{\delta}\) solve (B.7) with \(|{\bf Y}_{\delta}(0)|\geq r\), \({\bf Q}_{\delta}(0)={\bf q}\neq 0\), then \(P\left(\inf_{0\leq s\leq z}|{\bf Y}_{\delta}(s)|\leq M\delta\right)\to 0\) as \(\delta\to 0\)._

We prove Theorem 2.1 before proving Lemmas B.1 and B.2. Let \(z\) and \({\bf q}={\bf p}_{1}-{\bf p}_{2}\) be fixed as above. Given \(\varepsilon>0\), for any \(\tau<\tau_{0}(\varepsilon)\) (with \(\tau_{0}\) as defined in Lemma B.1), Lemma B.2 and the Markov property of the Brownian motion imply that

\[P\left({\bf S}_{\delta}^{m}(z)={\bf S}_{\delta}^{m}(\tau)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right)\geq 1-\varepsilon\]

and

\[P\left({\bf Z}_{\delta}^{m}(z)={\bf Z}_{\delta}^{m}(\tau)-(z-\tau){\bf S}_{\delta}^{m}(\tau)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right)\geq 1-\varepsilon\]

for \(\delta<\delta_{0}(\tau,\varepsilon)\). Furthermore,

\[E\left\{|{\bf Z}_{\delta}^{1}(\tau)|+|{\bf Z}_{\delta}^{2}(\tau)|+|{\bf S}_{\delta}^{1}(\tau)|+|{\bf S}_{\delta}^{2}(\tau)|\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right\}\leq E\left\{|{\bf Z}_{\delta}^{1}(\tau)|+|{\bf Z}_{\delta}^{2}(\tau)|+|{\bf S}_{\delta}^{1}(\tau)|+|{\bf S}_{\delta}^{2}(\tau)|\right\}/(1-\varepsilon)\leq C\tau \tag{B.8}\]

because the function \(\sigma\) is uniformly bounded. Therefore we have

\[E\left\{{\cal V}({\bf X}^{1},{\bf X}^{2},{\bf P}^{1},{\bf P}^{2},{\bf Z}_{\delta}^{1},{\bf Z}_{\delta}^{2},{\bf S}_{\delta}^{1},{\bf S}_{\delta}^{2})\right\}=E\left\{{\cal V}(\cdot)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right\}P\left(|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right)\]
\[+E\left\{{\cal V}(\cdot)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\leq\frac{\tau|{\bf q}|}{2}\right\}P\left(|{\bf Y}_{\delta}(\tau)|\leq\frac{\tau|{\bf q}|}{2}\right)=I+II. \tag{B.9}\]

The second term above is small because the probability for \({\bf Y}_{\delta}(\tau)\) to be very small is bounded by Lemma B.1. More precisely, given \(\varepsilon>0\) and \(\tau<\tau_{0}(\varepsilon)\), Lemma B.1 implies that

\[II\leq C\varepsilon. \tag{B.10}\]

The first term in (B.9) corresponds to the more likely scenario that \({\bf Y}_{\delta}\) at time \(\tau\) has left the ball of radius \(\tau|{\bf q}|/2\). We estimate it as follows.
The probability that \({\bf Y}_{\delta}\) re-enters the ball of radius \(M\delta\) is small according to Lemma B.2. Moreover, if \({\bf Y}_{\delta}\) stays outside this ball, the difference variables \({\bf Z}_{\delta}^{m}\) and \({\bf S}_{\delta}^{m}\) are bounded in terms of their values at time \(\tau\). The latter are small if \(\tau\) is small. More precisely, using (B.8) we choose \(\tau\) so small that

\[E\left\{|{\bf Z}_{\delta}^{1}(\tau)|+|{\bf Z}_{\delta}^{2}(\tau)|+|{\bf S}_{\delta}^{1}(\tau)|+|{\bf S}_{\delta}^{2}(\tau)|\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right\}\leq\varepsilon.\]

Then we obtain

\[I\leq E\left\{{\cal V}(\cdot)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right\}\]
\[\leq E\left\{{\cal V}(\cdot)\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\ \mbox{and}\ \inf_{\tau\leq s\leq z}|{\bf Y}_{\delta}(s)|\leq M\delta\right\}P\left(\inf_{\tau\leq s\leq z}|{\bf Y}_{\delta}(s)|\leq M\delta\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right)\]
\[+E\left\{|{\bf Z}_{\delta}^{1}(z)|+|{\bf Z}_{\delta}^{2}(z)|+|{\bf S}_{\delta}^{1}(z)|+|{\bf S}_{\delta}^{2}(z)|\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\ \mbox{and}\ \inf_{\tau\leq s\leq z}|{\bf Y}_{\delta}(s)|\geq M\delta\right\}P\left(\inf_{\tau\leq s\leq z}|{\bf Y}_{\delta}(s)|\geq M\delta\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right)\]
\[=I_{1}+I_{2}.\]

The term \(I_{1}\) goes to zero as \(\delta\to 0\) by Lemma B.2. However, if the conditioning events in \(I_{2}\) hold, then

\[{\bf S}_{\delta}^{m}(z)={\bf S}_{\delta}^{m}(\tau),\ \ {\bf Z}_{\delta}^{m}(z)={\bf Z}_{\delta}^{m}(\tau)-(z-\tau){\bf S}_{\delta}^{m}(\tau),\]

since \(k=1\). Therefore the term \(I_{2}\) may be bounded with the help of (B.8) by

\[I_{2}\leq E\left\{|{\bf Z}_{\delta}^{1}(z)|+|{\bf Z}_{\delta}^{2}(z)|+|{\bf S}_{\delta}^{1}(z)|+|{\bf S}_{\delta}^{2}(z)|\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\ \mbox{and}\ \inf_{\tau\leq s\leq z}|{\bf Y}_{\delta}(s)|\geq M\delta\right\}\]
\[\leq CE\left\{|{\bf Z}_{\delta}^{1}(\tau)|+|{\bf Z}_{\delta}^{2}(\tau)|+|{\bf S}_{\delta}^{1}(\tau)|+|{\bf S}_{\delta}^{2}(\tau)|\,\Big|\,|{\bf Y}_{\delta}(\tau)|\geq\frac{\tau|{\bf q}|}{2}\right\}\leq C\tau.\]

Putting together (B.9), (B.10) and the above bounds on \(I_{1}\) and \(I_{2}\), we obtain

\[E\left\{|{\bf Z}_{\delta}^{1}(z)|+|{\bf Z}_{\delta}^{2}(z)|+|{\bf S}_{\delta}^{1}(z)|+|{\bf S}_{\delta}^{2}(z)|\right\}\leq C\varepsilon\]

for \(\delta<\bar{\delta}\), and Theorem 2.1 follows from (B.6).

### B.2 Proof of Lemmas B.1 and B.2

We first prove Lemma B.1.
_Proof._ We write

\[{\bf Q}_{\delta}(z)={\bf q}-\int_{0}^{z}\left(\sigma(0)-\frac{1}{2}\sigma({\bf Y}_{\delta}(s)/\delta)\right)d\tilde{\bf B}(s)\equiv{\bf q}+\tilde{\bf Q}_{\delta}(z)\]

so that

\[{\bf Y}_{\delta}(t)=-{\bf q}t-\int_{0}^{t}\tilde{\bf Q}_{\delta}(s)ds.\]

Then we have

\[P\left(\sup_{0\leq s\leq\tau}|\tilde{\bf Q}_{\delta}(s)|>r\right)\leq C\tau/r^{2} \tag{B.11}\]

and hence

\[P\left(|{\bf Y}_{\delta}(\tau)+\tau{\bf q}|>r\tau\right)\leq P\left(\sup_{0\leq s\leq\tau}|\tilde{\bf Q}_{\delta}(s)|>r\right)\leq C\tau/r^{2}.\]

We let \(r=|{\bf q}|/2\) in the above formula and obtain

\[P\left(|{\bf Y}_{\delta}(\tau)|<\frac{\tau|{\bf q}|}{2}\right)\leq\frac{C}{|{\bf q}|^{2}}\tau,\]

and the conclusion of Lemma B.1 follows.

Finally, we prove Lemma B.2.

_Proof._ Let \(\tau_{\delta}\) be the first time \({\bf Y}_{\delta}(z)\) enters the ball of radius \(M\delta\):

\[\tau_{\delta}=\inf\left\{z:\ |{\bf Y}_{\delta}(z)|\leq M\delta\right\},\]

with \({\bf Y}_{\delta}(0)={\bf Y}^{0}\neq 0\). For \(0<\alpha<1\) let \(\Delta z=\delta^{1-\alpha}\), \(n=\lceil z/\Delta z\rceil\), \(J_{i}=(i\Delta z,(i+1)\Delta z)\) and \(p<1\). Note that until the time \(\tau_{\delta}\) the process \(({\bf Y}_{\delta},{\bf Q}_{\delta})\) coincides with the process \(({\bf Y},{\bf Q})\) governed by (B.7) without the coupling term \(\sigma({\bf Y}_{\delta}/\delta)\). We find

\[P(\tau_{\delta}<z)\leq\sum_{i=0}^{n-1}\left\{P\left(|{\bf Y}(i\Delta z)|<M\delta^{p}\right)+P\left(\inf_{s\in J_{i}}|{\bf Y}(s)|<M\delta\ \Big|\ |{\bf Y}(i\Delta z)|\geq M\delta^{p}\right)\right\}.\]

The process \({\bf Y}(s)\) is Gaussian with mean \({\bf Y}^{0}\) and variance \({\cal O}(s^{2})\). Therefore, there is a \(\bar{\delta}>0\) such that for \(\delta<\bar{\delta}\)

\[P(|{\bf Y}(i\Delta z)|<M\delta^{p})\leq C\delta^{dp}.\]

If we assume that

\[p<1-\alpha \tag{B.12}\]

then also

\[P(\tau_{\delta}<z)\leq nC\left(\delta^{dp}+P\left(\sup_{0<s<\Delta z}|{\bf Y}(s)-{\bf Y}^{0}|\geq M[\delta^{p}-\delta]\right)\right)\]
\[\leq C\left(\delta^{dp+\alpha-1}+\delta^{\alpha-1}P\left(\sup_{0<s<\Delta z}|{\bf B}(s)|\geq M[\delta^{p}-\delta]/\Delta z\right)\right)\]
\[\leq C\left(\delta^{dp+\alpha-1}+\delta^{\alpha-1}\frac{E\left\{{\bf B}(\Delta z)^{2r}\right\}\Delta z^{2r}}{(\delta^{p}-\delta)^{2r}}\right)\]
\[\leq C\left[\delta^{dp+\alpha-1}+\delta^{\alpha-1-rp+3r(1-\alpha)/2}\right].\]

Note that with \(p<1-\alpha\) and \(r\) large enough, there is a \(\kappa>0\) so that

\[P(\tau_{\delta}<z)\leq C\delta^{\kappa}\]

if \(d\geq 2\), and Lemma B.2 follows.

### B.3 Proof of Theorem 2.2

We need to show first that

\[J_{\delta}(z,{\bf x})=\int W_{\delta}(z,{\bf x},{\bf p})d{\bf p} \tag{B.13}\]

is finite with probability one. The stochastic flow \(({\bf X}_{\delta}(z,{\bf x},{\bf p}),{\bf P}_{\delta}(z,{\bf x},{\bf p}))\) is continuous in \((z,{\bf x},{\bf p})\) with probability one, so \(W_{\delta}(z,{\bf x},{\bf p})=W_{I}({\bf X}_{\delta}(z,{\bf x},{\bf p}),{\bf P}_{\delta}(z,{\bf x},{\bf p}))\) is bounded and continuous. It is, moreover, non-negative if \(W_{I}\geq 0\). We know that

\[\int E\{W_{\delta}(z,{\bf x},{\bf p})\}d{\bf p}\]

is finite and independent of \(\delta\), and the order of integration and expectation can be interchanged by Tonelli's theorem.
This theorem implies in addition that \(J_{\delta}(z,{\bf x})\) is finite with probability one. We can now consider

\[E\{J_{\delta}^{2}(z,{\bf x})\}=\int E\{W_{\delta}(z,{\bf x},{\bf p}_{1})W_{\delta}(z,{\bf x},{\bf p}_{2})\}d{\bf p}_{1}d{\bf p}_{2}.\]

The integrand is bounded by an integrable function uniformly in \(\delta\) because

\[E\{W_{\delta}(z,{\bf x},{\bf p}_{1})W_{\delta}(z,{\bf x},{\bf p}_{2})\}\leq E^{1/2}\{W_{\delta}^{2}(z,{\bf x},{\bf p}_{1})\}E^{1/2}\{W_{\delta}^{2}(z,{\bf x},{\bf p}_{2})\},\]

and the right side does not depend on \(\delta\) and is integrable. Therefore, by the Lebesgue dominated convergence theorem and the results of the previous section, we have that

\[\lim_{\delta\to 0}E\{J_{\delta}^{2}(z,{\bf x})\}=E^{2}\{J_{\delta}(z,{\bf x})\}\]

and the right side does not depend on \(\delta\). This completes the proof of Theorem 2.2.

## References

* [1] F. Bailly, J. F. Clouet and J. P. Fouque, Parabolic and white noise approximation for waves in random media, SIAM Journal on Applied Mathematics, **56**, 1996, 1445-1470.
* [2] Comptes Rendus de l'Academie des Sciences - Serie I - Mathematique, **333**, 2001, 1041-1046.
* [3] G. Bal and L. Ryzhik, Time reversal for waves in random media, Preprint, 2002.
* [4] G. Bal, G. Papanicolaou and L. Ryzhik, Self-averaging in time reversal for the parabolic wave equation, Preprint, 2002.
* [5] G. Bal, G. Papanicolaou and L. Ryzhik, Radiative transport limit for the random Schrodinger equation, Nonlinearity, **15**, 2002, 513-529.
* [6] G. Blankenship and G. C. Papanicolaou, Stability and Control of Stochastic Systems with Wide-Band Noise Disturbances, SIAM J. Appl. Math., **34**, 1978, 437-476.
* [7] P. Blomgren, G. Papanicolaou and H. Zhao, Super-Resolution in Time-Reversal Acoustics, J. Acoust. Soc. Am., **111**, 2002, 230-248.
* [8] L. Borcea, C. Tsogka, G. Papanicolaou and J. Berryman, Imaging and time reversal in random media, to appear in Inverse Problems, 2002.
* [9] J. Berryman, L. Borcea, G. Papanicolaou and C. Tsogka, Statistically stable ultrasonic imaging in random media, Preprint, 2002.
* [10] R. Bouc and E. Pardoux, Asymptotic analysis of PDEs with wide-band noise disturbances and expansion of the moments, Stochastic Analysis and Applications, **2**, 1984, 369-422.
* [11] D. Dawson and G. Papanicolaou, A random wave process, Appl. Math. Optim., **12**, 1984, 97-114.
* [12] D. Dowling and D. Jackson, Phase conjugation in underwater acoustics, Jour. Acoust. Soc. Am., **89**, 1990, 171-181.
* [13] D. Dowling and D. Jackson, Narrow-band performance of phase-conjugate arrays in dynamic random media, Jour. Acoust. Soc. Am., **91**, 1992, 3257-3277.
* [14] M. Fink and J. de Rosny, Time-reversed acoustics in random media and in chaotic cavities, Nonlinearity, **15**, 2002, R1-R18.
* [15] M. Fink, D. Cassereau, A. Derode, C. Prada, P. Roux, M. Tanter, J. L. Thomas and F. Wu, Time-reversed acoustics, Rep. Progr. Phys., **63**, 2000, 1933-1995.
* [16] M. Fink and C. Prada, Acoustic time-reversal mirrors, Inverse Problems, **17**, 2001, R1-R38.
* [17] J. P. Fouque, G. C. Papanicolaou and Y. Samuelides, Forward and Markov Approximation: The Strong Intensity Fluctuations Regime Revisited, Waves in Random Media, **8**, 1998, 303-314.
* [18] K. Furutsu, Random Media and Boundaries: Unified Theory, Two-Scale Method, and Applications, Springer Verlag, 1993.
* [19] P. Gerard, P. Markowich, N. Mauser and F. Poupaud, Homogenization limits and Wigner transforms, Comm. Pure Appl. Math., **50**, 1997, 323-380.
* [20] W. Hodgkiss, H. Song, W. Kuperman, T. Akal, C. Ferla and D. Jackson, A long-range and variable focus phase-conjugation experiment in shallow water, Jour. Acoust. Soc. Am., **105**, 1999, 1597-1604.
* [21] H. Kesten and G. Papanicolaou, A Limit Theorem for Turbulent Diffusion, Comm. Math. Phys., **65**, 1979, 97-128.
* [22] G. Papanicolaou and W. Kohler, Asymptotic analysis of deterministic and stochastic equations with rapidly varying components, Comm. Math. Phys., **45**, 1975, 217-232.
* [23] H. Kunita, _Stochastic flows and stochastic differential equations_, Cambridge Studies in Advanced Mathematics, 24, Cambridge University Press, Cambridge, 1997.
* [24] W. Kuperman, W. Hodgkiss, H. Song, T. Akal, C. Ferla and D. Jackson, Phase conjugation in the ocean, Jour. Acoust. Soc. Am., **102**, 1997, 1-16.
* [25] W. Kuperman, W. Hodgkiss, H. Song, T. Akal, C. Ferla and D. Jackson, Phase conjugation in the ocean: Experimental demonstration of an acoustic time reversal mirror, J. Acoust. Soc. Am., **103**, 1998, 25-40.
* [26] H. Kushner, Approximation and weak convergence methods for random processes, with applications to stochastic systems theory, MIT Press Series in Signal Processing, Optimization, and Control, MIT Press, 1984.
* [27] B. Nair and B. White, High-frequency wave propagation in random media - a unified approach, SIAM J. Appl. Math., **51**, 1991, 374-411.
* [28] L. Ryzhik, G. Papanicolaou and J. B. Keller, Transport equations for elastic and other waves in random media, Wave Motion, **24**, 1996, 327-370.
* [29] F. Tappert, The parabolic approximation method, Lecture Notes in Physics, vol. 70, _Wave propagation and underwater acoustics_, Springer-Verlag, 1977.
* [30] V. I. Tatarskii, A. Ishimaru and V. U. Zavorotny, editors, Wave Propagation in Random Media (Scintillation), SPIE and IOP, 1993.
* [31] J.-L. Thomas and M. Fink, Ultrasonic beam focusing through tissue inhomogeneities with a time reversal mirror: Application to transskull therapy, IEEE Trans. on Ultrasonics, Ferroelectrics and Frequency Control, **43**, 1996, 1122-1129.
* [32] C. Tsogka and G. Papanicolaou, Time reversal through a solid-liquid interface and super-resolution, Preprint, 2001.
* [33] B. White, The stochastic caustic, SIAM Jour. Appl. Math., **44**, 1984, 127-149.
**Abstract.** When a signal is emitted from a source, recorded by an array of transducers, time reversed and re-emitted into the medium, it will refocus approximately on the source location. We analyze the refocusing resolution in a high frequency, remote sensing regime, and show that, because of multiple scattering, in an inhomogeneous or random medium it can improve beyond the diffraction limit. We also show that the back-propagated signal from a spatially localized narrow-band source is self-averaging, or statistically stable, and relate this to the self-averaging properties of functionals of the Wigner distribution in phase space. Time reversal from spatially distributed sources is self-averaging only for broad-band signals. The array of transducers operates in a remote-sensing regime, so we analyze time reversal with the parabolic or paraxial wave equation.

**Key words.** wave propagation, random medium, Liouville-Ito equation, stochastic flow, time reversal

**AMS subject classifications.** 35L05, 60H15, 35Q60
**A Kalman Filter for Ocean Monitoring**

Konstantin P. Belyaev, _LNCC, Petropolis, Brazil_

Detlev Muller, _MPIMET, Hamburg, Germany_

# Introduction

Ocean state estimation draws its wider societal as well as scientific significance from the central role of the oceans in Earth's climate system. At a time when the impact of climate variability on societal infrastructures is increasingly felt, the need for comprehensive climate monitoring is generally accepted. While mankind is primarily affected by meteorological manifestations of climate variability, large-amplitude weather fluctuations oftentimes screen the atmospheric climate signal. In practice, atmospheric climate observation proves prohibitively intricate. Alternatively, estimates of the state of the ocean interior, with its enormous capacity to store and distribute water, heat and radiatively active trace substances (such as carbon dioxide), provide direct evidence of the climate signal. As the dominating climate component, the ocean acts as a Markov integrator of atmospheric noise, provides the memory of the climate system and sets time-scales of climate processes by (at least partly) predictable transport mechanisms. Thus, practical climate monitoring anchors on operational ocean state estimation.

Only a decade ago, an observational basis for the assessment of a temporally changing global ocean circulation was practically nonexistent. In fact, oceanography was plagued by a notorious undersampling problem and the ocean circulation was widely considered as a stationary flow. The most comprehensive data set of this era is the World Ocean Atlas (WOA) by Levitus and coworkers [1], a meticulous collection of hydrographic data from oceanographic archives all over the world. This atlas represents a global ocean density field that is best associated with the mean ocean circulation during the second half of the twentieth century. Temporal changes such as those at the roots of past, present and future climate variability are far beyond the scope of these data. Since then, ocean observation has undergone an explosive development. Today, space-borne ocean observatories provide global data sets of the mesoscale state of the sea surface in near real-time. Oceanographic data paucity has been replaced by an almost overwhelming data stream [2]. Nevertheless, observational monitoring of the ocean interior remains technically difficult and costly for the foreseeable future. For mere bookkeeping as well as for analysis and interpretation, ocean state estimation thus relies heavily on the dynamical extrapolation of observations by means of numerical models of the global ocean circulation.

Such models have played a central role in the study of the climate system and its ocean component for more than three decades. As engineering devices with the capacity for otherwise impossible experimentation, they provide a laboratory for the simulation, extrapolation and ultimately understanding of the observational record. So far, the most convincing demonstration of the potential of circulation modeling has been the discovery of the El Nino mechanism and its repercussions on remote regions of the globe. After Matsuno's theoretical groundwork [3] and the provision of observational evidence by Wyrtki [4], the essential dynamical processes were numerically simulated with simple shallow water models [5, 6].
Today, operational El Nino monitoring and forecasting are implemented at a number of institutions worldwide, and the variability of tropical circulations has become one of the best understood aspects of the climate system. On the other hand, the dedication of more complex models to the same problem has as yet been unable to advance El Nino simulation and forecasting significantly. Nor has it been possible to gain comparable insights for the extratropics. While scientists are well aware of extratropical circulation variability such as the North Atlantic or Arctic Oscillation or the Antarctic Circumpolar Wave, numerical analyses have clearly been less yielding in these instances. Uncertainties of this kind also raise the question of the adequacy of contemporary numerical circulation models for longer term climate projections.

The major difficulty of global circulation modeling is the lack of a physically consistent and numerically soluble formulation of the circulation problem. Physically, the global ocean circulation poses the thermohydrodynamical problem for a viscous fluid on the rotating geoid. The gravest problem of circulation theory is certainly the absence of an energetically consistent formulation of the equations of motion for a viscous rotating fluid. While such formulations are well known for viscous nonrotating fluids and ideal rotating fluids, the derivation of the energy budget from the equations of motion of a dissipative rotating fluid remains a matter of scientific debate. As one consequence, wave dissipation in rotating fluids is not very well understood. Moreover, viscosity plays a significant role for the numerical stability of circulation models. It is clearly desirable that parameterizations of subscale transports steer momentum, energy and vorticity along realistic, i.e. energetically and vortically consistent, pathways in space-time and wave vector space. For contemporary circulation models, this matter remains essentially unsolved.

Satellite orbits reveal a complex fine structure of the planet's shape, and considerable uncertainties, particularly about the marine geoid, still exist. For most purposes of circulation modeling, however, an approximation in terms of a spheroid or even a sphere is probably sufficient. A source of dissatisfaction with 3-dimensional spheroidal or spherical circulation equations is the difficulty of analytically obtaining simple stationary solutions which reflect characteristic circulation features such as geostrophy and thermal wind balance. Moreover, linearizations in these coordinates generally do not admit separation of variables. Hence, it has as yet been impossible to study analytically the propagation of acoustic, gravity and Rossby wave disturbances together with the stability of simple flows in a common, 3-dimensional framework. For numerical integration, such problems are insignificant. Nevertheless, most contemporary numerical circulation models compromise geometric-dynamic integrity in favor of a multi-\(\beta\)-plane approximation to the geometry of the geoid. This approach codes Laplace operators as a sum of second-order derivatives and ignores first-order contributions from nontrivial Christoffel symbols.

A third set of issues of circulation modeling is associated with the nonlinearity of fluid dynamics. Nonlinear field theories inevitably couple variability on the smallest space- and time-scales with the largest scales available.
For numerical as well as theoretical purposes, processes on small spatial and fast temporal scales should be eliminated from circulation equations. In the first place, this applies to acoustics. The widely favored sound filter invokes the Boussinesq approximation to inertia and weight of the fluid. While this approach has been quite successful in the study of internal gravity waves and Rayleigh-Benard instability, its vorticity-inconsistency becomes a problem in long-term integrations of circulation modeling. Furthermore, Rayleigh-Benard (or static) instability involves fast convective motions on small spatial scales. Such convective events are generally not resolved in global circulation models, and their crucial role for the thermohaline circulation is represented by appropriate parameterizations. Hence, models assume the ocean to be in hydrostatic equilibrium and simply neglect internal vertical accelerations relative to Earth's gravitational acceleration. However, this straightforward introduction of hydrostatics generates a problem: now, momentum density is a 2-dimensional vector while the mass flux vector of the continuity equation remains 3-dimensional. Such a violation of the first law of motion will generally be uncritical for stationary flows or order-of-magnitude estimates. However, for the long-term integrations of circulation modeling, such a formulation is not entirely satisfactory.

The Boussinesq approximation, hydrostatics and the so-called "traditional approximation" of inertial forces define Richardson's Primitive Equations [7]. Currently, these equations provide the physical basis for most global circulation models. The present study utilizes GROB HOPE, a coarse version of the numerical Hamburg Ocean Primitive Equations model of MPIMET [8]. This model is based on a C-grid discretization of the Primitive Equations for UNESCO seawater [9] and allows various convection parameterizations which account for different characteristics of this process in the open ocean and in the bottom boundary layer down submarine slopes. GROB HOPE includes sea ice dynamics with viscous-plastic rheology [10], parameterizing cracking, ridging, rafting and deformation of sea ice. The model is forced by buoyancy fluxes and wind stresses at the sea surface as well as the freshwater discharges of Earth's 50 largest rivers. In long-term experiments (integration time: 1000 years) with climatological forcing resolving the annual cycle, the model assumes an essentially drift-free cyclostationary state after a few centuries, which reproduces the major water masses and gyre structures of the global ocean circulation as well as the sea ice cover and its seasonal variation at high latitudes.

While this model circulation exhibits the characteristic degree of realism of state-of-the-art simulations, it also displays a number of typical deficits. The model fails to maintain the observed Pacific Intermediate Waters. Furthermore, while the poleward Atlantic heat transport is certainly of the observed order of magnitude, its maximum of 0.8 PW is still somewhat lower than the 1.1 PW suggested by observations. On the other hand, the mass transport by the Antarctic Circumpolar Current, with 180 Sverdrup in the Drake Passage, is higher than the observed 140 Sverdrup.
The path of the Gulf Stream, which is crucial for the European climate and weather, turns out to be quite sensitive to the details of the atmospheric forcing and the chosen parameterization of subscale transports. For an extensive discussion of the strengths and weaknesses of the GROB HOPE circulation, see [8].

At this time, the versatility of numerical circulation models is sufficiently developed to simulate a wide variety of preconceived scenarios. However, beyond this illustrative role, state-of-the-art models generally lack the capacity of scientific discrimination between competing predictions and projections. To a large part, these uncertainties can be resolved by the systematic combination of models with observations. Such combination and confrontation of models with extensive and novel observational data sets has been the outstanding factor in forecast improvement by numerical weather prediction over the last thirty years. With maturing numerical ocean circulation models and a growing understanding of the variability of the ocean circulation, this approach has now also become attractive to the oceanographic and climate communities.

Given the characteristics of model data and observations in Earth System Modeling, the integration of information from different sources poses a considerable data engineering problem. Frequently, observations do not refer to prognostic model variables, while other parameters such as vertical velocities are practically unobservable. Moreover, observations are typically distributed highly irregularly in space-time. And data sets from both sources, model and observation, are large. The mathematics of the optimizing synthesis of large data sets are the objective of estimation theory [11]. To avoid the mutual enhancement of model and data error, estimation theory has been (and still is) developing a number of what are called data assimilation algorithms. Generally, these algorithms fall into two classes: variational and sequential techniques [12, 13, 14]. The equivalence of both methods is readily demonstrated in simple cases.

Variational assimilation, namely the Adjoint Method, is based on an application of inverse modeling techniques to the estimation problem. Variation of control parameters minimizes a cost function formed by the model-data misfit. This approach lends itself particularly to the estimation of equilibrium states and processes of finite duration. Computation of the cost gradient with respect to the controls calls for what is often referred to as the temporally backward integration of the adjoint model. For complex models, coding of the model adjoint is a substantial task, well comparable to coding the model itself. The practical relevance of adjoint assimilation in Earth System Modeling therefore arose only after the advent of the theory of automatic differentiation [15] and the subsequent development of automatic adjoint code compilers [16]. With global circulation models, the Adjoint Method finds presently wide application in sensitivity studies.
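For a concrete, if drastically simplified, picture of the Adjoint Method, the sketch below fits the initial state of a two-dimensional linear toy model to synthetic observations. Everything in it (model matrix, observation operator, noise level) is an invented stand-in, not GROB HOPE or any operational system; it only illustrates how a single backward sweep with the transposed model yields the cost gradient.

```python
import numpy as np

# Variational (adjoint) toy: estimate the initial state y0 of a linear model
# y_{n+1} = A y_n from noisy observations d_n = H y_n + noise by minimizing
# J(y0) = 1/2 * sum_n |H y_n - d_n|^2. The gradient of J comes from one
# backward sweep with the adjoint (transposed) model. All numbers are invented.
rng = np.random.default_rng(3)
A = np.array([[0.99, 0.10], [-0.10, 0.99]])   # toy linear "circulation model"
H = np.array([[1.0, 0.0]])                    # observe the first component only
N = 40
y0_true = np.array([1.0, 0.5])

y, data = y0_true.copy(), []
for _ in range(N):                            # synthetic observations
    y = A @ y
    data.append(H @ y + 0.05 * rng.standard_normal(1))

def cost_and_gradient(y0):
    traj, y = [], y0.copy()
    for _ in range(N):                        # forward model integration
        y = A @ y
        traj.append(y)
    J = 0.5 * sum(float((H @ yn - dn) @ (H @ yn - dn))
                  for yn, dn in zip(traj, data))
    lam = np.zeros(2)                         # backward (adjoint) integration
    for yn, dn in zip(reversed(traj), reversed(data)):
        lam = H.T @ (H @ yn - dn) + A.T @ lam
    return J, A.T @ lam                       # gradient with respect to y0

y0 = np.zeros(2)                              # first guess for the control
for _ in range(200):
    J, g = cost_and_gradient(y0)
    y0 -= 0.05 * g                            # steepest-descent update
print("estimated y0:", np.round(y0, 3), " true y0:", y0_true, " cost:", round(J, 4))
```

Automatic adjoint compilers [16] generate the analogue of the backward loop for a full circulation model, for which hand-coding the adjoint would be the substantial task referred to above.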
The dynamics of such processes can be equivalently formulated in the Langevin (or Heisenberg) representation and in the Fokker-Planck (or Schrodinger) representation [18]. The Langevin picture addresses the space-time behavior of the process in terms of its moments. In practice, this refers generally to the covariance only. Formally, the temporal development of the covariance is uniquely determined by the model dynamics. However, the practical derivation of the covariance dynamics for a complex model such as a global ocean circulation model readily becomes anything but straightforward. This applies particularly to nonlinear models, the issue of boundary and initial conditions for the covariance, stability questions and the problem of temporally backward assimilation. Nevertheless, at this time the literature on Kalman Filter assimilation in Earth System Modeling and other branches of engineering is almost exclusively dominated by the Langevin approach [14].

Alternatively, a stochastic process may be considered in phase space in terms of its probability density. Provided the process is Markovian and jumps remain small in an appropriate sense [18], the dynamics of this probability density are governed by the Fokker-Planck Equation. The advection and diffusion coefficients of this linear parabolic differential equation are determined by model dynamics and observational error statistics. In general, these coefficients are also difficult to obtain from a complex model. However, for sufficiently short update intervals, phase-space advection and diffusion can be determined phenomenologically from the model output by histogram techniques. In this framework, the assimilation method provides practical answers to the issues of phase-space reduction, model nonlinearity, initial and boundary conditions for higher moment dynamics as well as stability. Moreover, the existence of the Backward Fokker-Planck Equation [18] will permit the generalization of sequential assimilation to include the temporally backward extrapolation of data information. The mathematical aspects of the Fokker-Planck representation of sequential Kalman Filter assimilation have been developed in detail by [19].

With the typical volume of model output and observational record in Earth System Modeling, computational demands for assimilation with least-square optimality are always quite high. For a reduction of the computational burden, the present estimation utilizes a combination of Kalman Filter assimilation and simple "nudging". While subsurface temperatures from the TAO/TRITON array will be assimilated sequentially, observations of global sea-surface temperatures are essentially inserted into the model at daily intervals. The feasibility of this simplistic technique is by no means trivial. Older model generations were generally unable to "digest" essentially unprocessed data, and model-data inconsistencies would readily emerge in various regions of space-time and phase space. It will here be shown that the quality of contemporary models and data sets is sufficiently high for nudging to be beneficial for the ocean state estimate.

### Fokker-Planck Picture of the Kalman Filter

Societal and scientific needs in Earth System Monitoring are presently best met by an operational, steady combination and confrontation of substantial, yet incomplete observations with complex, but nevertheless approximate numerical models.
This combination aims to fill data gaps by dynamical extrapolation of observations on the basis of physical laws incorporated in the model, and simultaneously to constrain model uncertainties by operating the model in close vicinity of observations. One source of information is used to compensate the deficits of the other and thus arrive at a comprehensive state estimate including an assessment of model and observation quality. The assimilation algorithms of estimation theory are information integration techniques which prevent the mutual enhancement of errors from different sources and maximize the benefit from imperfect models and data. As an engineering tool, data assimilation accepts or rejects a hypothesis (the model), while a constructive evaluation of structural model deficits remains beyond its scope.

The key idea of sequential assimilation is to integrate the model until an observation becomes available. At this time, model integration is halted and the state of the system is updated by an appropriate combination of model prediction and observation. Subsequently, this update provides the initial value for the continued model integration (fig.1). Hence, the main task in sequential data assimilation is the determination of the temporal development of the relative weight of model prediction and observation.

Figure 1: Flow Chart for Sequential Data Assimilation

The state of the system under consideration (here: the global ocean) is given by a finite- (generally: very high) dimensional vector \(Y(t,x)\) where \(t\) denotes time and \(x=(x_{(1)},x_{(2)},x_{(3)})\) the 3-dimensional spatial coordinate. In numerical ocean models, time and space coordinates will generally be members of a discrete, 4-dimensional grid. The time-development of the state of the system is governed by the model dynamics

\[\frac{d}{dt}Y_{m}(t)=\Lambda(Y_{m},t) \tag{1}\]

where the index \(m\) refers to the model and \(\Lambda\) is a generally nonlinear operator of \(Y_{m}\) with

\[\Lambda(Y_{m}=0,t)=0\]

and some explicit time-dependence representing external forcing. For simplicity, spatial coordinates have been suppressed in (1). It is now assumed that the model is sufficiently "good" so that the time-development of the true, i.e. observed system is given by the equations of motion

\[\frac{d}{dt}Y_{o}(t)=\Lambda(Y_{o},t)+W_{0} \tag{2}\]

where the index \(o\) refers to observation, while \(\Lambda\) is the same operator as in (1) and \(W_{0}\) an additive noise of known probability distribution. The possibly nonstationary noise is assumed to have zero average, finite spatial and no temporal correlation, i.e. it is assumed to be temporally white:

\[<W_{0}(t,x)>=0,\qquad<W_{0}(t,x)W_{0}(t+\tau,x+r)>=Q(t,r)\delta(\tau).\]

Physically, this noise accounts for model deficits. Equation (2) is the Langevin representation of a (nonlinear) stochastic differential equation. The model error

\[y(t,x)=Y_{o}(t,x)-Y_{m}(t,x)\]

appears under a variety of names in different applications of assimilation theory. Variational techniques generally term this quantity "misfit" while numerical weather prediction refers to the model error as "innovation".
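As a concrete illustration of this cycle, the following minimal Python sketch integrates a toy one-dimensional analogue of Eqs. (1) and (2) between analysis times and blends forecast and observation at each update. The model operator, noise levels and the constant gain are illustrative assumptions standing in for the Kalman Gain derived below; nothing here is taken from the GROB HOPE configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the model dynamics dY/dt = Lambda(Y, t): a damped,
# periodically forced scalar system (purely illustrative).
def integrate(y, t0, t1, dt=0.01, noise=0.0):
    t = t0
    while t < t1 - 1e-12:
        y += dt * (-0.5 * y + np.sin(t)) + np.sqrt(dt) * noise * rng.standard_normal()
        t += dt
    return y

y_true, y_model = 1.0, 0.0
gain, obs_err = 0.6, 0.1                 # assumed constant gain and obs. error
for cycle in range(20):                  # one observation per unit time
    t0, t1 = float(cycle), float(cycle + 1)
    y_true = integrate(y_true, t0, t1, noise=0.2)  # "truth", Eq. (2), with W_0
    y_model = integrate(y_model, t0, t1)           # model forecast, Eq. (1)
    obs = y_true + obs_err * rng.standard_normal()
    # Analysis time: halt, update with the model-data misfit, and restart
    # from the update (it initializes the next forecast cycle).
    y_model += gain * (obs - y_model)
    print(f"cycle {cycle:2d}: remaining error = {y_true - y_model:+.3f}")
```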
By definition, the model error satisfies the Langevin equation

\[\frac{d}{dt}y(t)=\Lambda(y+Y_{m},t)-\Lambda(Y_{m},t)+W_{0}.\]

With the help of the mean value theorem, the dynamical operator is rewritten as

\[\Lambda(y+Y_{m},t)-\Lambda(Y_{m},t)=y\Lambda^{\prime}(Y_{m},t)+R\]

where the residual \(R\) comprises terms of second and higher order. Writing now

\[\Lambda(y+Y_{m},t)-\Lambda(Y_{m},t)=\Lambda(y,t)-\Lambda(y,t)+y\Lambda^{\prime}(Y_{m},t)+R\]

and using again the mean value theorem on \(\Lambda(y,t)\) together with \(\Lambda(0,t)=0\), one arrives at

\[\frac{d}{dt}y(t)=\Lambda(y,t)+W \tag{3}\]

where the new noise \(W\) comprises \(W_{0}\) and terms of second and higher order. It is readily seen that

\[<W(t,x)>=0.\]

The error dynamics (3) account for nonlinearities and provide the basis for the histogram technique to be invoked below.

For simplicity, data are here assumed to represent prognostic model variables directly. Furthermore, measurements are made at specific observation points \(X=(X_{(1)},X_{(2)},X_{(3)})\) which are generally far fewer in number than model grid points \(x\) and do not coincide with these. Interpolation to model points will generally be straightforward, and observation points are here assumed to coincide with model points.

In the context of monitoring and prediction, observations typically become available at specific times which are termed "analysis times" in numerical weather prediction. At these times, sequential data assimilation halts the model integration and updates the system's state vector with observational information. This update is given in terms of a convolution-type integral

\[Y(t,x)=Y_{m}(t,x)+\int_{0}^{t}d\tau\int dX\,G(\tau,x,X)\,y(\tau,X) \tag{4}\]

where the kernel \(G(\tau,x,X)\) is called the gain. For the update (4), the gain accounts for all observations prior to analysis time and hence only represents the temporally forward propagation of data information. In this sense, the kernel in (4) represents the "retarded gain". It is equally possible to ask at which state the system initially started if it is at the observed state at analysis time. The account of the temporally backward propagation of data information requires the inclusion of the "advanced gain". Sequential assimilation admits the dynamical extrapolation of data information into both past and future. The corresponding algorithm, namely a meaningful representation of the advanced gain, becomes particularly transparent in the phase space representation. Its formal details will be discussed elsewhere.

There are a number of expressions for the gain-kernel in (4), each leading to a different variance for the state estimate. The estimate has minimum variance if the gain is given by the Wiener-Hopf equation

\[K(t,x,X)=\int_{0}^{t}d\tau\int dX_{0}\,G(\tau,x,X_{0})\,K(t-\tau,X,X_{0}) \tag{5}\]

where

\[K(t,x_{1},x_{2})=<y(t,x_{1})y(t,x_{2})>-<y(t,x_{1})><y(t,x_{2})>\]

is the error covariance, with brackets indicating the ensemble mean and the indices on \(x\) denoting different points, i.e. different 3-tuples (and not different vector components). The gain defined by equation (5) is called the Kalman Gain, and the sequential data assimilation algorithm given by (4) and (5) is called the Kalman Filter [20]. It follows from (5) that the central problem of Kalman Filter assimilation is the determination of the temporal development of the error covariance \(K(t,x_{1},x_{2})\).
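In a finite-dimensional, linear, discrete-time setting, the retarded gain of (4)-(5) reduces to the familiar matrix form of the Kalman Filter, which the short sketch below iterates to its asymptotic gain. The dynamics matrix, noise covariances and observation operator are assumed toy values chosen only to make the recursion concrete.

```python
import numpy as np

# Matrix analogue of the minimum-variance update (4)-(5): for linear error
# dynamics y_{k+1} = A y_k + w with <w w^T> = Q, and observations H y + r
# with <r r^T> = R, the gain G = K H^T (H K H^T + R)^{-1} minimizes the
# posterior error variance. All matrices below are illustrative assumptions.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])      # linearized dynamics
Q = 0.01 * np.eye(2)            # model-error covariance (the noise W)
H = np.array([[1.0, 0.0]])      # only the first component is observed
R = np.array([[0.05]])          # observation-error covariance

K = np.eye(2)                   # initial error covariance
for _ in range(50):
    K = A @ K @ A.T + Q                               # forecast of K
    G = K @ H.T @ np.linalg.inv(H @ K @ H.T + R)      # Kalman Gain
    K = (np.eye(2) - G @ H) @ K                       # analysis of K
print("asymptotic Kalman Gain:\n", G)
```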
Formally, the covariance matrix and its temporal behavior are uniquely determined by the model error dynamics (3). In practice, however, exploitation of this equation encounters a number of serious problems. Covariance dynamics on the basis of (3) are determined at every model grid point. If model integration requires \(N\) operations per time-step, integration of the covariance dynamics requires an additional \(N^{2}\) operations. An order-of-magnitude estimate of the number of operations for a contemporary ocean circulation model such as GROB HOPE is \(N\approx 10^{5}\). Even taking into account the rate of increase in available computing capacities, integration of the complete covariance dynamics in the Langevin picture becomes prohibitive for global circulation models.

Furthermore, these models typically involve strong nonlinearities. For mean quantities such as the error covariance, this nonlinearity leads to a hierarchy problem since the mean of a product does not equal the product of the means. The difficulties and ambiguities of practical closures of such hierarchies are well known from turbulence theory [21]. Additional complications arise from the lack of initial and boundary conditions for the dynamics of higher moments and for the determination of stability and uniqueness conditions of the assimilation procedure. These problems represent some of the reasons for the prevalence of the Adjoint Method in Earth System estimation.

An alternative approach to the Kalman Filter utilizes the phase space or Fokker-Planck representation of stochastic processes. In this framework, a detailed assimilation algorithm for application in Earth System estimation has been developed by Belyaev and coworkers [19, 22]. The starting point of this formulation is the joint probability distribution

\[p=p(t,\eta_{1},\eta_{2};X_{1},X_{2})\]

for the two-component projection \(\eta\) of the model error \(y\)

\[\eta=(\eta_{1},\eta_{2})=(y(t,X_{1}),y(t,X_{2}))\]

to have the value \(\eta_{1}\) at observation point \(X_{1}\) and the value \(\eta_{2}\) at observation point \(X_{2}\) at time \(t\). In terms of this probability the error covariance (5) takes the form

\[K(t,X_{1},X_{2})=\int d^{2}\eta\,\eta_{1}\,\eta_{2}\,p-\int d^{2}\eta\,\eta_{1}\,p\int d^{2}\eta\,\eta_{2}\,p. \tag{6}\]

Determination of the error covariance thus requires the calculation of joint probability distributions \(p\), and this operation sets the magnitude of the necessary computing capacities. Notice that (6) defines the error covariance only for pairs of observation points. In Earth System estimation, the number \(M\) of these points is typically orders of magnitude smaller than the number \(N\) of model grid points. Using the symmetry of the joint probability distribution, the covariance matrix for \(M\) observations has \(M(M+1)/2\) independent members. Given 1000 data points (which still is a fairly small data set in Earth System observation), the error covariance calls for the computation of half a million probability functions. While this number is clearly much smaller than \(N^{2}\approx 10^{10}\), it still poses a considerable computational task.
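The quadrature behind (6) is elementary once \(p\) is tabulated on a phase-space grid. The sketch below evaluates it for an assumed bivariate Gaussian joint density, chosen only because the exact covariance is known and the discretization can be checked against it.

```python
import numpy as np

# Error covariance (6) from a joint probability density p(eta_1, eta_2)
# tabulated on a 2-d phase-space grid. A bivariate Gaussian with covariance
# 0.5 is assumed so the result of the quadrature can be verified exactly.
eta = np.linspace(-5.0, 5.0, 201)
d = eta[1] - eta[0]
E1, E2 = np.meshgrid(eta, eta, indexing="ij")
rho = 0.5
p = np.exp(-(E1**2 - 2 * rho * E1 * E2 + E2**2) / (2 * (1 - rho**2)))
p /= p.sum() * d * d                         # normalize to unit total probability

m1 = (E1 * p).sum() * d * d                  # first moments
m2 = (E2 * p).sum() * d * d
K12 = (E1 * E2 * p).sum() * d * d - m1 * m2  # Eq. (6)
print(f"K12 = {K12:.4f}  (exact value: {rho})")
```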
In the phase space picture, the error \(\eta\) is viewed as a Markov process. The temporal development of the conditional probability distribution of such a process is governed by the Master Equation [18]. For Markov processes with small jumps this equation is well approximated by the Fokker-Planck Equation [18, 23, 24]. In the present case this equation takes the form

\[\partial_{t}p=-\partial^{n}\Lambda_{n}p+\frac{1}{2}\partial^{n}(q_{mn}\,\partial^{m}p) \tag{7}\]

where the usual summation convention for indices \(m,n,\ldots=1,2\) is implied. The vector

\[J_{n}=\Lambda_{n}p-\frac{1}{2}q_{mn}\,\partial^{m}p\]

denotes the advective-diffusive probability flux in phase space with advection velocity \(\Lambda_{n}\) and diffusion \(j_{n}=-\frac{1}{2}q_{mn}\partial^{m}p\). In this sense, the Fokker-Planck Equation (7) expresses the conservation of probability in phase space. Temporally, this linear parabolic equation governs the development of the conditional probability \(p(t_{2},\eta_{2}\,|\,t_{1},\eta_{1})\) of the error \(\eta\) to have the value \(\eta_{2}\) at time \(t_{2}\) and location \(X_{2}\), given its value \(\eta_{1}\) at time \(t_{1}<t_{2}\) at location \(X_{1}\). The required joint probability follows from the solution of the Fokker-Planck Equation by Bayes' Rule

\[p(t,\eta_{2},\eta_{1})=p(t,\eta_{2}\,|\,t,\eta_{1})p(t,\eta_{1}).\]

It is noted that there is also a Backward Master Equation [18, 24]. The small-jump approximation to this equation leads to the Adjoint Fokker-Planck Equation, which governs the temporally reversed development of the error distribution. Hence, the Adjoint Fokker-Planck Equation provides the basis for the determination of the Advanced Kalman Gain.

For the solution of the Fokker-Planck Equation (7) the phase space advection \(\Lambda_{n}(t,\eta)\) and the diffusion tensor \(q_{mn}(t,\eta)\) have to be known. These parameters are determined by error dynamics and data stochastics according to

\[\Lambda_{n}(t,\eta)=<\Lambda_{n}(t,\eta)\,|\,\eta=y>=\tau^{-1}\int d^{2}\eta^{\prime}\,(\eta_{n}-\eta_{n}^{\prime})\,p(t,\eta\,|\,t^{\prime},\eta^{\prime})\]

with \(\tau=t-t^{\prime}\) for \(t^{\prime}<t\) for the advection, while one has for the diffusion tensor

\[q_{mn}(t,\eta)-Q_{mn}(t,\eta)=<\Lambda_{m}(t,\eta)\Lambda_{n}(t,\eta)\,|\,\eta=y>=\tau^{-1}\int d^{2}\eta^{\prime}\,(\eta_{m}-\eta_{m}^{\prime})(\eta_{n}-\eta_{n}^{\prime})\,p(t,\eta\,|\,t^{\prime},\eta^{\prime})\]

where \(Q_{mn}\) denotes the data covariance. With these definitions, the Fokker-Planck Equation (7) is readily seen to be equivalent to the Langevin Equation (3). Multiplying (7) from the left by \(\eta\) and integrating over phase space one obtains

\[\frac{d}{dt}<\eta_{n}>=<\Lambda_{n}(t,\eta)>\]

in agreement with the two-component projection of the ensemble average of (3).

In principle, the advection and diffusion terms are determined by the Langevin representation (3) of the error dynamics. However, particularly for strongly nonlinear dynamics, such a derivation encounters serious formal difficulties, and a unique and practical solution to this problem is currently not known. In view of the typical problem in Earth System Monitoring, Belyaev and coworkers [22] propose an alternative, phenomenological determination of these parameters. In Earth System Monitoring the time interval between consecutive samples is typically short compared to the time scales of the processes under observation. Under these conditions it becomes possible to consider the model as a black box and determine the transition (i.e. conditional) probabilities by histogram techniques from model input and output. To this end, the number \(N^{\prime}\) of all grid points is counted for which \(\eta\) has the value \(y^{\prime}\) at time \(t^{\prime}\). At the later time \(t=t^{\prime}+\tau\), the number \(N\) of those former \(y^{\prime}\)-points is counted for which \(\eta\) now has the value \(y\). The conditional probability is then given by the ratio

\[p(t,\eta\,|\,t^{\prime},\eta^{\prime})=N/N^{\prime}.\]

Since \(0\leq N\leq N^{\prime}\), this expression always satisfies the condition

\[0\leq p(t,\eta\,|\,t^{\prime},\eta^{\prime})\leq 1\]

necessary for being interpreted as a probability density. With the help of this probability, the advection and diffusion parameters are readily obtained by phase space integration according to the above formulas.
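A minimal sketch of this black-box construction: synthetic error fields at two consecutive output times stand in for real model output, the transition probability is the bin-count ratio \(N/N^{\prime}\), and advection and diffusion follow as conditional moments of the jumps. Field sizes, bin edges and the AR(1) surrogate dynamics are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Error field at two consecutive output times t' and t = t' + tau; an AR(1)
# process is assumed as a stand-in for the unknown model-error dynamics.
tau, n_grid = 1.0, 100_000
eta_old = rng.standard_normal(n_grid)
eta_new = 0.8 * eta_old + 0.3 * rng.standard_normal(n_grid)

# p(t, eta | t', eta') = N / N': bin the starting values, and within each
# starting bin examine where the points have moved after one interval.
edges = np.linspace(-4.0, 4.0, 41)
centers = 0.5 * (edges[:-1] + edges[1:])
i_old = np.digitize(eta_old, edges)

adv = np.full(centers.size, np.nan)   # phase-space advection Lambda
dif = np.full(centers.size, np.nan)   # diffusion (data covariance Q not yet subtracted)
for k in range(centers.size):
    sel = i_old == k + 1
    if sel.sum() < 50:                # skip poorly sampled bins
        continue
    jumps = eta_new[sel] - eta_old[sel]
    adv[k] = jumps.mean() / tau
    dif[k] = (jumps**2).mean() / tau
print("advection near eta = 0:", adv[centers.size // 2])
```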
It is now possible to solve the Fokker-Planck Equation numerically. As a linear parabolic differential equation in the 2-dimensional unbounded plane, the equation is efficiently integrated by the Peaceman-Rachford scheme [25]. For details of this integration concerning initial conditions, positive definiteness and normalizability of the solution see [19]. The resulting probability density determines the error covariance at all observation points \(X\) according to (6), and the covariance at all model grid points is constructed from this expression by interpolation [19]. Using this error covariance, the Wiener-Hopf Equation (5) is solved for the Kalman Gain and the model is finally updated according to (4).

### Ocean State Estimation

The feasibility of operational global ocean state estimation will here be demonstrated by combining simulations of the numerical circulation model GROB HOPE with observations of global sea-surface temperatures (SST) and observed subsurface temperatures from the TAO/TRITON array for the El Nino year 1997. Besides a globally realistic mean state, the objective of the estimate is the improvement of the model's El Nino simulation.

GROB HOPE has 20 layers in the vertical with high resolution of 10 layers in the upper 500m. In the horizontal, the model uses a spatially inhomogeneous grid obtained from a conformal transformation of the geographical coordinates. At the present stage of model development and availability of computer capacities, the disadvantages of an inhomogeneous horizontal grid are easily outweighed by its advantages. For one, polar singularities are avoided by transformation of the model poles to a continental site. Secondly, the spatial inhomogeneity of the horizontal grid allows high resolution in regions of interest (up to 25 km for the Arctic Ocean in the present case) while low resolution is accepted for remote regions (300 km near the equator in the present case). This design avoids well-known open boundary problems of fine-resolution regional or nested models. While the low-resolution regions provide a model-consistent climatology, the high-resolution regions admit even the study of mesoscale processes. In spite of this versatility, the machine requirements for GROB HOPE are those of a global model with a spatially homogeneous \(3^{\circ}\times 3^{\circ}\) grid. This design permits a time step of 2.4 hours.

With its coarse spatial resolution in the tropics, the GROB version of HOPE is not especially suited to El Nino simulation. It is here to be shown that assimilation of observations is able to offset these design limitations. Success in this framework provides a demonstration of the capacities of sequential assimilation.
For operational purposes, on the other hand, data will always be assimilated into the best model available. After an initial spin-up period of 2 years with restoring to the 3-dimensional buoyancy climatology of WOA, the model is integrated from 1948 to the present with surface forcing derived from the NCEP reanalysis [26]. Atmospheric data are interpolated onto the GROB HOPE grid, and surface buoyancy and momentum fluxes are calculated by bulk formulae [8] depending on both the atmosphere and the ocean. Hence, the eventual ocean forcing is determined by the particular realization of the ocean state by the model, while the present ocean-only set-up is unable to account for a feedback of the ocean on the atmosphere. For a reduction of trends in the deep ocean, the integration over the NCEP period is repeated. Furthermore, model surface salinities are nudged to a mean annual cycle taken from WOA with a time constant of a little over a year (385d). Use of a mean annual cycle rather than an annual mean accounts for the seasonal variation in the hemispheric distribution of convective activity.

With this forcing the model is integrated to 31 December 1997. The period from 1 January 1997 to 31 December 1997 is taken as the control run in the present experiment, and the model state at 31 December 1996 provides the initial condition for the assimilation. It is noted that the model runs considered here do not address the prediction problem. Surface data transfer external El Nino information to the ocean model.

Fig.2 shows the monthly mean of the net surface heat flux of the control configuration for December 1997. This heat flux is determined by atmospheric data from the NCEP reanalysis and oceanic data from GROB HOPE. The main feature is the characteristic seasonal separation of the (southern) summer and (northern) winter hemispheres: the ocean gains heat in summer and loses heat during winter. A particular detail in the North Atlantic is associated with model problems in simulating a realistic Gulf Stream path: off the American east coast, the ocean is unrealistically warm, leading to a pronounced heat loss, while the ocean is unrealistically cold in the region of the so-called North West Corner, leading in turn to a pronounced heat gain by the ocean. Similar aberrations are seen in the Kuroshio region, the confluence of the Malvinas and Brazil Currents off the South American east coast and for the Agulhas Current near the Cape of Good Hope. The paths of these currents are essentially determined by vorticity dynamics, and mismatches of NCEP-derived forcing and model simulation are due to ambiguities in the vorticity dynamics of the Primitive Equations. Given the NCEP fluxes, GROB HOPE fails to simulate mesoscale details of the state of the underlying ocean surface realistically.

Figure 2: Monthly Mean Surface Heat Flux for December 1997 [W\(m^{-2}\)]. Control.

It will now be shown that nudging of observed SST into the model improves the state estimate considerably. To this end, GROB HOPE is restarted from 31 December 1996 and daily mean Reynolds SST of the NCEP data set are inserted into the model's top layer with a time constant of one day. During this one-year integration, model-data incompatibilities do not develop. This is also true for GROB HOPE runs with SST nudging over the full NCEP period (not shown).
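A sketch of what this nudging amounts to numerically: a Newtonian relaxation of the model's top-layer temperature toward the observed daily SST with a one-day time constant (the 385-day constant plays the same role for surface salinity). The grid shape and the synthetic fields are assumptions; only the time step of 2.4 hours is taken from the model description above.

```python
import numpy as np

# Newtonian relaxation ("nudging") of the model top layer toward observed SST:
# T <- T - (dt / tau) * (T - T_obs), with tau = 1 day for SST. The grid and
# fields below are synthetic placeholders.
def nudge(T_model, T_obs, dt_seconds, tau_days):
    return T_model - (dt_seconds / (tau_days * 86400.0)) * (T_model - T_obs)

T_model = 20.0 + np.zeros((64, 128))   # model top-layer temperature [deg C]
T_obs = T_model + 1.5                  # daily mean observed SST field
dt = 2.4 * 3600.0                      # 2.4 h time step, as quoted for GROB HOPE
for _ in range(10):                    # ten steps = one model day
    T_model = nudge(T_model, T_obs, dt, tau_days=1.0)
print("mean misfit after one day:", float((T_obs - T_model).mean()))
```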
Fig.3 depicts the monthly mean net surface heat flux for December 1997 with SST nudging.

Figure 3: Monthly Mean Surface Heat Flux for December 1997 [W\(m^{-2}\)]. Nudge.

In comparison to fig.2, it is seen that aberrations of the major current systems are significantly reduced and the estimate of mesoscale features of the state of the sea surface improves without penalty. Nudging effects are not confined to the upper ocean alone. In convectively active regions surface temperature information is rapidly communicated to the abyss. For the present integration period of one year the deep ocean remains of course unable to adjust to the "injected" information. Nevertheless, with these data and for this model, nudging becomes a practical option of ocean state estimation by an efficient and yet robust model-data combination. Other presently available observations are of similar quality: sea-level data from space-borne altimeters and space-based observations of sea-ice cover. By nudging observations of this type into a global ocean circulation model, it is currently possible to arrive efficiently at a comprehensive and realistic estimate of the global state of the sea surface at mesoscale resolution.

For the assessment of the state of the interior ocean, consider the equatorial temperature field during the El Nino episode of 1997/98. Fig.4 shows the temperature difference Nudge-WOA along the equator for December 1997, where "Nudge" refers here to the GROB HOPE simulation of the global ocean circulation with NCEP forcing and nudging of daily SST observations, i.e. the run also portrayed in fig.3. In the abyssal Pacific, simulation and observation are seen to differ by typically less than half a degree. While the simulation is systematically colder than WOA, structural mismatches do not emerge. The agreement is less satisfactory in the abyssal Indic and Atlantic.

In the near-surface Pacific, the model clearly exhibits the characteristic El Nino pattern. Relative to the WOA climatology, the eastern and central Pacific are colder while the West is anomalously warm. Comparison with observed subsurface temperatures [27] shows that the model simulates the phase of the process quite realistically. Since phase information is directly provided by the forcing data, this model response is primarily indicative of the consistency of the simulation of near-surface wave propagation with surface boundary conditions.

Other features of fig.4 exhibit a lesser degree of realism. The warm anomaly in the surface waters of the central Pacific cannot be found in the observational record [27]. Here, the mixed-layer model of GROB HOPE fails to mix the heat supplied at the surface sufficiently deep into the upper ocean. In the model, heat mainly penetrates to greater depth by slow diffusion processes. In the ocean, however, these transfers are dominated by turbulent mixing. As another consequence of the mixing parameterization, GROB HOPE underestimates mixed-layer depths throughout the year and thus fails to account for Kelvin wave downwelling during El Nino. Thermocline temperatures beneath the mixed layer are about \(2^{\circ}\) Celsius too warm. Here, the model diffuses too much heat to depths of approximately 500m in the eastern equatorial Pacific, which penetrates westward at approximately 250m. This mismatch is the result of unrealistically strong downward diffusion of heat and unrealistically weak upwelling of cold waters. For Primitive Equation models, vertical transfers pose a greater problem. Nonhydrostatic mixing processes have to be parameterized, and such parameterizations are by no means trivial.
The mixed-layer model of GROB HOPE is tuned to yield realistic mixing depths at moderate latitudes, and compromises for the equatorial mixed layer are accepted. The alternative would be a far more complex and machine-intensive mixed-layer model. Moreover, vertical velocities are determined from mass conservation, independent of the momentum budget. Possible problems and ambiguities are smeared out by diffusion. Hence, models have a tendency to use diffusion where the space-time and phase-space characteristics of the real ocean are determined by advection and propagation.

Sequential assimilation of subsurface temperatures improves this state estimate significantly. Subsurface temperature data are taken from the TAO/TRITON array, which consists of approximately 70 moorings in the tropical Pacific between \(8^{\circ}S\) and \(8^{\circ}N\). The buoys record a number of atmospheric parameters, sea surface temperatures and subsurface temperatures at 10 irregularly spaced depths in the upper 500m. Records are transmitted to shore in real-time via the ARGOS satellite system. TAO/TRITON has become one of the most successful ground-based ocean observatories for two major reasons. In the first place, the relatively quiescent tropical waters allow the long-term deployment of buoys. For the more energetic high-latitude oceans, the physical lifetime of a similar array would be significantly shorter than the time scales of the ocean processes of interest. Secondly, the observed variability is readily interpreted in terms of Matsuno's theory. Since 1985, the TAO/TRITON array has become an integral part of operational services as well as ocean and climate research.

In contrast to the surface boundary condition, nudging of regional TAO/TRITON data is not an option. This array is physically embedded in the global ocean circulation, and its data interact physically with their neighborhood in space-time as well as in phase space. In the Fokker-Planck picture of the Kalman Filter this is accounted for by phase space advection and diffusion, which are updated at every assimilation step. To this end, model integration is halted at the end of each month and the model temperature field is updated by observed TAO temperatures. After the update, model integration resumes and continues for one month, when model temperatures are updated again. Thus, model operation is constrained to the vicinity of the observed state of the ocean.

Fig.5 shows the monthly mean temperature difference Assimilation-Nudge along the equator for December 1997. The data are seen to have three major effects on the estimate: the surface becomes colder, the mixed layer warmer and the thermocline colder. These modifications lead to a significantly higher degree of realism for the estimate. Assimilation ensures that heat supplied at the surface is uniformly mixed into the upper layer and that cold water is upwelled into the thermocline. In response to the data information, the model replaces diffusion-dominated dynamics with mixing and advection. However, it is also seen that the assimilation still exhibits some pockets of warm water below 500m, although far fewer than in fig.4. Primarily, these pockets are a consequence of the lower boundary condition chosen for the vertical transition probabilities: the model is assumed to be true at 500m. Obviously, there is room for improvement.

A different view of these data effects is given in fig.6. The figure shows a time series of monthly mean temperature profiles at a location in the eastern equatorial Pacific for 1997.
Simulated (black) and assimilated (red) profiles are compared. The simulation is clearly diffusion dominated, unable to produce a mixed layer and leaking too much heat into the thermocline. The assimilation also fails to produce well-defined mixed layers during the first part of 1997. Before the arrival of the downwelling Kelvin wave in the eastern Pacific, mixed layers here are shallow (typically 25m). Their absence in the assimilation during the first part of the year is a consequence of the poor vertical resolution of GROB HOPE. With the arrival of the Kelvin wave, assimilation produces the characteristic signature of turbulent mixing in the upper ocean with realistic mixed-layer depth. At the same time, the thermocline is cooled by upwelling of colder water and temperature gradients at the mixed-layer base increase.

Figure 4: Monthly Mean Temperatures on the Equator. December 1997. Nudge-WOA.

Figure 5: Monthly Mean Temperatures on the Equator. December 1997. Assimilation-Nudge.

Figure 6: Monthly Mean Temperature Profiles at \(260^{\circ}\), \(0^{\circ}\) in 1997. Nudge (black), Assimilation (red).

It is mentioned (corresponding plots not shown) that the model extrapolates data information beyond the temperature field in the TAO/TRITON region. By December 1997, temperatures as far as \(30^{\circ}\) latitude in both hemispheres are modified by the assimilation. In phase space, model dynamics extrapolate temperature observations onto sea level, salinity and the velocity field and improve the estimate for the complete state of the ocean. On the other hand, without continued assimilation the model loses memory of the data information from the upper equatorial Pacific after a time period of three to six months.

### Summary

Societal needs of climate monitoring call for the installation of global ocean state estimation in the framework of an operational service similar to national weather prediction agencies. Novel ocean observation techniques provide a global data base for such estimates. For a number of parameters, these observations are available almost in real-time and at mesoscale resolution. Assimilators utilize numerical circulation models to dynamically extrapolate measurements in space-time and phase space and, at the same time, constrain model uncertainties. Given the data volume in ocean observation and modeling, the requirements in computing resources are high.

For efficiency, it is significant that model and data quality are sufficiently developed to allow nudging the model to essentially unprocessed data at short time constants. This applies particularly to the sea surface, where the data wealth is largest and primarily determines the surface forcing with a high degree of realism. The forcing problem that plagued ocean model development 20 years ago has been resolved.

Dynamical extrapolation of interior ocean data requires advanced assimilation techniques. Monitoring problems are best addressed by sequential methods such as the Kalman Filter. Typically, observations do not coincide with the corresponding model solution and may even be incompatible with model dynamics. The Kalman Filter finds the initial value for model restart that is both as close as possible to the data and compatible with model dynamics. To this end, the phase space representation of the filter solves (a large number of) simple, 2+1 dimensional, linear Fokker-Planck equations which represent model dynamics in terms of phase space advection and diffusion.
Determination of these parameters by an elementary histogram method circumvents a number of highly complex, but essentially technical, issues of the stochastics of nonlinear systems. For numerical analysis, the method proves efficient and reliable.

At this time, models alone are generally unable to simulate the global ocean circulation with satisfactory realism. However, they do capture large-scale features such as gyres, water masses and their seasonal and longer-term variation quite realistically if these are determined by large-scale features of topography and external forcing. Problems typically emerge with the dynamical control of the density field. In the present study, this is demonstrated for equatorial mixing and upwelling and the paths of major ocean currents. On the other hand, the assimilation has shown that models are not antagonistic to (temporary) operation in closer vicinity of the data. At this development stage, model operation in the assimilation mode is capable of delivering practically relevant global ocean state estimates provided a continuous inflow of data is guaranteed.

Some improvement in model performance is readily obtained by fairly simple measures. Higher vertical resolution is oftentimes possible within the framework of given computing resources. A significant increase of the overall horizontal resolution generally requires an upgrade of computer resources. For the performance of HOPE at higher spatio-temporal resolution, see [8]. Moreover, the modeling community continuously develops increasingly appropriate parameterizations of subscale processes, and models are updated accordingly. On the other hand, as long as models are unable to account for basic laws of nature such as the first law of motion, long-term projections with little or no data input after initialization will remain questionable.

In this context, it is noted that the "Newtonization" of Richardson's equations is actually more or less trivial: vertical integration of the Primitive Equations for an incompressible (multilayer) fluid leads again to a Newtonian set of equations of motion. In this framework, vertical variability appears as internal variability of the spatially strictly 2-dimensional fluid. Minor questions arise from the hierarchy problem that results from vertical integration of the nonlinear advection term. Since stratification is represented by the multilayer structure, low-order cut-offs of the advection hierarchy suffice for most purposes of circulation modeling. This shallow water approach to the circulation problem can be formulated with geometric-dynamic integrity and energetic consistency in both the viscous nonrotating limit and the ideal rotating limit. The theory also admits comprehensive analytical studies of wave-circulation interaction. Currently, physically consistent global circulation models on the basis of shallow water theory are not available.

## References

* [1] Levitus,S., T.Boyer, M.Conkright, T.O'Brien, J.Antonov, C.Stephens, L.Stathoplos, D.Johnson and R.Gelfeld 1998, _World Ocean Data Base_ (NODC, Washington, D.C.).
* [2] Internat.Conf.on The Ocean Observing System for Climate, OOPC, St.Raphael, 1999.
* [3] Matsuno,T. 1966, J.Meteorol.Soc.Jpn. **44**, 23.
* [4] Wyrtki,K. 1975, J.Phys.Oceanogr. **5**, 572.
* [5] Busalacchi,A. and J.O'Brien 1981, J.Geophys.Res. **C86**, 10901.
* [6] Zebiak,S. and M.Cane 1987, Mon.Wea.Rev. **115**, 2262.
* [7] Richardson,L. 1922, _Weather Prediction by Numerical Process_ (Cambridge University Press, Cambridge, East Anglia).
* [8] Marsland,S., H.Haak, J.Jungclaus, M.Latif and F.Roske 2002, Ocean Modeling, in press.
* [9] UNESCO 1983, Tech.Pap.Mar.Sci. **44**.
* [10] Hibler,W. 1979, J.Phys.Oceanogr. **9**, 815.
* [11] Cohn,S. 1997, J.Meteorol.Soc.Jpn. **75**, 257.
* [12] Ghil,M. and P.Malanotte-Rizzoli 1991, Adv.Geophys. **33**, 141.
* [13] Malanotte-Rizzoli,P. (Ed.) 1996, _Modern Approaches to Data Assimilation in Ocean Modeling_ (Elsevier, Amsterdam).
* [14] Ghil,M., K.Ide, A.Bennett, P.Courtier, M.Kimoto, M.Nagata, M.Saiki and N.Sato 1997, _Data Assimilation in Meteorology and Oceanography_ (Universal Academy Press, Tokyo).
* [15] Talagrand,O. 1991, in: _Automatic Differentiation of Algorithms_, ed. by A.Griewank and G.Corliess (SIAM, Philadelphia, PA).
* [16] Giering,R. 1996, MPI Examensarbeit **44** (Hamburg).
* [17] Jazwinsky,A. 1970, _Stochastic Processes and Filtering Theory_ (Academic Press, N.Y.).
* [18] van Kampen,N. 1981, _Stochastic Processes in Physics and Chemistry_ (North Holland, N.Y.).
* [19] Belyaev,K., C.Tanajura and J.O'Brien 2001, Appl.Math.Mod. **25**, 655.
* [20] Kalman,R. 1960, Trans.ASME **D 82**, 35.
* [21] Muller,D. 1987, J.Phys.Oceanogr. **17**, 26.
* [22] Belyaev,K., S.Meyers and J.O'Brien 2000, J.Math.Sci. **99**, 1393.
* [23] Planck,M. 1917, Sitzungsber.Preuss.Akad.Wissens., 324.
* [24] Kolmogorov,A. 1931, Mathem.Annalen **104**, 415.
* [25] Peaceman,D. and H.Rachford 1955, J.Soc.Ind.Appl.Math. **3**, 28.
* [26] Kalnay,E. et al. 1996, Bull.Amer.Meteor.Soc. **77**, 437.
* [27] [http://www.pmel.noaa.gov/tao/jsdisplay](http://www.pmel.noaa.gov/tao/jsdisplay)
The feasibility of global ocean state estimation by sequential data assimilation is demonstrated. The model component of the assimilator is the GROB version of the MPIMET ocean circulation model HOPE. Assimilation uses the Fokker-Planck representation of the Kalman Filter. This approach determines the temporal evolution of error statistics by integration of the Fokker-Planck Equation. Phase space advection and diffusion are obtained from histogram techniques, considering the model as a black box. For efficiency, the estimation procedure utilizes a combination of nudging and Kalman Filtering. The ocean state is estimated for the El Nino year 1997 by dynamical extrapolation of observed sea-surface temperatures and TAO/TRITON subsurface temperatures. The model-data combination yields improved estimates of the ocean's mean state and a realistic record of El Nino related variability. The assimilator proves to be an efficient, viable and thus practical approach to operational global ocean state estimation.
# Evolution of Fluctuation in relativistic heavy-ion collisions

Bedangadas Mohanty, Jan-e Alam and Tapan K. Nayak

Variable Energy Cyclotron Centre, Calcutta 700064, India

November 7, 2021

## I Introduction

Heavy-ion collisions at relativistic energies offer a unique environment for the creation and study of the Quark-Gluon Plasma (QGP) phase of nuclear matter. In the collision process the system may go through a phase transition whereby it evolves from a very hot and dense QGP state to normal hadronic matter. A characteristic feature of this process is that the system experiences large event-by-event fluctuations in thermodynamic quantities such as temperature, number density etc. Our ability to observe a characteristic fluctuation in various observables in present day experiments has been facilitated by the large number of particles produced in relativistic heavy-ion collisions at the Super Proton Synchrotron (SPS) and the Relativistic Heavy-Ion Collider (RHIC), along with the advent of large acceptance detectors at these experiments [1]. The near Gaussian distributions of experimental observables like multiplicity, transverse energy [2], transverse momentum and particle ratios [3] have provided an opportunity to relate these quantities to thermodynamical properties of matter, such as specific heat [4] and matter compressibility [5].

Fluctuations as a signature of the quark-hadron phase transition have been studied extensively in experiments, but no definite signature has been observed in this sector so far [2; 3; 6; 7]. The main task in using fluctuations as a probe of the QGP phase transition is to identify an observable whose fluctuation will survive from the time of formation of the plasma till it is detected after the freeze-out. In this context, it was suggested that the study of the fluctuation in conserved quantities, _e.g._ net baryon number, net charge or net strangeness, will be very useful [8; 9; 10; 11]. The fluctuation in any conserved quantity (\(\mathcal{O}\)) is given by [12]

\[(\Delta\mathcal{O})^{2}\equiv\langle\mathcal{O}^{2}\rangle-\langle\mathcal{O}\rangle^{2}=T\,\frac{\partial\langle\mathcal{O}\rangle}{\partial\mu} \tag{1}\]

where \(\mu\) is the associated chemical potential and \(T\) is the temperature of the system under consideration. Here \(\langle\mathcal{O}\rangle\) represents the mean value of the conserved quantity over a large number of events and \(\Delta\mathcal{O}\) denotes the deviation of its value from the mean, event-by-event.
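For a classical (Boltzmann) ideal gas, \(\langle N\rangle\propto\exp(\mu/T)\), so Eq. (1) gives \((\Delta N)^{2}=T\,\partial\langle N\rangle/\partial\mu=\langle N\rangle\), i.e. Poissonian fluctuations. The small sketch below verifies this limit numerically by finite differences; the temperature and normalization are arbitrary illustrative choices.

```python
import numpy as np

# Finite-difference check of Eq. (1) in the Boltzmann limit, where
# <N> ~ exp(mu/T) and the variance should equal the mean (Poisson).
T = 0.150                                # temperature [GeV], illustrative
mu = np.linspace(0.0, 0.4, 401)          # chemical potential grid [GeV]
N_mean = 50.0 * np.exp(mu / T)           # arbitrary normalization

variance = T * np.gradient(N_mean, mu)   # T * d<N>/dmu, cf. Eq. (1)
print("max relative deviation from Poisson:",
      float(np.max(np.abs(variance / N_mean - 1.0))))
```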
It has been shown that the fluctuations in these conserved quantities with and without QGP formation are very different. Hence, the study of fluctuations in these conserved quantities can be a good signature of the transition from the confined to the de-confined state of matter. However, in the earlier calculations [9; 10] an ideal gas EOS was considered for both the QGP and the hadronic gas. For the QGP initial state, the evolution of the fluctuation during the mixed phase had also not been taken into account. Finally, modifications of hadronic spectral functions in the thermal bath were ignored. The aim of this paper is to study the evolution of the fluctuation in conserved quantities from the time of formation to the time of detection. In addition to the QGP and hadronic initial states, we consider an important case where the hadrons are formed at the initial stage but the spectral functions of the hadrons are different from their vacuum properties. We also discuss the sensitivity of the results to different EOS.

Here we concentrate only on the net baryon number fluctuation, and the results are presented in terms of the experimentally measured quantity \((\Delta N_{b})^{2}/N_{b}\), where \(N_{b}\) is the number of baryons and \(\Delta N_{b}\) is the fluctuation in the net baryon number. The paper is organized as follows. In the next section we discuss the possible evolution scenarios. The initial conditions used for solving the evolution equations for the various evolution scenarios are presented in section III. In section IV we study the evolution of the system. Sections V and VI are devoted to the results at the freeze-out for SPS and RHIC energies respectively. Finally, in section VII we present the summary and conclusions.

## II Evolution scenarios

We consider the following three possible evolution scenarios.

1. QGP scenario: The system starts evolving from an initial QGP state formed at a time \(\tau_{i}\) and temperature \(T_{i}\). It then expands, hence cools, and reaches the critical temperature \(T_{c}\) at a time \(\tau_{q}\). In the mixed phase the cooling due to expansion is compensated by the heating of the system due to the liberation of the latent heat in a first order phase transition. Hence the temperature remains constant at \(T_{c}\) (super-cooling is neglected here). After the mixed phase ends at a time \(\tau_{h}\), further expansion takes place and finally the system disassembles at a time \(\tau_{f}\) and temperature \(T_{f}\). At this stage, called the freeze-out stage, the mean free paths of the particles are too large to allow further interactions, and the particles are detected experimentally.

2. Hadron gas scenario: The hot and dense system is formed in the hadronic state at a time \(\tau_{i}\) and temperature \(T_{i}\), and the system expands from the initial to the freeze-out state (at time \(\tau_{f}\) when the temperature is \(T_{f}\)) without a phase transition.

3. Hadron gas with mass variation: The other possibility is that the system may form in the hadronic state as in (2) above, but the spectral functions of the hadrons are different from their vacuum counterparts. Among others, in the present case we will consider the shift of the pole of the hadronic spectral function according to the universal scaling law proposed by Brown and Rho [13]:

\[m_{h}^{*}=m_{h}\left(1-\frac{T^{2}}{T_{c}^{2}}\right)^{\lambda}, \tag{2}\]

where \(m_{h}^{*}\) (\(m_{h}\)) is the in-medium (vacuum) mass of the hadrons (except pseudo-scalars) and the index \(\lambda\) takes a value between 0 and 1. Here we choose \(\lambda=1/6\) according to the well known Brown-Rho scaling [14].

## III Initial conditions

The initial conditions in terms of the initial temperature (\(T_{i}\)) can be set for the three scenarios from the following relation:

\[dS=\frac{2\pi^{4}}{45\zeta(3)}\,dN=4\frac{\pi^{2}}{90}\,g_{eff}\,T_{i}^{3}\,\Delta\,V, \tag{3}\]

where \(dS\) (\(dN\)) is the entropy (number) contained within a volume element \(\Delta\,V=\pi\,R^{2}\,\tau_{i}\,d\eta\), with \(R\) the radius of the colliding nuclei. \(g_{eff}\) is the effective statistical degeneracy, \(\zeta(3)\) denotes the Riemann zeta function and \(\eta\) is the space-time rapidity. For massless bosons (fermions) the ratio of \(dS\) to \(dN\) is given by \(2\pi^{4}/(45\zeta(3))\sim 3.6\) (4.2), which is a crude approximation for heavy particles. For example, the above ratio is 3.6 (7.5) for 140 MeV pions (938 MeV protons) at a temperature of 200 MeV.
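Inverting Eqn. 3 for \(T_{i}\) is straightforward: given \(dN/dy\), \(g_{eff}\) and the initial volume per unit rapidity, the temperature follows from a cube root. A sketch is below; the radius and formation time (\(R\sim 7.1\) fm for Pb, \(\tau_{i}\sim 1\) fm) are assumed round numbers, not values quoted in the text.

```python
import numpy as np

HBARC = 0.197327   # GeV fm, to convert fm^-1 to GeV

# Invert Eqn. 3: dS/dy = (2 pi^4 / 45 zeta(3)) dN/dy and
# dS/dy = 4 (pi^2 / 90) g_eff T_i^3 * (pi R^2 tau_i).
# R and tau_i are assumed values (see lead-in), not from the text.
def initial_temperature(dN_dy, g_eff, R_fm=7.1, tau_i_fm=1.0):
    dS_dy = 2.0 * np.pi**4 / (45.0 * 1.2020569) * dN_dy
    vol = np.pi * R_fm**2 * tau_i_fm                    # fm^3 per unit rapidity
    T3 = dS_dy / (4.0 * np.pi**2 / 90.0 * g_eff * vol)  # T^3 in fm^-3
    return T3 ** (1.0 / 3.0) * HBARC * 1000.0           # in MeV

for g_eff, label in [(37, "QGP"), (15, "hadron gas"), (24, "hadron gas, m*")]:
    print(f"{label:14s} g_eff = {g_eff:2d} : T_i ~ {initial_temperature(700, g_eff):.0f} MeV")
```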
First we consider the situation at SPS energies where \(dN/dy\sim 700\) for Pb+Pb collisions. For the above three scenarios, taking \(g_{eff}\) as given in Table 1, we obtain the initial temperature by using Eqn. 3. The values of the initial temperatures are given in Table 1. The chemical potential at the initial state is fixed by constraining the specific entropy (entropy per baryon) to the value obtained from the analysis of experimental data. The specific entropy at SPS is about 40 for Pb + Pb collisions [15; 16; 17; 18]. The net baryon number can be calculated using the baryon (anti-baryon) number density \(n_{b}\) (\(n_{\bar{b}}\)) given by

\[n_{b}=\frac{g}{(2\pi)^{3}}\int f(\vec{p})d^{3}p, \tag{4}\]

where \(g\) is the baryonic degeneracy. We take \(g=4\) for proton and neutron. \(f(\vec{p})\) is the well known Fermi-Dirac distribution,

\[f(\vec{p})=\Big{[}exp\Big{(}(E\pm\mu)/T\Big{)}+1\Big{]}^{-1} \tag{5}\]

where \(\mu\) (\(-\mu\)) is the chemical potential for baryons (anti-baryons), \(E=\sqrt{p^{2}+m^{2}}\) and \(n_{\bar{b}}\equiv n_{b}(\mu\to-\mu)\). For the QGP scenario we take the mass of the quarks as \(m_{q}^{2}=m_{qc}^{2}+m_{q,th}^{2}\), where \(m_{q,th}\) is the thermal mass [19] and \(m_{qc}\) is the current quark mass. We have taken the vacuum mass and the effective mass for the hadrons (given by Eqn. 2) for cases 2 and 3 respectively. We obtain the value of the chemical potential to be 132 MeV, 340 MeV and 105 MeV for cases (1), (2) and (3) respectively. These are also summarized in Table 1.

Having fixed the initial temperature and chemical potential, we now calculate the initial fluctuations for the three scenarios.

_1. Quark Gluon Plasma:_ From the initial net baryon number, one can easily calculate the net baryon number fluctuations using Eqn. 1. The fluctuation in the net baryon number in the QGP phase at time \(\tau_{i}\), temperature \(T_{i}\) and chemical potential \(\mu_{i}\) can be shown to be [10]:

\[(\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}=\frac{2V}{9}T_{i}^{3}\left(1+\frac{1}{3}\Big{(}\frac{\mu_{i}}{\pi T_{i}}\Big{)}^{2}\right)\,, \tag{6}\]

With the initial conditions as discussed above, the initial fluctuation for the QGP scenario turns out to be \((\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}=35\). It may be noted that the value of \(g\) in Eqn. 4 is taken to be 12 for the two flavor case. The entropy density in the QGP phase is calculated from Eqn. 3 with \(g_{eff}=37\). The total entropy of the system is given by:

\[S=V\frac{4\pi^{2}g_{eff}}{90}T_{i}^{3}\sim 2500. \tag{7}\]

For fixed initial entropy of the system, the initial temperature is different for the three cases because of the different values of \(g_{eff}\). In the present work we solve the evolution equation for an ideal fluid, neglecting entropy generation due to various viscous effects. The total entropy is kept constant for all three scenarios as it is obtained from the number of particles per unit rapidity measured experimentally. So the fluctuation in net baryon number per entropy is \((\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}/S=0.014\), a value similar to that obtained in [10]. In this case the fluctuation per unit baryon is \((\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}/N_{b,y}=0.56\), where \(N_{b,y}\) is the net baryon number per unit rapidity, \(dN_{b}/dy\sim 62\) for SPS energies.
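The momentum integrals in Eqns. 4 and 5 are easily done by quadrature; the sketch below evaluates the net nucleon density \(n_{b}-n_{\bar{b}}\) at the hadron-gas initial conditions of Table 1 (natural units, with the conversion to physical densities omitted for brevity).

```python
import numpy as np
from scipy.integrate import quad

# Net baryon density from Eqns. 4-5 for nucleons (g = 4, m = 0.938 GeV):
# n_b - n_bbar = g/(2 pi^2) * Int p^2 dp [f(E - mu) - f(E + mu)].
def net_density(T, mu, m=0.938, g=4):
    def fd(p, sgn):
        E = np.sqrt(p * p + m * m)
        return p * p / (np.exp((E - sgn * mu) / T) + 1.0)
    n = lambda sgn: g / (2.0 * np.pi**2) * quad(fd, 0.0, 10.0, args=(sgn,))[0]
    return n(+1) - n(-1)

T_i, mu_i = 0.264, 0.340    # hadron-gas initial state, Table 1 (in GeV)
print(f"n_b - n_bbar ~ {net_density(T_i, mu_i):.4e} GeV^3")
```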
_2. Hadron Gas:_ The net baryon number fluctuation in the hadronic gas can be calculated using Eqn. 1 and is given by:

\[(\Delta N_{b})^{2}_{\rm HG}=\frac{gV}{4}\Big{[}\frac{2mT_{i}}{\pi}\Big{]}^{3/2}\,exp(-m/T_{i})\cosh(\mu/T_{i})\,. \tag{8}\]

Substituting the values of \(T_{i}\), \(\mu_{i}\) and \(m=938\) MeV for the nucleons, we obtain the fluctuation in net baryon number \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG}\sim 72\). The ratio of the fluctuation to the total entropy is \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG}/S=0.029\). This value is also similar to the one obtained in [10]. In this case the value of \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG}/N_{b,y}\) is 1.16.

_3. Hadron gas with mass variation in medium:_ The initial fluctuation for this case can be obtained using Eqns. 4 and 5 together with Eqn. 2 for the value of the mass. The initial conditions of \(T_{i}\) = 226 MeV and \(\mu_{i}\) = 105 MeV are constrained to reproduce the hadronic multiplicity and the entropy per baryon. In this case we get \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG,m^{*}}=153\) and \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG,m^{*}}/S=0.061\). The value of \((\Delta N_{b}(\tau_{i}))^{2}_{\rm HG,m^{*}}/N_{b,y}\) is 2.46.

We observe that for the baryon number fluctuations corresponding to the above three scenarios:

\[\frac{(\Delta N_{b}(\tau_{i}))^{2}_{\rm HG}}{(\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}}\sim 2\quad{\rm and}\quad\frac{(\Delta N_{b}(\tau_{i}))^{2}_{\rm HG,m^{*}}}{(\Delta N_{b}(\tau_{i}))^{2}_{\rm QGP}}\sim 4. \tag{9}\]

This clearly shows that in the initial stage there is a clear distinction between the three cases. The initial values of the fluctuations are summarized in Table 1.

| Initial values / Scenario | 1 | 2 | 3 |
| --- | --- | --- | --- |
| \(g_{eff}\) | 37 | 15 | 24 |
| \(T_{i}\) (MeV) | 196 | 264 | 226 |
| \(\mu_{i}\) (MeV) | 132 | 340 | 105 |
| \((\Delta N_{b}(\tau_{i}))^{2}/S\) | 0.014 | 0.029 | 0.061 |
| \((\Delta N_{b}(\tau_{i}))^{2}/N_{b,y}\) | 0.56 | 1.16 | 2.46 |

Table 1: Initial conditions and the initial values of fluctuation for the three scenarios.

It will be of interest now to see if these fluctuations (and the differences given by Eqn. 9) survive till the freeze-out; a quick numerical check of the first ratio in Eqn. 9 is sketched below.
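This check evaluates Eqns. 6 and 8 per unit volume at the respective initial conditions of Table 1; the common volume cancels in the ratio (natural units, nucleon mass 0.938 GeV).

```python
import numpy as np

# First ratio of Eqn. 9 from Eqns. 6 and 8 evaluated per unit volume
# (the volume V cancels). Inputs are the Table 1 initial conditions in GeV.
def qgp_fluct(T, mu):                              # Eqn. 6 divided by V
    return (2.0 / 9.0) * T**3 * (1.0 + (mu / (np.pi * T))**2 / 3.0)

def hg_fluct(T, mu, m=0.938, g=4):                 # Eqn. 8 divided by V
    return g / 4.0 * (2.0 * m * T / np.pi)**1.5 * np.exp(-m / T) * np.cosh(mu / T)

ratio = hg_fluct(0.264, 0.340) / qgp_fluct(0.196, 0.132)
print(f"HG / QGP initial fluctuation ratio ~ {ratio:.2f}")   # ~ 2, cf. Eqn. 9
```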
where \\(\\bar{v}\\) is the average thermal velocity of the particles under consideration. The solution to the differential equation 10 from initial time \\(\\tau_{i}\\) to final time \\(\\tau_{f}\\) is then given as: \\[\\Delta N_{b}(\\tau_{f})=\\Delta N_{b}(\\tau_{i})\\exp\\left(-\\frac{1}{2\\Delta\\eta} \\int_{\\tau_{i}}^{\\tau_{f}}\\frac{d\\tau}{\\tau}\\bar{v}(\\tau)\\right)\\,. \\tag{11}\\] The average thermal velocity at a given temperature can be calculated by using the following equation, \\[\\bar{v}=\\frac{\\int\\frac{p}{E}d^{3}p\\,f(\\vec{p})}{\\int\\!d^{3}pf(\\vec{p})} \\tag{12}\\] The values of \\(\\bar{v}\\) as a function of temperature for the three cases are shown in Fig. 1. The curves are fitted by the polynomials of the form \\(a_{0}+a_{1}T+a_{2}T^{2}+a_{3}T^{3}+a_{4}T^{4}+a_{5}T^{5}+a_{6}T^{6}\\). The values of the parameters are shown in the caption of Fig. 1. The average velocity in QGP is found to be substantially larger than the velocity in hadronic gas. However, in case of the hadronic system where the effective masses of the baryons (neutron and proton here) approach zero at \\(T_{c}\\), their velocities approach the velocity of light. The equation of state plays a vital role in deciding how the fluctuations in the net baryon number will evolve. The variation of energy density (\\(\\epsilon\\)) as a function of temperature is obtained from the 3-flavor lattice QCD results [20] which is parametrized as follows: \\[\\epsilon=T^{4}A~{}tanh(B(\\frac{T}{T_{c}})^{C}), \\tag{13}\\] where the values of the parameters, \\(A,B\\) and \\(C\\) are 12.44, 0.517 and 10.04 respectively. Note that the effect of net baryons in the EOS is neglected here. Fig. 2 shows the variation of the energy density with temperature. The increase in the effective degeneracy near \\(T_{c}\\) can be obtained from the hadronic phase with effective masses varying with temperature as in Eqn. 2. The increase in the effective degeneracy originates from the heavier hadrons going to a massless situation ( see also ref.[21; 22]). The evolution of the system under the boost invariance along the longitudinal direction is governed by the equation [23], \\[\\frac{d\\epsilon}{d\\tau}+\\frac{\\epsilon+P}{\\tau}=0;~{}~{}P=c_{s}^{2}\\epsilon \\tag{14}\\] where \\(c_{s}\\) is the velocity of the sound in the medium. To perform the integration in Eqn. 11 we need \\(d\\tau/\\tau\\), which can be obtained as \\[\\frac{d\\tau}{\\tau}=-\\Big{[}\\frac{\\beta}{T}+\\frac{\\alpha}{\\chi}\\frac{d\\chi}{dT }\\Big{]}dT=-f(T)dT \\tag{15}\\] Figure 2: The variation of energy density as a function of temperature. The result shown by solid line is obtained from Eqn. 13. Open circles show the lattice results with out the error bars. Figure 1: Variation of average thermal velocity with temperature for three different scenarios. The results are fitted by nth-order polynomials. (1) For QGP scenario the fitting parameters are \\(a_{0}~{}=~{}0.93\\), \\(a_{1}~{}=~{}0.11\\), \\(a_{2}~{}=~{}-0.56\\), \\(a_{3}~{}=~{}1.32\\) and \\(a_{4}~{}=~{}1.20\\). (2) For hadronic gas scenario with fit parameters as \\(a_{0}~{}=~{}0.19\\), \\(a_{1}~{}=~{}4.0\\), \\(a_{2}~{}=~{}-14.4\\), \\(a_{3}~{}=~{}32.2\\) and \\(a_{4}~{}=~{}-31.2\\). (3) For hadronic gas with mass variation in medium with fit parameters as \\(a_{1}~{}=-4889.8\\), \\(a_{2}~{}=~{}76879\\), \\(a_{3}~{}=~{}-0.640E+06\\), \\(a_{4}~{}=~{}0.297E+07\\), \\(a_{5}~{}=~{}-0.73E+07\\) and \\(a_{6}~{}=~{}0.74E+07\\). 
The expression for the second term within the square bracket of Eqn. 15 is given by \\[\\frac{1}{\\chi}\\frac{d\\chi}{dT}=\\alpha\\frac{BC}{T}\\Big{(}\\frac{T}{T_{c}}\\Big{)}^{C}\\frac{1}{\\cosh(B(\\frac{T}{T_{c}})^{C})\\,\\sinh(B(\\frac{T}{T_{c}})^{C})}. \\tag{17}\\] Using the relation \\(c_{s}^{2}=\\frac{S}{T\\,dS/dT}=\\Big{[}3+\\frac{T}{g_{eff}}\\frac{dg_{eff}}{dT}\\Big{]}^{-1}\\) we get the velocity of sound corresponding to the lattice QCD calculations, \\[c_{s}^{2}=\\left[3+BC\\Big{(}\\frac{T}{T_{c}}\\Big{)}^{C}\\frac{1}{\\cosh(B(\\frac{T}{T_{c}})^{C})\\,\\sinh(B(\\frac{T}{T_{c}})^{C})}\\right]^{-1}. \\tag{18}\\] The fluctuation at the freeze-out point can then be written as \\[\\Delta N_{b}(\\tau_{f})=\\Delta N_{b}(\\tau_{i})\\exp\\left(-\\frac{1}{2\\Delta\\eta}\\int_{T_{f}}^{T_{i}}f(T)\\,\\bar{v}(T)\\,dT\\right)\\,. \\tag{19}\\] In order to study the effect of the equation of state, we present results for three different values of \\(c_{s}\\): (a) \\(c_{s}^{2}=1/3\\), corresponding to the ideal gas case; (b) \\(c_{s}^{2}=0.18\\), corresponding to the EOS of a hadronic gas in which particles of mass up to 2.5 GeV from the particle data book have been taken into account; and (c) \\(c_{s}^{2}\\) obtained from Eqn. 18. It may be mentioned that for the ideal gas the rate of cooling is faster than in the other two cases, (b) and (c). We now calculate the total dissipation of the fluctuation at the freeze-out temperature. We take the freeze-out temperature to be 120 MeV [18; 24] and the critical temperature for the QGP transition to be 170 MeV [20]. In all three cases below we take \\(\\Delta\\eta=1\\). _1. Quark Gluon Plasma:_ For the QGP initial state, the dissipation equation has three parts. In the first part, we calculate the dissipation from the initial temperature \\(T_{i}\\) to \\(T_{c}\\) (the QGP phase); the second part is the dissipation during the mixed phase; and the final part is the dissipation from \\(T_{c}\\) (the end of the mixed phase) to the freeze-out temperature \\(T_{f}\\). Hence the complete dissipation equation becomes \\[\\Delta N_{b}(T_{f})=\\Delta N_{b}(T_{i})\\exp\\left(-\\frac{1}{2\\Delta\\eta}\\left(\\int_{T_{c}}^{T_{i}}\\bar{v}_{qgp}(T)f(T)dT+\\bar{v}_{mix}(T_{c})\\,\\ln(r)+\\int_{T_{f}}^{T_{c}}\\bar{v}_{h}(T)f(T)dT\\right)\\right) \\tag{20}\\] where \\(r\\) is the ratio \\((g_{eff})_{qgp}/(g_{eff})_{h}\\sim 2.5\\) and \\(\\bar{v}_{mix}\\sim\\bar{v}_{qgp}(T_{c})\\). The fluctuation in the net baryon number is evaluated for: (a) \\(c_{s}^{2}=1/3\\), for which we get \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\sim 6\\); (b) \\(c_{s}^{2}=0.18\\), giving \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\sim 2.23\\); and (c) \\(c_{s}^{2}\\) as given by Eqn. 18, resulting in \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\sim 0.54\\). The initial and final fluctuation values, along with the evolution between them, are shown for the QGP scenario in Fig. 3. Only the results for \\(c_{s}^{2}=1/3\\) and for \\(c_{s}^{2}\\) as given by Eqn. 18 are shown in the figures, for clarity of presentation; we have checked that the results for \\(c_{s}^{2}=0.18\\) lie between these two cases. A numerical sketch of the ideal-gas case of Eqn. 20 is given below.
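This sketch is our own construction. For the ideal-gas EOS, \\(g_{eff}\\) is constant, so \\(f(T)=3/T\\); the mean velocities use the polynomial fits quoted in the caption of Fig. 1 (with \\(T\\) in GeV, our reading of the fits), and the initial variance \\((\\Delta N_{b}(\\tau_{i}))^{2}_{\\rm QGP}\\approx 36\\) is inferred from Eqn. 9 and the hadron-gas value of 72, not quoted directly:

```python
# Sketch (our construction): the ideal-gas (c_s^2 = 1/3) case of Eqn. 20,
# where f(T) = 3/T.  The v-bar polynomials are the Fig. 1 fits (T in GeV);
# the initial variance ~36 is inferred from Eqn. 9.
import numpy as np
from scipy.integrate import quad

v_qgp = lambda T: 0.93 + 0.11*T - 0.56*T**2 + 1.32*T**3 + 1.20*T**4
v_h   = lambda T: 0.19 + 4.0*T - 14.4*T**2 + 32.2*T**3 - 31.2*T**4

Ti, Tc, Tf, d_eta, r = 0.196, 0.170, 0.120, 1.0, 2.5
expo = (quad(lambda T: v_qgp(T) * 3.0 / T, Tc, Ti)[0]
        + v_qgp(Tc) * np.log(r)
        + quad(lambda T: v_h(T) * 3.0 / T, Tf, Tc)[0]) / (2.0 * d_eta)
print(f"suppression of Delta N_b : {np.exp(-expo):.2f}")           # ~0.4
print(f"(Delta N_b(tau_f))^2_QGP : {36.0*np.exp(-2.0*expo):.1f}")  # ~6
```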
_2. Hadron Gas :_ In the case of a hadronic initial state we have \\[\\Delta N_{b}(T_{f})=\\Delta N_{b}(T_{i})\\exp\\left(-\\frac{1}{2\\Delta\\eta}\\int_{T_{f}}^{T_{i}}\\bar{v}_{h}(T)f(T)dT\\right)\\,. \\tag{21}\\] (a) For \\(c_{s}^{2}=1/3\\), the contribution of the exponential is \\(\\sim 0.5\\), and hence the fluctuation in the net baryon number is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=18\\). (b) For \\(c_{s}^{2}=0.18\\), we get \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=14.6\\). (c) If we take \\(c_{s}^{2}\\) from Eqn. 18, the contribution of the exponential term is \\(\\sim 0.06\\), a factor of 8 smaller than the value for case (a); consequently \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=0.22\\). The evolution of the fluctuation for this case is depicted in Fig. 3. _3. Hadron gas with mass variation in medium:_ For the hadronic initial state with mass variation, \\[\\Delta N_{b}(T_{f})=\\Delta N_{b}(T_{i})\\exp\\left(-\\frac{1}{2\\Delta\\eta}\\int_{T_{f}}^{T_{i}}\\bar{v}_{h}^{*}(T)f(T)dT\\right)\\,. \\tag{22}\\] (a) For \\(c_{s}^{2}=1/3\\), the contribution of the exponential is \\(\\sim 0.53\\), and hence \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG,m^{*}}\\sim 44\\). (b) For \\(c_{s}^{2}=0.18\\), we get \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG,m^{*}}\\sim 37\\). (c) Taking \\(c_{s}^{2}\\) from Eqn. 18, the contribution of the exponential term is \\(\\sim 0.08\\), a factor of 7 lower than in case (a), so \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG,m^{*}}=0.92\\). The evolution of the fluctuation for this case is shown in Fig. 3. ### Generation of fluctuation with time The baryon fluxes exchanged with neighboring sub-volumes can lead to the generation of fluctuations; this is the main component that is to be detected experimentally at the freeze-out point. The total number of baryons leaving or entering (\\(N_{b}^{ex}\\)) the sub-volume (\\(A\\tau\\Delta\\eta\\)) between times \\(\\tau_{i}\\) and \\(\\tau_{f}\\) is given by [10] \\[N_{b}^{ex}(T_{f})=\\frac{N_{b}(T_{i})}{2\\Delta\\eta}\\left(\\int_{T_{f}}^{T_{i}}\\bar{v}(T)f(T)dT\\right)\\,. \\tag{23}\\] _1. Quark Gluon Plasma:_ As in the previous case, the fluctuations have to be evaluated in three parts in a QGP formation scenario: first the generation from the initial temperature \\(T_{i}\\) to \\(T_{c}\\) (the QGP phase), then the generation during the mixed phase, and finally the generation from \\(T_{c}\\) (the end of the mixed phase) to the freeze-out temperature \\(T_{f}\\). Hence the complete evolution equation becomes \\[(\\Delta N_{b}(T_{f}))^{2}=N_{b}(T_{i})\\left(\\frac{1}{2\\Delta\\eta}\\left(\\int_{T_{c}}^{T_{i}}\\bar{v}_{qgp}(T)f(T)dT+\\bar{v}_{mix}(T_{c})\\,\\ln(r)+\\int_{T_{f}}^{T_{c}}\\bar{v}_{h}(T)f(T)dT\\right)\\right)\\,. \\tag{24}\\] The fluctuation in the net baryon number is evaluated for: (a) \\(c_{s}^{2}=1/3\\), for which we get \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}=56.5\\); (b) \\(c_{s}^{2}=0.18\\), resulting in \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}=60.4\\); and (c) \\(c_{s}^{2}\\) from Eqn. 18, giving \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}=129\\). The initial and final values of the fluctuations, along with their time evolution for the QGP scenario, are shown in Fig. 4. Figure 3: The dissipation of the net baryon number fluctuation as a function of temperature for (1) the QGP scenario, (2) the hadronic gas scenario and (3) the hadronic gas with mass variation scenario. The dashed lines correspond to results obtained for \\(c_{s}^{2}=1/3\\), while the solid lines show the results obtained using the value of \\(c_{s}^{2}\\) given in Eqn. 18. Note that temperature is plotted in decreasing order to reflect the evolution in time. Figure 4: The generation of the net baryon number fluctuation for the different scenarios as a function of temperature. Notations are the same as in Fig. 3. 
_2. Hadron Gas :_ In the case of a hadronic initial state we have \\[(\\Delta N_{b}(T_{f}))^{2}=N_{b}(T_{i})\\left(\\frac{1}{2\\Delta\\eta}\\int_{T_{f}}^{T_{i}}\\bar{v}_{h}(T)f(T)dT\\right)\\,. \\tag{25}\\] (a) For \\(c_{s}^{2}=1/3\\), the fluctuation in the net baryon number is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=44\\). (b) For \\(c_{s}^{2}=0.18\\), we get \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=50\\). (c) If we take \\(c_{s}^{2}\\) from Eqn. 18, the fluctuation in the net baryon number is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}=179\\). The evolution of the fluctuation for this case is displayed in Fig. 4. _3. Hadron gas with mass variation in medium:_ For the hadronic initial state with mass variation, \\[(\\Delta N_{b}(T_{f}))^{2}=N_{b}(T_{i})\\left(\\frac{1}{2\\Delta\\eta}\\int_{T_{f}}^{T_{i}}\\bar{v}_{h}^{*}(T)f(T)dT\\right)\\,. \\tag{26}\\] (a) For \\(c_{s}^{2}=1/3\\), the fluctuation in the net baryon number is \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG,m^{*}}\\sim 38.9\\). (b) For \\(c_{s}^{2}=0.18\\), we get \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG,m^{*}}\\sim 44\\). (c) Taking \\(c_{s}^{2}\\) from Eqn. 18, the fluctuation in the net baryon number is \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG,m^{*}}=158\\). The evolution of the fluctuation for this case is shown in Fig. 4. ## V Fluctuations at the freeze-out The net baryon fluctuation at freeze-out (\\(T_{f}\\) = 120 MeV) is a combination of the dissipation and generation effects presented in the previous section: the resultant fluctuation is the sum of the variances \\((\\Delta N_{b}(T_{f}))^{2}\\) obtained for the two processes. We present results in terms of the net baryon fluctuation per unit baryon, \\((\\Delta N_{b}(T_{f}))^{2}/N_{b,y}\\). For Poissonian noise this value should be close to unity; deviation from it indicates the presence of dynamical fluctuations. _1. Quark Gluon Plasma:_ The final fluctuation in the net baryon number per unit baryon in the QGP scenario at the freeze-out point for the three EOS mentioned above is: (a) for \\(c_{s}^{2}=1/3\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm QGP}/N_{b,y}\\sim 1.0\\); (b) for \\(c_{s}^{2}=0.18\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm QGP}/N_{b,y}\\sim 1.0\\); and (c) taking \\(c_{s}^{2}\\) from Eqn. 18, \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm QGP}/N_{b,y}\\sim 2.0\\). _2. Hadron Gas :_ The net baryon number fluctuation per unit baryon in the hadronic gas at freeze-out for the different values of \\(c_{s}\\) is: (a) for \\(c_{s}^{2}=1/3\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG}/N_{b,y}\\sim 1.0\\); (b) for \\(c_{s}^{2}=0.18\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG}/N_{b,y}\\sim 1.0\\); (c) taking \\(c_{s}^{2}\\) from Eqn. 18, \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG}/N_{b,y}\\sim 2.8\\).
_3. Hadron gas with mass variation in medium:_ The net baryon number fluctuation per unit baryon for the hadronic initial state with mass variation at freeze-out for the three EOS is: (a) for \\(c_{s}^{2}=1/3\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG,m^{*}}/N_{b,y}\\sim 1.3\\); (b) for \\(c_{s}^{2}=0.18\\), \\((\\Delta N_{b}(T_{f}))^{2}_{\\rm HG,m^{*}}/N_{b,y}\\sim 1.3\\); and (c) taking \\(c_{s}^{2}\\) from Eqn. 18, \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG,m^{*}}/N_{b,y}\\sim 2.5\\). The net baryon number fluctuations per unit baryon are summarized in Table 2. For all scenarios the value of \\((\\Delta N_{b}(\\tau_{f}))^{2}/N_{b,y}\\) is larger than the Poissonian noise when the EOS is taken from lattice QCD, indicating the presence of dynamical fluctuations. It should be mentioned, however, that the errors in the lattice QCD calculations are large for temperatures below the critical temperature. Furthermore, \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG}/(\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\sim 1.4\\) and \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm HG,m^{*}}/(\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\sim 1.25\\). This means that, although scenario (1) may be distinguishable from scenarios (2) and (3), it is very difficult to differentiate (2) and (3) from fluctuation measurements. For the EOS with \\(c_{s}^{2}=1/3\\) and 0.18, on the other hand, the values are close to the Poisson value of 1, indicating that for these EOS it is difficult to distinguish among the three scenarios. The freeze-out totals in Table 2 follow directly from the dissipation and generation values quoted above, as the sketch below illustrates. \\begin{table} \\begin{tabular}{|c l c c c|} \\hline & EOS/Scenarios & 1 & 2 & 3 \\\\ \\hline & \\(c_{s}^{2}=1/3\\) & 1.0 & 1.0 & 1.3 \\\\ SPS & \\(c_{s}^{2}=0.18\\) & 1.0 & 1.0 & 1.3 \\\\ & Lattice & 2.0 & 2.8 & 2.5 \\\\ \\hline & \\(c_{s}^{2}=1/3\\) & 1.46 & \\(-\\) & \\(-\\) \\\\ RHIC & \\(c_{s}^{2}=0.18\\) & 1.96 & \\(-\\) & \\(-\\) \\\\ & Lattice & 3.17 & \\(-\\) & \\(-\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: The net baryon number fluctuation per unit baryon, \\((\\Delta N_{b}(\\tau_{f}))^{2}/N_{b,y}\\), at freeze-out for the three scenarios.
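This sketch is our own construction; \\(N_{b,y}\\approx 62\\) is inferred from the ratios in Table 1 (e.g. \\(72/1.16\\)) rather than quoted directly in this section.

```python
# Sketch (our construction): combine the dissipated and generated variances
# quoted in the text into the freeze-out totals of Table 2.  N_b,y ~ 62 is
# inferred from the Table 1 ratios; it is not stated explicitly here.
N_by = 62.0
cases = {                         # (dissipated, generated) variances at T_f
    "QGP,    ideal":   (6.0,  56.5),
    "HG,     ideal":   (18.0, 44.0),
    "HG m*,  ideal":   (44.0, 38.9),
    "QGP,    lattice": (0.54, 129.0),
    "HG,     lattice": (0.22, 179.0),
    "HG m*,  lattice": (0.92, 158.0),
}
for name, (dis, gen) in cases.items():
    print(f"{name}:  (dN_b)^2/N_b,y = {(dis + gen) / N_by:.2f}")
# Reproduces the SPS rows of Table 2: 1.0, 1.0, 1.3 and 2.0, 2.8, 2.5.
```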
## VI Results at RHIC energies

The net baryon number fluctuation has also been evaluated for RHIC Au+Au collisions at \\(\\sqrt{s}=200\\)A GeV. We take the total (charged and neutral) \\(dN/dy\\sim 1100\\) [25]. The initial temperatures obtained from this multiplicity are quite large: 370 MeV for the hadronic gas and 290 MeV for the hadronic gas with mass variation. As these temperatures are well above the critical temperature predicted by lattice QCD, we consider only the QGP scenario at RHIC energies. We take the initial time to be 0.6 fm/c [26] and the specific entropy to be 150 for Au+Au collisions. For the QGP we take \\(g_{eff}~{}=~{}47.5\\), which gives \\(T_{i}~{}\\sim\\) 251 MeV, and the constraint on the specific entropy gives an initial chemical potential of 73 MeV. With these initial conditions for the QGP scenario at RHIC, the initial fluctuation turns out to be \\((\\Delta N_{b}(\\tau_{i}))^{2}_{\\rm QGP}=42\\). The entropy in the QGP phase, calculated from Eqn. 7 with \\(g_{eff}=47.5\\), is \\(\\sim\\) 3900, so the fluctuation in the net baryon number per unit entropy is \\((\\Delta N_{b}(\\tau_{i}))^{2}_{\\rm QGP}/S=0.011\\) and \\((\\Delta N_{b}(\\tau_{i}))^{2}_{\\rm QGP}/N_{b,y}=1.6\\). The value of \\(dN_{b}/dy\\sim 26\\) at RHIC energies is a factor of about 2.4 lower than at SPS energies. As before, the values of \\(T_{c}\\) and \\(T_{f}\\) are 170 and 120 MeV, respectively. (a) For \\(c_{s}^{2}=1/3\\), the fluctuation in the net baryon number due to dissipation at freeze-out, \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}\\), turns out to be 2.7, whereas the generation mechanism gives \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}~{}\\sim~{}35.5\\); the resultant fluctuation in the net baryon number per unit baryon is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}/N_{b,y}~{}\\sim~{}1.46\\). (b) For \\(c_{s}^{2}=0.18\\), the dissipated fluctuation is 2.3 and the generation mechanism gives \\(\\sim\\) 48.7, so the resultant fluctuation per unit baryon is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}/N_{b,y}~{}\\sim~{}1.96\\). (c) If we take \\(c_{s}^{2}\\) as given by Eqn. 18, the dissipated fluctuation is 0.25 and the generation mechanism gives \\(\\sim~{}82.25\\), so the resultant fluctuation per unit baryon is \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}/N_{b,y}~{}\\sim~{}3.17\\). The evolution of the dissipation and the generation of the fluctuation for the QGP scenario at RHIC energies is shown in Figs. 5 and 6. Note that although the absolute value of the fluctuations at RHIC is smaller than at SPS, the fluctuation per baryon at RHIC is larger at the freeze-out point. Figure 5: The dissipation of the net baryon number fluctuation for the QGP scenario as a function of temperature, calculated for Au+Au collisions at RHIC energies. The dashed lines correspond to results obtained for \\(c_{s}^{2}=1/3\\), while the solid lines correspond to results obtained using the value of \\(c_{s}^{2}\\) given in Eqn. 18. Figure 6: The generation of the net baryon number fluctuation for the QGP scenario as a function of temperature, calculated at RHIC energies of 200A GeV Au+Au collisions. Notations are the same as in Fig. 5.

## VII Summary

We have discussed the evolution of the fluctuation in the net baryon number from the initial state to the final freeze-out state for three different scenarios, _viz._, (1) formation of a QGP, (2) a hadronic gas and (3) a hadronic gas with modified masses in the medium. In the case of QGP formation we have assumed a first-order phase transition. We find that the fluctuations at the initial stage agree with previously obtained values [10], where there are clear distinctions among the three cases. The fluctuations at the freeze-out point depend crucially on the equation of state (the value of \\(c_{s}\\)): fluctuations with the ideal EOS are seen to dissipate at a slower rate than with the EOS from the lattice calculation. At SPS energies the values of the variance depend crucially on the EOS. For the EOS from the lattice QCD parametrization the fluctuation at freeze-out is larger than the Poissonian noise; however, for the ideal gas EOS and the EOS with \\(c_{s}^{2}=0.18\\), we do not observe fluctuations of dynamical origin. At RHIC energies the values of the fluctuations are larger than the Poissonian noise for all three EOS under consideration here, indicating fluctuations of dynamical origin. The effects of the finite acceptance on the fluctuation enter our calculations through \\(\\Delta\\eta\\), and according to Eqns. 11 and 23 they are the same for all three scenarios (1), (2) and (3) discussed above. For general discussions of the effects of acceptance on fluctuations we refer to Refs. [9; 27]. The dependence of the fluctuation on the centrality of the collision (impact parameter) cancels to a large extent in the ratio \\((\\Delta N_{b}(\\tau_{f}))^{2}_{\\rm QGP}/N_{b,y}\\). 
It is shown in [2] that it is possible to control the impact parameter dependence of the fluctuation by measuring \\(E_{T}\\) and analyzing the data in narrow bins of \\(E_{T}\\). A full (3+1)-dimensional expansion will lead to faster cooling, and it will therefore be interesting to examine the survivability of the fluctuations in such a scenario [28].

###### Acknowledgements.

One of us (B.M.) is grateful to the Board of Research on Nuclear Science and the Department of Atomic Energy, Government of India, for financial support. We would like to thank M. Asakawa for useful comments. We are thankful to the referee for his useful comments on the present manuscript.

## References

* (1) Proceedings of Quark Matter 2001, edited by T. J. Hallman, D. E. Kharzeev, J. T. Mitchell and T. Ullrich [Nucl. Phys. A 698 (2002)].
* (2) M. M. Aggarwal et al. (WA98 Collaboration), Phys. Rev. **C65**, 054912 (2002).
* (3) H. Appelshauser et al. (NA49 Collaboration), Phys. Lett. **B459**, 679 (1999).
* (4) L. Stodolsky, Phys. Rev. Lett. **75**, 1044 (1995).
* (5) S. Mrowczynski, Phys. Lett. **B430**, 9 (1998).
* (6) K. Adcox et al., Phys. Rev. Lett. **89**, 082301 (2002).
* (7) H. Heiselberg, Phys. Rep. **351**, 161 (2001).
* (8) E. V. Shuryak and M. A. Stephanov, Phys. Rev. **C 63**, 064903 (2001).
* (9) S. Jeon and V. Koch, Phys. Rev. Lett. **85**, 2076 (2000).
* (10) M. Asakawa, U. Heinz and B. Muller, Phys. Rev. Lett. **85**, 2072 (2000).
* (11) M. Prakash, R. Rapp, J. Wambach and I. Zahed, Phys. Rev. **C65**, 034906 (2002).
* (12) L. D. Landau and E. M. Lifshitz, Statistical Physics, Part I (Pergamon, 1980).
* (13) G. E. Brown and M. Rho, Phys. Rev. Lett. **66**, 2720 (1991).
* (14) G. E. Brown and M. Rho, Phys. Rep. **269**, 333 (1996).
* (15) P. Braun-Munzinger, I. Heppe and J. Stachel, Phys. Lett. **B 365**, 1 (1996).
* (16) J. Cleymans and K. Redlich, Phys. Rev. **C60**, 054908 (1999).
* (17) G. Roland et al., Nucl. Phys. **A 638**, 91c (1998).
* (18) H. Appelshauser et al. (NA49 Collaboration), Eur. Phys. J. **C2**, 661 (1998).
* (19) M. Le Bellac, "Thermal Field Theory" (Cambridge Univ. Press, Cambridge, UK, 1996).
* (20) F. Karsch, hep-ph/0103314.
* (21) V. Koch and G. E. Brown, Nucl. Phys. **A 560**, 345 (1993).
* (22) G. E. Brown, H. A. Bethe, A. D. Jackson and P. M. Pizzochero, Nucl. Phys. **A 560**, 1035 (1993).
* (23) J. D. Bjorken, Phys. Rev. **D27**, 140 (1983).
* (24) M. M. Aggarwal et al. (WA98 Collaboration), Phys. Rev. Lett. **83**, 926 (1999).
* (25) B. B. Back et al. (PHOBOS Collaboration), Phys. Rev. **C65**, 061901 (2002).
* (26) A. Dumitru and D. H. Rischke, Phys. Rev. **C 59**, 354 (1999).
* (27) D. P. Mahapatra, B. Mohanty and S. C. Phatak, Int. J. Mod. Phys. **A 17**, 675 (2002).
* (28) B. Mohanty et al., to be published.
We have studied the time evolution of the fluctuations in the net baryon number for different initial conditions and space-time evolution scenarios. We observe that the fluctuations at freeze-out depend crucially on the equation of state (EOS) of the system, and that for a realistic EOS the initial fluctuation is substantially dissipated by the freeze-out stage. At SPS energies the fluctuations in the net baryon number at the freeze-out stage for the quark gluon plasma and hadronic initial states are close to the Poissonian noise for the ideal EOS as well as for the EOS obtained by including heavier hadronic degrees of freedom; for the EOS obtained from the parametrization of lattice QCD results, the fluctuation is larger than the Poissonian noise. It is also observed that at RHIC energies the fluctuations at the freeze-out point deviate from the Poissonian noise for the ideal as well as the realistic equation of state, indicating the presence of dynamical fluctuations.

pacs: 25.75.-q, 05.40.-a, 12.38.Mh
# The Spectrum of the Mass Donor Star in SS 433

D. R. Gies1, W. Huang1, and M. V. McSwain

Center for High Angular Resolution Astronomy, Department of Physics and Astronomy, Georgia State University, Atlanta, GA 30303

Electronic mail: [email protected], [email protected], [email protected]

Footnote 1: affiliation: Visiting Astronomer, University of Texas McDonald Observatory.

## 1 Introduction

SS 433 is still one of the most mysterious of the X-ray binaries even after some 25 years of observation (Margon 1984; Zwitter et al. 1989; Gies et al. 2002). We know that the mass donor feeds an enlarged accretion disk surrounding a neutron star or black hole companion, and a small portion of this inflow is ejected into relativistic jets that are observed in optical and X-ray emission lines and in high resolution radio maps. There are two basic timescales that control the spectral appearance and system dynamics: a 162 d disk and jet precessional cycle and a 13 d orbital period. The mass function derived from the He ii \\(\\lambda\\)4686 emission line indicates that the donor star mass is in excess of 8 \\(M_{\\odot}\\) (Fabrika & Bychkova 1990), and the donor is probably a Roche-filling, evolved star (King et al. 2000). However, the spectral signature of this star has eluded detection. This is probably because the binary is embedded in an expanding thick disk that is fed by the wind from the super-Eddington accretion disk (Zwitter et al. 1991). The outer regions of this equatorial thick disk have been detected in high resolution radio measurements by Paragi et al. (1999) and Blundell et al. (2001). We recently showed how many of the properties of the "stationary" emission lines can be explained in terms of a disk wind (Gies et al. 2002). The task of finding the spectrum of the donor is crucial, because without a measurement of its orbital motion the mass of the relativistic star is unknown. The best opportunity to observe the flux from the donor occurs at the precessional phase when the disk normal is closest to our line of sight and the donor star appears well above the disk plane, near the donor inferior conjunction orbital phase (Gies et al. 2002). This configuration occurs only a few nights each year for ground-based observers. The choice of spectral region is also important. Goranskii et al. (1998a) found that the regular eclipse and precessional variations seen clearly in the blue are lost in the red due to an erratically variable flux component. Thus, we need to search for the donor spectrum blueward of the \\(R\\)-band in order to avoid this variable component. On the other hand, the color variations observed during eclipses suggest that the donor is cooler than the central portions of the disk (Antokhina & Cherepashchuk 1987; Goranskii et al. 1997), so the disk will tend to contribute a greater fraction of the total flux at shorter wavelengths. The best compromise is in the blue, where there are a number of strong absorption lines in B- and later-type stars. Here we present the results of a blue spectral search for the donor's spectrum made during an optimal disk and orbital configuration in 2002 June (§2). We first discuss the dominant emission features formed in the jets and disk wind (§3). We then focus on a much weaker set of absorption lines (§4), and we present arguments linking these to the photosphere of the donor star. 
## 2 Observations

The blue spectra of SS 433 were obtained on three consecutive nights, 2002 June 5-7, with the Large Cassegrain Spectrograph (LCS) on the 2.7-m Harlan J. Smith Telescope at the University of Texas McDonald Observatory (Cochran 2002). These dates correspond to precessional phases between \\(\\Psi=0.998\\) and 0.011 (where \\(\\Psi=0.0\\) corresponds to the time when the jets are closest to our line of sight and their emission lines attain their extremum radial velocities) and to orbital phases between \\(\\phi=0.012\\) and 0.178 (where mid-eclipse and donor inferior conjunction occur at \\(\\phi=0.0\\)), according to the phase relations adopted in Gies et al. (2002). The spectra were made in first order with the #46 grating (1200 grooves mm\\({}^{-1}\\), blazed at 4000 Å), and they cover the wavelength range between 4060 and 4750 Å. We used a long slit configuration with a slit width of \\(2\\farcs 0\\), which corresponds to a projected width of 2 pixels FWHM on the detector, an \\(800\\times 800\\) format TI CCD with 15 \\(\\mu\\)m square pixels. The reciprocal dispersion is 0.889 Å pixel\\({}^{-1}\\) and the spectral resolving power is \\(\\lambda/\\triangle\\lambda=2500\\). Unfortunately, the weather was partially cloudy throughout the run, and we were only able to obtain a few consecutive, 20 minute exposures on each night. We also obtained a full suite of calibration bias, dome flat field, and argon comparison frames throughout the run. The spectra were extracted and calibrated using standard routines in IRAF2. We co-added all the consecutive spectra from a given night to increase the signal-to-noise ratio (S/N). The co-added spectra were then rectified to a unit continuum by fitting line-free regions. Note that this rectification process arbitrarily removes the continuum flux variations that occurred in SS 433 during the eclipse. All the spectra were then transformed to a common heliocentric wavelength grid for ease of comparison. The final S/N ratios in the continuum at the long-wavelength end of the spectra are 31, 16, and 40 pixel\\({}^{-1}\\) for the three consecutive nights, respectively.

Footnote 2: IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc., under cooperative agreement with the National Science Foundation.

## 3 Emission Line Spectrum

Our three spectra are shown in chronological sequence in Figure 1. This part of the spectrum is dominated by the familiar strong emission lines of H\\(\\delta\\), H\\(\\gamma\\), and He ii \\(\\lambda 4686\\) (Murdin et al. 1980; Panferov & Fabrika 1997; Fabrika et al. 1997a). Most of these lines appear stronger in the first, mid-eclipse spectrum, made when the continuum flux was low. We begin by examining the radial velocity and intensity variations of the emission lines before turning to the weaker absorption features in the next section. The only strong jet emission feature in this spectral range at this time is the blueshifted H\\(\\beta-\\) line (formed in the approaching jet), which is found just longward of H\\(\\gamma\\). This spectral range should also contain redshifted jet components from the upper Balmer sequence (from the Balmer limit at \\(\\approx 4267\\) Å to H\\(\\epsilon\\) at \\(\\approx 4645\\) Å), but, if present, they are too weak to detect in our spectra. We measured the radial velocity of the H\\(\\beta-\\) emission feature by fitting a single Gaussian in each case; a schematic sketch of this kind of measurement is given below. 
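The following is our own illustration of such a single-Gaussian velocity measurement. The profile is synthetic, with a centroid placed at the jet redshift of the first night (cf. Table 1); the helper names are ours, not from a reduction pipeline.

```python
# Sketch (our construction): single-Gaussian fit to measure the H-beta^-
# jet-line redshift z = V_r/c.  The profile here is synthetic; the real
# measurement is made on the rectified LCS spectra.
import numpy as np
from scipy.optimize import curve_fit

LAM0 = 4861.35                                   # H-beta rest wavelength (A)

def gauss(lam, amp, cen, sig, cont):
    return cont + amp * np.exp(-0.5 * ((lam - cen) / sig) ** 2)

lam = np.arange(4340.0, 4420.0, 0.889)           # LCS dispersion, A/pixel
true_cen = LAM0 * (1.0 - 0.099)                  # z = -0.099, cf. Table 1
flux = gauss(lam, 0.6, true_cen, 4.0, 1.0)
flux += np.random.default_rng(0).normal(0.0, 0.03, lam.size)  # S/N ~ 30

popt, _ = curve_fit(gauss, lam, flux, p0=(0.5, 4380.0, 3.0, 1.0))
print(f"z = {popt[1] / LAM0 - 1.0:+.3f}")        # recovers ~ -0.099
```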
These measurements are partially compromised by the appearance of multiple sub-peaks in some cases (see the final profile, which contains a redshifted sub-peak corresponding in radial velocity to the "bullet" that appeared on the previous night) and by the possible existence of very weak He i \\(\\lambda 4387\\) emission. Nevertheless, these measurements show the familiar oscillations in the jet velocities due to the tidal "nodding" of the accretion disk (Vermeulen et al. 1993; Gies et al. 2002). Table 1 lists the heliocentric Julian date for the mid-time of each observation, the observed H\\(\\beta-\\) radial velocity as \\(z=V_{r}/c\\) (estimated errors \\(\\pm 0.001\\)), and the predicted \\(z\\) value based upon the extrapolation of the fit to the H\\(\\alpha\\) jet velocities observed between 1998 and 1999 (Gies et al. 2002). The good agreement between the observed and predicted motions suggests that our jet precessional velocity fit is still reliable 3 years after the last H\\(\\alpha\\) observations. The stationary emission lines generally fall into two categories: lines like He ii \\(\\lambda 4686\\) that form in a region symmetric about the center of the accretion disk and that show orbital radial velocity curves of the form \\(-K_{1}\\sin(2\\pi\\phi)+V_{1}\\), and features like the hydrogen Balmer lines that probably form in a large volume in the disk's wind and have a radial velocity variation of the form \\(K_{2}\\cos(2\\pi\\phi)+V_{2}\\) (Gies et al. 2002). We measured the radial velocities of the stationary lines in our spectra to confirm their radial velocity behavior. The profiles were measured by Gaussian fitting in all cases except for the N iii + C iii complex near 4644 Å, where we measured relative shifts by cross-correlating the profiles of the second and third nights against that from the first night. Our results are listed in Table 2 (errors are \\(\\pm 30\\) km s\\({}^{-1}\\)). All the lines showed the blueward motion expected in this phase interval, but with a larger amplitude than predicted. The largest decline was found in N iii + C iii and He ii \\(\\lambda 4686\\), which decreased in radial velocity by \\(\\approx 214\\) km s\\({}^{-1}\\) over the duration of the observations. Fabrika & Bychkova (1990) estimate that the disk semiamplitude is \\(K_{1}\\approx 175\\) km s\\({}^{-1}\\), much smaller than the full decline we observed. The same situation occurred in the lines of the other group, H\\(\\delta\\), H\\(\\gamma\\), and He i \\(\\lambda 4471\\), which declined by some 108 km s\\({}^{-1}\\) (compared to the expected \\(K_{2}\\approx 64\\) km s\\({}^{-1}\\); Gies et al. 2002). We speculate that this difference is due to the emergence of the approaching portion of the disk after the eclipse, which biases the radial velocities to more negative values. The other clue about the origin of the stationary lines comes from their intensity variations during the eclipse. If the emission source has a constant flux, then during eclipse the line will appear proportionally stronger relative to the lower continuum flux. However, if the emission source is also (partially) eclipsed, then the relative strength in rectified intensity remains approximately constant through the eclipse. We show in Table 3 the rectified intensities of the emission lines for the first two nights relative to those on the final night. The final column gives the predicted variation for a constant emission flux relative to the last night, based upon the \\(V\\)-band light curve (Goranskii et al. 1998b, their Fig. 7b) and the \\(B-V\\) color variations (Goranskii et al. 1997, their Fig. 3).
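The "constant emission flux" prediction in the final column of Table 3 is just a continuum flux ratio. The sketch below is our own illustration: the eclipse depths of ≈0.70 and 0.22 mag are back-computed from the tabulated ratios (1.91 and 1.22), not read directly from the light curve.

```python
# Sketch (our construction): if the line flux is constant while the continuum
# is eclipsed, the rectified line intensity scales as the continuum ratio,
# 10**(0.4 * delta_m).  The depths below are back-computed from Table 3.
for phase, dm in ((0.012, 0.70), (0.093, 0.22)):   # eclipse depth in mag
    print(f"phi = {phase:.3f}: predicted ratio = {10 ** (0.4 * dm):.2f}")
```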
We find that most of the line intensities decreased by a factor of 2.4\\(\\times\\) between the first (mid-eclipse) and last (out-of-eclipse) observations, compared to an expected decrease of 1.9\\(\\times\\) for a constant emission flux source. This difference is probably not significant, since the intrinsic system flux does vary on short timescales, but the result does suggest that most of the lines form in regions large compared to the continuum-forming part of the accretion disk (so that the continuum-forming inner disk is eclipsed while the much larger line-forming region suffers only minor occultation). The main exception is the N iii + C iii line, which varies little in rectified intensity and must therefore form co-spatially with the continuum in the inner, hot part of the accretion disk. The other possible exception is the He ii \\(\\lambda\\)4686 line. Goranskii et al. (1997) and Fabrika et al. (1997b) observed profile shape variations during the eclipse that we also find (for example, from single- to double-peaked with egress from the eclipse), and they argue that the feature has two components, one formed in gas close to the disk center (eclipsed) and one formed in a larger volume (not eclipsed). This assessment agrees with the fact that, after the N iii + C iii line, the He ii \\(\\lambda\\)4686 feature shows the second smallest decrease in rectified flux, suggesting that its line-forming region is more occulted during the eclipse than is the case for the H and He i lines (formed in the larger disk wind).

## 4 Absorption Line Spectrum

We were struck in comparing the individual spectra by the similarity of patterns of what might at first be considered "noise" in the continuum between the strong emission lines. We formed a global average spectrum with each spectrum weighted by the square of its S/N ratio, and an expanded version of this average spectrum is shown in Figures 2 and 3. We show below the average SS 433 plot examples of stellar spectra that could correspond to the spectral type of the donor star. The two B-star spectra were obtained from the atlas of Walborn & Fitzpatrick (1990), and we obtained the spectrum of the A-supergiant, HD 148743, with the LCS. All these spectra were Gaussian smoothed to an effective 2 pixel resolution of 1.78 Å FWHM in order to compare them at the same instrumental resolution. We see that there is a significant (and largely unresolved) system of absorption lines in SS 433 that has eluded detection in earlier work. The line patterns bear little resemblance to those in B-supergiants, but there are a number of features in common with the A-supergiant (\\(T_{\\rm eff}=7800\\) K; Venn 1995). We show some preliminary identifications of the absorption lines based upon the lists in Ballereau (1980) and Venn (1995). We measured the radial velocity of the absorption system by cross-correlating subsections of the spectrum containing the deepest features with the same regions in the first spectrum of the set. These relative velocities are listed in the final column of Table 2. (We found that the absorption lines in the first spectrum had a cross-correlation velocity of \\(-39\\pm 20\\) km s\\({}^{-1}\\) compared to the rest frame spectrum of HD 148743, so adding this value to the velocities in Table 2 gives an estimate of the absolute velocities.) Unlike the emission lines, the absorption lines moved significantly redward during the run, as expected for the donor star. 
Direct inspection of Figure 1 suggests that the absorption line spectrum grew fainter with egress from the eclipse, as predicted for the donor's spectrum. It is difficult to measure the change in individual lines, so we instead measured the strength of the cross-correlation of the absorption lines with those in HD 148743. The relative cross-correlation intensities are listed in the second-to-last column of Table 3 (errors of \\(\\pm 0.5\\)). The absorption depths weakened in the same way as the emission intensities, suggesting that both became diluted by the emerging flux of the disk. Both the radial velocity and the intensity variations indicate that the absorption lines form in the long-sought donor star. If they were formed in the interstellar medium (the case for the absorptions at 4428, 4501, and 4726 Å), then we would observe no velocity or intensity variations through these eclipse phases. Similar arguments rule out formation in an extended "shell" surrounding the binary system. The absorption line velocities are insufficient for a general orbital solution, but we can make a restricted solution if we assume that the orbit is circular and adopt the orbital period and epoch of mid-eclipse from Goranskii et al. (1998b). Then we can solve for two parameters, the systemic velocity, \\(V_{0}\\), and the orbital semiamplitude, \\(K_{O}\\), from our 3 radial velocity measurements of the optical star. We used the orbital fitting code of Morbey & Brosterhus (1974) to find \\(V_{0}=(-44\\pm 9)\\) km s\\({}^{-1}\\) and \\(K_{O}=(100\\pm 15)\\) km s\\({}^{-1}\\), with residual errors of 9 km s\\({}^{-1}\\). Fabrika & Bychkova (1990) find that the semiamplitude of the disk is \\(K_{X}=(175\\pm 20)\\) km s\\({}^{-1}\\), based upon the He ii \\(\\lambda 4686\\) radial velocity curve. They adopt the system inclination from the "kinematical" model of the jets to arrive at a mass relation, \\(M_{O}/(1+q)^{2}=(7.7\\pm 2.7)\\)\\(M_{\\odot}\\), where the mass ratio is \\(q=M_{X}/M_{O}\\). We can now combine the semiamplitudes to estimate the mass ratio, \\(q=K_{O}/K_{X}=0.57\\pm 0.11\\), and the resulting masses are \\(M_{O}=(19\\pm 7)\\)\\(M_{\\odot}\\) and \\(M_{X}=(11\\pm 5)\\)\\(M_{\\odot}\\). Thus, our results suggest that the companion is a black hole. A simple version of this arithmetic is sketched below.
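This sketch is our own construction: a plain unweighted least-squares stand-in for the Morbey & Brosterhus (1974) code, using the phases of §2 and the relative velocities of Table 2 (zero point \\(-39\\) km s\\({}^{-1}\\)), and assuming the donor velocity curve \\(V=V_{0}+K_{O}\\sin(2\\pi\\phi)\\) with mid-eclipse at \\(\\phi=0\\).

```python
# Sketch (our construction): restricted circular-orbit fit and the mass
# arithmetic of the text.  An unweighted fit is used here, so the numbers
# differ slightly from the weighted solution quoted above.
import numpy as np

phi = np.array([0.012, 0.093, 0.178])              # orbital phases
vel = np.array([0.0, 38.0, 87.0]) - 39.0           # absolute RVs, km/s

A = np.column_stack([np.ones_like(phi), np.sin(2.0 * np.pi * phi)])
(v0, k_o), *_ = np.linalg.lstsq(A, vel, rcond=None)
print(f"V0 ~ {v0:.0f} km/s, K_O ~ {k_o:.0f} km/s")  # ~ -50, ~104 (cf. -44, 100)

q = k_o / 175.0                                    # K_X = 175 km/s (F&B 1990)
m_o = 7.7 * (1.0 + q) ** 2                         # M_O/(1+q)^2 = 7.7 Msun
print(f"q ~ {q:.2f}, M_O ~ {m_o:.0f} Msun, M_X ~ {q * m_o:.0f} Msun")
```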
The absorption line depths strengthen from main sequence to supergiant in the A stars, and they appear sufficiently strong in the first eclipse spectrum to rule out a main sequence class. The donor star probably fills its critical Roche surface, and our estimate of the mass ratio indicates a Roche volume radius of \\((31\\pm 3)\\)\\(R_{\\odot}\\), consistent with a supergiant class. Thus, our results for SS 433 support the evolutionary scenario described by King et al. (2000), in which mass transfer occurs on a thermal timescale as the donor crosses the Hertzsprung gap. We can estimate the magnitude difference between the star and the disk, \\(\\triangle B\\), based upon the apparent line depths. Suppose that during the central eclipse a disk flux of \\(F_{1}\\) is occulted while a disk flux of \\(F_{2}\\) and a donor star flux \\(F_{\\star}\\) remain visible. The stellar line depths will then appear diluted by a factor \\(F_{\\star}/(F_{2}+F_{\\star})\\). The observed line depths during eclipse (relative to the spectrum of the A-supergiant) suggest that the dilution is minimal, so \\(F_{2}/F_{\\star}=0\\) to 1. Based upon the line intensity variations we observed (Table 3), the ratio of the out-of-eclipse to mid-eclipse flux is \\(2.38\\pm 0.15\\), and thus the disk-to-star flux ratio is \\((F_{1}+F_{2})/F_{\\star}=1.4\\) to 2.8 (or \\(\\triangle B=0.3\\) to 1.4 mag). A donor star this bright may appear to be in conflict with earlier results (Antokhina & Cherepashchuk 1987; Gies et al. 2002), but we suspect that the star is heavily obscured at other precessional and orbital phases, so that the line spectrum is difficult to find (and thus an estimate of the donor star's flux based on the absence of the lines will be too low). Clearly our results should be regarded as preliminary, since we have only observed the absorption spectrum during this one eclipse event, and SS 433 is known to display spectroscopic variations on timescales unrelated to the orbit. Nevertheless, confirmation of our results (especially in spectra of higher S/N and resolution) would be of particular importance in determining the stellar parameters of the donor (\\(T_{\\rm eff}\\), \\(\\log g\\), and abundance) and in refining the mass estimates. We emphasize again that the successful detection of the donor spectrum is probably limited to times near \\(\\Psi=0\\) and \\(\\phi=0\\). The next opportunities will occur near 2003 April 28 and 2003 October 2.

We are grateful to the staff of McDonald Observatory and especially Dr. Anita Cochran for their help. Support for this work was provided by NASA through grant number GO-8308 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Institutional support has been provided by the GSU College of Arts and Sciences and by the Research Program Enhancement fund of the Board of Regents of the University System of Georgia, administered through the GSU Office of the Vice President for Research. We gratefully acknowledge all this support.

## References

* Antokhina, E. A., & Cherepashchuk, A. M. 1987, Soviet Astr., 31, 295
* Ballereau, D. 1980, A&AS, 41, 305
* Blundell, K. M., Mioduszewski, A., Muxlow, T. W. B., Podsiadlowski, P., & Rupen, M. 2001, ApJ, 562, L79
* Cochran, A. L. 2002, User's Manual for the Large Cassegrain Spectrograph and Automated Telescope Offset Guider (Austin: Univ. of Texas McDonald Obs.) ([http://www.as.utexas.edu/mcdonald/computer/atog_manual.pdf](http://www.as.utexas.edu/mcdonald/computer/atog_manual.pdf))
* Fabrika, S. N., & Bychkova, L. V. 1990, A&A, 240, L5
* Fabrika, S. N., Panferov, A. A., Bychkova, L. V., & Rakhimov, V. Yu. 1997b, Bull. Special Astrophys. Obs., 43, 95
* Fabrika, S. N., et al. 1997a, Bull. Special Astrophys. Obs., 43, 109
* Gies, D. R., McSwain, M. V., Riddle, R. L., Wang, Z., Wiita, P. J., & Wingert, D. W. 2002, ApJ, 566, 1069
* Goranskii, V. P., Fabrika, S. N., Rakhimov, V. Yu., Panferov, A. A., Belov, A. N., & Bychkova, L. V. 1997, Astr. Rep., 41, 656
* Goranskii, V. P., Esipov, V. F., & Cherepashchuk, A. M. 1998a, Astr. Rep., 42, 336
* Goranskii, V. P., Esipov, V. F., & Cherepashchuk, A. M. 1998b, Astr. Rep., 42, 209
* King, A. R., Taam, R. E., & Begelman, M. C. 2000, ApJ, 530, L25
* Margon, B. 1984, ARA&A, 22, 507
* Morbey, C. L., & Brosterhus, E. B. 1974, PASP, 86, 455
* Murdin, P., Clark, D. H., & Martin, P. G. 1980, MNRAS, 193, 135
* Panferov, A. A., & Fabrika, S. N. 1997, Astr. Rep., 41, 506
* Paragi, Z., Vermeulen, R. C., Fejes, I., Schilizzi, R. T., Spencer, R. E., & Stirling, A. M. 1999, A&A, 348, 910
* Venn, K. A. 1995, ApJS, 99, 659
* Vermeulen, R. C., et al. 1993, A&A, 270, 204
* Walborn, N. R., & Fitzpatrick, E. L. 1990, PASP, 102, 379
* Zwitter, T., Calvani, M., Bodo, G., & Massaglia, S. 1989, Fund. Cosmic Phys., 13, 309
* Zwitter, T., Calvani, M., & D'Odorico, S. 1991, A&A, 251, 92

Figure 1: The rectified spectra of SS 433 marked with labels for the prominent features. The spectra are shown chronologically from the first night (_upper_) to the last night (_lower_), and the spectrum for each night is offset by unity for clarity. The orbital phases of observation, \\(\\phi\\), are listed above each spectrum.

Figure 2: A detailed plot of the low wavelength portion of the co-added spectrum of SS 433 (_thick line_). The individual spectra were shifted to the rest frame prior to co-addition. Several examples of the predicted donor spectrum are shown below (_thin lines_): HD 148743 (A7 Ib), \\(\\beta\\) Ori (B8 Ia), and HD 148688 (B1 Ia). All the spectral intensities were increased by a factor of 2 and offset in intensity for ease of comparison. Preliminary identifications are given for a number of weak absorption lines in the spectrum of SS 433. A plus sign following an identification indicates a blend of multiple lines.

Figure 3: A detailed plot of the long wavelength portion of the co-added spectrum of SS 433 in the same format as Fig. 2.

\\begin{table} \\begin{tabular}{l l l} \\hline \\hline Date & Observed & Predicted \\\\ (HJD - 2450000) & \\(z\\)(H\\(\\beta-\\)) & \\(z\\)(H\\(\\alpha-\\)) \\\\ \\hline 2430.7616 & \\(-\\)0.099 & \\(-\\)0.098 \\\\ 2431.8226 & \\(-\\)0.097 & \\(-\\)0.095 \\\\ 2432.9381 & \\(-\\)0.103 & \\(-\\)0.100 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Jet Radial Velocity Measurements

\\begin{table} \\begin{tabular}{c c c c c c c} \\hline \\hline Date & \\(V_{r}\\)(H\\(\\delta\\)) & \\(V_{r}\\)(H\\(\\gamma\\)) & \\(V_{r}\\)(He i) & \\(V_{r}\\)(N iii)\\({}^{a}\\) & \\(V_{r}\\)(He ii) & \\(V_{r}\\)(Abs.)\\({}^{a}\\) \\\\ (HJD - 2450000) & (km s\\({}^{-1}\\)) & (km s\\({}^{-1}\\)) & (km s\\({}^{-1}\\)) & (km s\\({}^{-1}\\)) & (km s\\({}^{-1}\\)) & (km s\\({}^{-1}\\)) \\\\ \\hline 2430.7616 & 169 & 204 & 264 & 0 & 168 & 0 \\\\ 2431.8226 & 25 & 118 & 204 & \\(-96\\) & 50 & \\(38\\pm 35\\) \\\\ 2432.9381 & 77 & 80 & 157 & \\(-216\\) & \\(-44\\) & \\(87\\pm 15\\) \\\\ \\hline \\end{tabular} \\end{table} Table 2: Stationary Line Radial Velocity Measurements (\\({}^{a}\\) relative shifts measured by cross-correlation against the first night)

\\begin{table} \\begin{tabular}{l c c c c c c c c} \\hline \\hline Orbital Phase & H\\(\\delta\\) & H\\(\\gamma\\) & He i & N iii & He ii & H\\(\\beta-\\) & Abs.\\({}^{a}\\) & \\(B\\) Light Curve\\({}^{b}\\) \\\\ \\hline 0.012 & 2.28 & 2.53 & 2.47 & 1.20 & 2.16 & 2.44 & 2.6 & 1.91 \\\\ 0.093 & 1.58 & 1.38 & 1.56 & 1.16 & 1.40 & 1.25 & 1.5 & 1.22 \\\\ \\hline \\end{tabular} \\end{table} Table 3: Relative Line Intensities \\(F_{l}/F_{c}\\) : \\(F_{l}/F_{c}(\\phi=0.178)\\) (\\({}^{a}\\) relative cross-correlation intensity of the absorption lines; \\({}^{b}\\) predicted ratio for a constant emission flux source)
We present results from a short series of blue, moderate resolution spectra of the microquasar binary SS 433. The observations were made at a time optimized to find the spectrum of the donor star, i.e., when the donor was in the foreground and well above the plane of the obscuring disk. In addition to the well-known stationary and jet emission lines, we find evidence of a weak absorption spectrum that resembles that of an A-type evolved star. These lines display radial velocity shifts opposite to those associated with the disk surrounding the compact star, and they appear strongest when the disk is maximally eclipsed. All these properties suggest that these absorption lines form in the atmosphere of the hitherto unseen mass donor star in SS 433. The radial velocity shifts observed are consistent with a mass ratio \\(M_{X}/M_{O}=0.57\\pm 0.11\\) and masses of \\(M_{O}=(19\\pm 7)\\)\\(M_{\\odot}\\) and \\(M_{X}=(11\\pm 5)\\)\\(M_{\\odot}\\). These results indicate that the system consists of an evolved, massive donor and a black hole mass gainer.

stars: early-type -- stars: individual (SS 433; V1343 Aql) -- X-rays: binaries

Submitted to ApJL.
# RX J1856\\(-\\)3754: Evidence for a Stiff EOS

Timothy M. Braje and Roger W. Romani

Physics Department, Stanford University, Stanford, CA 94305

[email protected], [email protected]

## 1 Introduction

RX J1856\\(-\\)3754, discovered by Walter, Wolk, & Neuhauser (1996), is the nearest and brightest known neutron star not showing emission dominated by non-thermal magnetospheric processes. As such, it offers a unique opportunity to study bare thermal surface emission. Measurements of the spectrum can probe the neutron star mass (M\\({}_{\\rm NS}\\)) and radius (R\\({}_{\\rm NS}\\)), constraining the high density equation of state (EOS). Since the discovery, there have been several intensive observing campaigns covering the optical-UV (_HST_) and, most recently, the detailed soft X-ray spectrum (_CXO_). An initial 50 ks Low-Energy Transmission Grating (LETG) spectrum showed a broad band spectrum remarkably consistent with a simple blackbody (Burwitz _et al._ 2001), although hints of spectral features were suggested (van Kerkwijk 2002). A deeper observation using 450 ks of Director's Discretionary Time (DDT) was then made. This unique data set has been the subject of prompt study; several authors show that lines in the spectrum are undetectable, while pulse searches have placed stringent limits on the observed soft X-ray pulse fraction (Ransom, Gaensler, & Slane 2002; Drake _et al._ 2002). These data have been variously interpreted, including the widely reported speculation (based on the X-ray spectrum alone) that RX J1856\\(-\\)3754 might be a bare quark star (esp. Drake _et al._ 2002). Despite the very stringent constraints placed on the X-ray pulse fraction (\\(<4.5\\)% at 99% confidence, including a \\(\\dot{P}\\) search; Ransom, Gaensler, & Slane 2002), there is strong evidence that RX J1856\\(-\\)3754 is a rotation-powered pulsar. van Kerkwijk & Kulkarni (2001b) discovered an H\\(\\alpha\\) nebula surrounding the neutron star, concluding that it could be best interpreted as a bow-shock nebula powered by a relativistic wind of \\(e^{\\pm}\\) generated by pulsar spindown. The bow shock geometry then provides an estimate of the spindown power \\(\\dot{E}=I\\Omega\\dot{\\Omega}=8\\times 10^{32}\\) erg/s \\(d_{140}^{3}\\) (van Kerkwijk & Kulkarni 2001b). Adopting magnetic dipole braking at constant \\(B\\), this gives \\(\\dot{E}=10^{34}(B_{12}\\tau_{6})^{-2}\\) erg/s for a surface dipole field \\(10^{12}B_{12}\\) G and characteristic age \\(10^{6}\\tau_{6}\\) y, suggesting \\(B_{12}\\tau_{6}\\sim 3\\). A critical parameter in the discussion of this source is the distance, which has been the subject of some controversy. Initial estimates from _HST_ astrometry gave a parallax distance of 61 pc (Walter 2001). Kaplan, van Kerkwijk, & Anderson (2002), however, re-analyzed these data, deriving \\(d=143\\) pc. A fourth _HST_ observation appears to have resolved this discrepancy, giving an overall measurement of \\(d=117\\pm 12\\) pc (Walter & Lattimer 2002); we adopt this value.

## 2 Spectral Fits

For some time now, it has been clear that the broad band spectrum of RX J1856\\(-\\)3754 from _HST_, _ROSAT_, and _EUVE_ data (for a detailed discussion, see Pons _et al._ 2002) is inconsistent with light element (\\(\\sim\\) Kramers law opacity) H or He atmospheres. H models, for example, overpredict the optical/X-ray flux ratio by a factor \\(\\sim 100\\) (Pavlov _et al._ 1996). Blackbody, heavy element, or composite models gave acceptable fits. 
To produce such \\(\\sim\\) Planckian spectra, one must have nearly isothermal conditions at optical depth \\(\\tau_{\\nu}\\approx 1\\) across the observed band. One possibility is that the surface is in a solid or liquid state, precluding large temperature gradients through the photosphere. Theoretical studies to date, while limited, suggest that this is not the case unless H is present and \\(B>10^{13}\\) G (Lai & Salpeter 1997). In an atmosphere, the easiest way to form the spectrum in a small depth range is to invoke a rich line spectrum; thus, at first sight, blackbody-like spectra should require a high spectral density of opacity features (lines and edges). This led to the expectation that the blackbody-like spectrum of RX J1856\\(-\\)3754 would show many spectral features when examined with good S/N LETG spectra. Unfortunately, the initial 50 ks exposure already showed that the line features were substantially weaker than expected in a simple low-\\(B\\), single temperature atmosphere model dominated by heavy elements. The new DDT exposure only enhances this conclusion, placing very strong limits (typical equivalent width \\(\\lesssim 0.02\\) Å; Drake _et al._ 2002) on spectral features. We will be concerned with the quality of statistical fits to various atmosphere models. It is important to note here that blackbody spectral fits to the _CXO_ RX J1856\\(-\\)3754 LETG data are, contrary to early reports, _not_ statistically perfect fits to a simple Planck spectrum. The early conclusion was drawn from fits to basic CIAO extractions, which appreciably underbin the spectrum. Instead we find that at a more modest binning (equally spaced \\(\\sim 0.7\\) Å bins) we obtain \\(\\chi^{2}\\)/DOF = 1.6. As the bin size is increased, the \\(\\chi^{2}\\)/DOF grows to \\(\\sim 4.8\\), until the number of degrees of freedom becomes small. This is a clear signature of spectral departures on resolved energy scales, and with appropriate binning one indeed finds systematic, grouped residuals to the Planck function fit at the \\(\\sim 10\\%\\) level. We believe that these represent the limit of accuracy in the calibration of the response matrix, as the broad band spectral shape is an excellent fit to the Planck function. Drake _et al._ (2002) have reached similar conclusions. Recognizing that very subtle departures from a pure blackbody may be present in these data, we adopt the conservative assumption that these departures are fully accounted for by response matrix systematics. To accommodate an extended atmosphere, one must suppress the spectral features. One possibility is that external heat sources (such as precipitating magnetospheric \\(e^{\\pm}\\)) drive the atmosphere towards isothermality. Sample atmospheres showing this effect have been computed in Gansicke, Braje, & Romani (2002). A second possibility is that the line energies for a given species vary strongly across the neutron star surface. For normal pulsar fields, \\(B\\sim 10^{12}\\) G or higher, the strong dependence of the transition energies on the local \\(B\\) (_e.g._ Rajagopal, Romani, & Miller 1997; Pavlov _et al._ 1995), coupled with the substantial \\(\\geq 2\\times\\) variation of \\(B\\) across the surface even for the simplest dipole models, ensures that such 'magnetic smearing' will strongly suppress the phase-averaged line width (Romani, in prep.). We discuss briefly here a third possibility: that the lines experience variable shifts as the pulsar rotates, due to Doppler and other dynamic effects. 
Again, the phase-averaged spectrum observed for RX J1856\\(-\\)3754 would then be expected to show broadened and blended lines, driving the spectrum towards a Planck curve. If, as is required for a normal neutron star, the soft X-ray emission of RX J1856\\(-\\)3754 is dominated by hot polar caps, the rotation of these past the line of sight produces phase dependent Doppler shifts (Braje, Romani, & Rauch 2000). These are only significant for \\(v_{surf}\\sim 2\\pi\\rm R_{NS}/P_{NS}\\longrightarrow c\\), _i.e._ \\(\\rm P_{NS}\\lesssim\\) a few ms. Such small \\(\\rm P_{NS}\\) are not excluded, since the HRC-S wiring error limits arrival time accuracy and precludes sensitive searches for \\(\\rm P_{NS}\\lesssim 10\\,ms\\). Moreover, the \\(\\dot{E}\\) from the bow shock stand-off gives \\(\\rm P_{NS}=4.6(B/10^{8}\\,G)^{1/2}\\) ms, so a low field star, having a non-magnetic atmosphere, would have a \\(\\sim\\) ms spin period. For a concrete example, we assume \\(\\rm M_{NS}=1.4M_{\\odot}\\), \\(\\rm R_{NS}=10\\) km, and \\(\\rm P_{NS}=1.5\\) ms (allowed, as the existence of PSR 1937+21 shows). We have tried both solar abundance and iron model atmospheres, tested a range of magnetic inclinations \\(\\alpha\\), and computed phase averaged spectra using an extension of the Monte Carlo simulation code described in Braje, Romani, & Rauch (2000). We then tested these models, fitting the most recent _CXO_ data, allowing the temperature, observer viewing angle \\(\\zeta\\), and interstellar absorption \\(\\rm n_{H}\\) to vary. The large L and M shell edges in the iron models ensure that these are always poor fits. The solar abundance atmospheres have a much richer line structure, which is more easily blurred into a pseudo-continuum. In Figure 1, we display the solar abundance millisecond pulsar model, overlaid on the _CXO_ data. For comparison, the best fit non-rotating model is shown in the upper panel. Doppler boosting produces a qualitatively acceptable fit below \\(\\sim 0.5\\) keV, but the simple blackbody remains statistically an appreciably better model (\\(\\chi^{2}\\)/DOF = 1.6 _vs._ 3.7 for the binning chosen). We must conclude that a simple blackbody fit remains the best available, although not yet physically explained. Planck emission from a physical neutron star can be compared with the data to obtain significant constraints on the neutron star parameters; we pursue this in the remainder of the paper.

## 3 Two Temperature Blackbody Fits

A blackbody fit to the _CXO_ data alone results in a temperature of \\(T_{\\infty}\\sim 61\\) eV with very small statistical uncertainty (Drake _et al._ 2002, and our own analysis). We find, as also reported by Drake _et al._ (2002), that systematics (likely in the effective area, as noted above) provide the dominant error. Drake _et al._ (2002) quote \\(61.2\\pm 1.0\\) eV. Taken at face value, this \\(T_{\\infty}\\) with the parallax distance gives a radius as measured at infinity of \\(\\rm R_{\\infty}=(1+z)R_{NS}=3.8-8.2\\) km. If interpreted as the full stellar radius, this demands exotic equations of state (_i.e._ quark stars). Of course, this is only a lower limit to the stellar radius. The optical/UV data points, which closely follow a Rayleigh-Jeans spectrum (_e.g._ Pons _et al._ 2002; van Kerkwijk & Kulkarni 2001a), are most easily interpreted as a second, cooler Planck spectrum representing flux from the full surface. 
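A quick way to see why such a small \\(\\rm R_{\\infty}\\) is so restrictive is the standard redshift relation. The sketch below is our own aside, not a calculation from the paper:

```python
# Sketch (our construction): R_inf = R / sqrt(1 - 2GM/Rc^2) has a minimum of
# 3*sqrt(3)*GM/c^2 (reached at R = 3GM/c^2), so a small apparent radius
# bounds the mass of any normal neutron star interpretation.
import numpy as np

RS_PER_MSUN = 2.953                       # Schwarzschild radius, km per Msun

def r_inf(r_km, m_msun):
    return r_km / np.sqrt(1.0 - RS_PER_MSUN * m_msun / r_km)

print(f"R_inf(10 km, 1.4 Msun) = {r_inf(10.0, 1.4):.1f} km")          # ~13.1
print(f"min R_inf at 1.4 Msun  = {1.5 * np.sqrt(3) * RS_PER_MSUN * 1.4:.1f} km")
# 8.2 km < ~10.7 km: at 1.4 Msun the hot area cannot be the full surface.
```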
This interpretation has been previously recognized, but Drake _et al._ (2002) argue against it, citing the absence of the X-ray pulse expected from such a hot polar cap/cool surface combination. We have addressed this concern quantitatively, computing detailed light curves and spectra.

Figure 1: _Top:_ Best fit solar abundance model with no Doppler shifts. _Bottom:_ Best fit solar abundance model with Doppler shifts for a \\(\\rm P_{NS}=1.5\\) ms, \\(\\rm R_{NS}=10\\) km pulsar.

### Analytic Two-Temperature Models

A simple analytic two-temperature blackbody fit delineates the basic model parameters. For a range of effective (cold surface) radii, we fit \\(\\rm T_{hot}\\), \\(\\rm T_{cold}\\), the hot area, and the absorption column density. In Figure 2, we plot the minimum \\(\\chi^{2}\\) as a function of cold radius. The optical-UV data points fix \\(\\rm R_{NS}^{2}T_{cold}\\). The fit becomes poor when \\(\\rm T_{cold}\\) starts to allow a significant Wien peak contribution to the _CXO_ X-ray band; this sets the minimum stellar radius. Formally, there is also a maximum acceptable radius, beyond which the low \\(\\rm T_{cold}\\) predicts Wien peak curvature inconsistent with the \\(\\sim\\) Rayleigh-Jeans UV data points.

Figure 2: Solid line: \\(\\chi^{2}\\) (257 DOF) as a function of cold sphere radius. Points: \\(\\chi^{2}\\) values from the polar-cap model fits.

### More Realistic Two-Temperature Models

In addition to the phase averaged spectrum, we have a limit on the _CXO_-band pulse fraction. A more realistic model is required to address the detailed spectrum and pulsations. We adopt a two-temperature model with two opposing hot spots (polar caps) at \\(\\rm T_{hot}\\) and the remainder of the surface at \\(\\rm T_{cold}\\). The caps' orientation angles (\\(\\alpha\\) and \\(\\zeta\\)) are free parameters. We radiate from these surfaces, tracing the photons to infinity to form phase resolved spectra and light curves. For details of these Monte-Carlo sums see Braje, Romani, & Rauch (2000). The analytic model results allow some useful simplifications. Since \\(\\rm T_{hot}\\) is virtually constant over the full acceptable \\(\\rm R_{NS}\\) range, we fix this value in exploring the rest of the parameter space. Further, the X-ray flux amplitude allows an initial estimate of the cap half angle \\(\\Delta\\) (which depends on \\(\\alpha\\) and \\(\\zeta\\)) that ensures that the pulse formation is accurate and results in quick convergence to the true minimum. The fit parameters of interest are \\(\\rm R_{NS}\\), \\(\\Delta\\), \\(\\rm T_{cold}\\), \\(\\zeta\\), \\(\\alpha\\), and \\(\\rm n_{H}\\). To explore the sensitivity to several parameters, we have computed a model grid. We calculate all models for \\(\\alpha=5^{\\circ}\\) to \\(\\alpha=90^{\\circ}\\) in five degree steps; \\(\\zeta=0^{\\circ}\\) to \\(\\zeta=90^{\\circ}\\) in five degree steps; and \\(\\rm R_{NS}=12\\;km\\) to \\(\\rm R_{NS}=20\\;km\\) in one kilometer steps. While not strictly a fit parameter, we also vary the stellar mass \\(\\rm M_{NS}\\). The quality of the spectral fit turns out to be quite insensitive to the choice of \\(\\alpha\\) and \\(\\zeta\\). In Figure 3, we display a typical spectrum for a model with \\(\\rm M_{NS}=1.4M_{\\odot}\\), \\(\\rm R_{NS}\\gtrsim 14\\;km\\). As the radius becomes smaller, the optical-UV flux is underpredicted and the fit becomes unacceptable. For each \\(\\alpha\\) and \\(\\zeta\\) we compute \\(\\chi^{2}\\) as a function of \\(\\rm R_{NS}\\); the average over angles is shown by the points in Figure 2. Note that the minimum value is not at \\(\\chi^{2}_{\\nu}=1\\), a consequence of the aforementioned systematic errors. This means that the errors are not Poisson distributed according to the bin counts. To establish confidence levels (CL), we must Monte Carlo according to the observed error distribution. We have computed the \\(\\chi^{2}\\) distribution about the best fit model for each (\\(\\alpha\\), \\(\\zeta\\)) combination, obtaining a histogram of \\(\\chi^{2}\\) values. These were almost completely insensitive to the angles, so we combined all the \\(\\chi^{2}\\) distributions to obtain confidence levels for the \\(\\chi^{2}\\) increases associated with variations in \\(\\rm R_{NS}\\). Examination of the differences between the different angles and between independent Monte Carlo runs shows that the estimates of the 90% and 99% confidence level limits on the radius are uncertain by no more than \\(\\pm 0.5\\;\\rm km\\) from systematic and computational errors in this procedure.

Figure 3: Broad band spectral fit to RX J1856\\(-\\)3754. Optical/UV data points are drawn from van Kerkwijk & Kulkarni (2001a) and Pons _et al._ (2002). The dotted lines show the unabsorbed hot and cold blackbody components.
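To make the two-temperature picture concrete, here is a minimal flux-density sketch of the analytic model of §3.1. It is our own construction: the numerical values are illustrative placeholders, and occultation, the viewing angles, and GR light bending (handled by the Monte Carlo code described above) are deliberately omitted.

```python
# Sketch (our construction): a hot-cap plus cool-surface Planck model in
# flux-density units.  All parameter values below are placeholders; angles,
# eclipses and GR light bending are omitted.
import numpy as np

H, C, KB = 6.626e-27, 2.998e10, 1.381e-16        # cgs constants

def planck_nu(nu, t_ev):
    t_k = t_ev * 1.602e-12 / KB                  # eV -> K
    return 2.0 * H * nu**3 / C**2 / np.expm1(H * nu / (KB * t_k))

def two_temp_flux(nu, t_hot=61.0, t_cold=30.0, r_cold_km=14.0,
                  cap_area_frac=0.1, d_pc=117.0):
    d_cm = d_pc * 3.086e18
    a_cold = np.pi * (r_cold_km * 1.0e5) ** 2    # projected area (flat approx)
    a_hot = cap_area_frac * a_cold
    return (a_hot * planck_nu(nu, t_hot)
            + a_cold * planck_nu(nu, t_cold)) / d_cm**2

for label, nu in (("optical, 5000 A", 6.0e14), ("X-ray, 0.6 keV", 1.45e17)):
    print(f"{label}: F_nu ~ {two_temp_flux(nu):.2e} erg/s/cm^2/Hz")
# The cool component dominates the optical flux and the hot cap dominates
# the X-ray band -- the division exploited by the fits in this section.
```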
This means that the errors are not Poisson distributed according to the bin counts. To establish confidence levels (CL), we must Monte Carlo according to the observed error distribution. We have computed the \\(\\chi^{2}\\) distribution about the best fit model for each (\\(\\alpha\\), \\(\\zeta\\)) combination, obtaining a histogram of \\(\\chi^{2}\\) values. These were almost completely insensitive to the angles, so we combined all the \\(\\chi^{2}\\) distributions to obtain confidence levels for the \\(\\chi^{2}\\) increases associated with variations in \\(\\rm R_{NS}\\). Examination of the differences between the different angles and between independent Monte Carlo runs shows that the estimates of 90% and 99% confidence level limits on the radius are uncertain by no more than \\(\\pm 0.5\\;\\rm km\\) from systematic and computational errors in this procedure.
### Cap Shape Constraints
One might question whether a simple, circular uniform \\(\\rm T_{hot}\\) polar cap is merely an adequate approximation to the data. We have fit some alternative surface temperature distributions \\(T(\\eta)\\), where \\(\\eta\\) is the magnetic co-latitude, comparing with the best fit circular cap which had \\(k\\rm T_{hot}=62.8\\;\\rm eV\\), \\(\\Delta=21^{\\circ}\\). The best fit Gaussian \\(T(\\eta)\\) had a peak temperature of \\(k\\rm T_{hot}=74.8\\;\\rm eV\\) and width \\(\\sigma=19^{\\circ}\\), but showed an increase of \\(\\chi^{2}\\approx 80\\) over the simple cap model, sufficient to exclude it at the \\(\\sim 99\\%\\) CL. Adding a simple linear \\(\\rm T_{hot}\\) to \\(\\rm T_{cold}\\) ramp to a uniform cap makes no discernible difference until the ramp width is twice that of the cap. At this point the best fit model has a cap with \\(k\\rm T_{hot}=69\\;\\rm eV\\) and \\(\\Delta\\approx 12^{\\circ}\\), but the model is excluded at the \\(\\sim 90\\%\\) CL. Finally, we fit both a simple \\(\\rm T_{hot}\\propto\\cos(\\eta)\\) model and the surface \\(T(\\eta)\\) distribution of Greenstein & Hartke (1983), which is motivated by magnetic anisotropy in the thermal conductivity. Both were excluded by the _CXO_ data at very high confidence. All fits were at \\(1.4\\rm M_{\\odot}\\), with best fit stellar radii and (\\(\\alpha\\), \\(\\zeta\\)) chosen so that the pulse fraction is \\(\\lesssim 5\\%\\) for the default cap model.
Figure 2.— Solid line: \\(\\chi^{2}\\) (257 DOF) as a function of cold sphere radius. Points: \\(\\chi^{2}\\) values from the polar-cap model fits.
Figure 3.— Broad band spectral fit to RX J1856\\(-\\)3754. Optical/UV data points are drawn from van Kerkwijk & Kulkarni (2001a) and Pons _et al._ (2002). The dotted lines show the unabsorbed hot and cold blackbody components.
Evidently, the _CXO_ data require a quite uniform distribution of the high temperature excess, and suggest that it is induced by exterior heating rather than interior conductivity. This cap size is substantially larger than the \\(\\Delta\\sim 3^{\\circ}(100\\,\\rm ms/P_{\\rm NS})^{1/2}\\) expected for a dipole surface cap, unless the period is very small. Higher magnetic multipoles would generally have even smaller open zone caps. The large, uniformly heated area is puzzling in the context of pulsar surface acceleration models, but might be most easily accommodated in the more modern pictures of a GR-induced potential that is relatively uniform across the polar cap and forms a pair formation front at relatively high altitudes (Harding & Muslimov 1998).
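The Monte Carlo confidence-level procedure described above can be sketched as follows (Python; the Gaussian 5% error model and the flat toy spectrum are illustrative assumptions, only the 257 degrees of freedom come from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_chi2_threshold(model, sigma, cl=0.90, n_trials=20000):
    """Histogram chi^2 of synthetic data sets drawn about a best-fit model
    with the observed (non-Poisson) errors, and return the CL threshold."""
    noise = rng.standard_normal((n_trials, model.size)) * sigma
    chi2 = np.sum((noise / sigma) ** 2, axis=1)
    return np.quantile(chi2, cl)

model = np.ones(257)              # toy spectrum with 257 bins (257 DOF)
sigma = 0.05 * model              # assumed 5% errors per bin
print(mc_chi2_threshold(model, sigma))   # chi^2 threshold at 90% CL
```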
Perhaps a more plausible interpretation invokes a high-altitude acceleration zone, with the inward-directed \\(\\gamma\\)-rays pair converting in the closed zone above the polar cap (Wang _et al._ 1998), which should give a similarly large zone of uniform surface heating.
## 4 Pulse fraction constraints
We have seen that the spectral fits are quite insensitive to the cap viewing angles. They place a firm minimum on the allowed \\(\\rm R_{NS}\\) but allow radii that are implausibly large. However, the observed pulse fraction depends strongly on viewing geometry and, through gravitational focusing, on the value of \\(\\rm M_{NS}/R_{NS}\\). In general, we expect a strong X-ray pulse when the magnetic axis passes close to Earth's line of sight (small \\(|\\alpha-\\zeta|\\)). Of course, for an aligned rotator \\(\\alpha\\approx 0\\) the pulse can be very small, and for \\(\\alpha\\approx\\pi/2\\) there is an appreciable region where the pulse is weak as viewed from Earth. We have examined the _CXO_-band pulse profile of our model grid to find the allowed region of \\((\\alpha,\\zeta)\\) parameter space in which the model pulse fraction is weaker than that observed (\\(\\lesssim 4.5\\%\\)); this is shown in the inset of Figure 4. This exercise is repeated for each \\(\\rm M_{NS},R_{NS}\\); the allowed phase space is larger for small \\(\\rm R_{NS}/M_{NS}\\) as gravitational bending dilutes the observed pulse. By demanding a certain minimum probability that a pulse fraction as low as that observed is seen at Earth, we obtain a _maximum_ acceptable radius for the neutron star. One important additional constraint can be invoked. The non-detection of this neutron star in the radio band (Brazier & Johnston 1999) puts a very strong bound on pulsar emission directed towards Earth. Assuming that this is a normal radio pulsar (consistent with \\(\\dot{E}\\) inferred from the bow shock), we must conclude _a priori_ (independent of the X-ray data) that our line of sight lies outside of the pulsar radio beam and that \\(|\\alpha-\\zeta|\\) is not small. The region excluded depends on the size of the radio beam (\\(\\Theta_{\\rm rad}\\)), which in turn depends on the spin period. A typical estimate is (_e.g._ Rankin 1993) \\[\\Theta_{\\rm rad}=5.8^{\\circ}(\\rm P_{NS}/1\\,s)^{-1/2} \\tag{1}\\] at the radio frequency \\(\\nu=1\\) GHz. With \\(\\rm P_{NS}=0.3\\) s, this gives \\(\\Theta_{\\rm rad}\\sim 10.6^{\\circ}\\). Radio limits on \\(\\rm RX~{}J1856-3754\\) are strong at even lower \\(\\nu\\), where radius-to-frequency mapping produces larger \\(\\Theta_{\\rm rad}\\), and for this nearby object, the radio luminosity constraints are so severe that we are unlikely to intersect even the faint fringe of the radio beam. Both effects argue for \\(\\Theta_{\\rm rad}\\) larger than that above. Accordingly, in the Figure 4 inset we show by diagonal lines the regions near \\(\\alpha=\\zeta\\) excluded by the lack of radio detection for \\(1\\times\\) and \\(2\\times\\) this fiducial beam size. Shorter periods allow an _a priori_ exclusion of even more phase space. The point is that from the lack of radio detection we should have _expected_ a low X-ray pulse fraction. The probability for obtaining pulses as weak as observed is then set by the fraction of the remaining allowed solid angle. These fractions are plotted in Figure 4 with and without the radio prior.
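Equation (1) and the resulting geometric exclusion can be illustrated with a small Monte Carlo (Python; the isotropic priors on both angles are an assumption of this rough sketch, and the full analysis of Figure 4 also folds in the pulse-fraction constraint):

```python
import numpy as np

def theta_rad_deg(P_s, scale=1.0):
    """Radio beam radius at 1 GHz from Eq. (1), optionally rescaled."""
    return scale * 5.8 * P_s ** -0.5

rng = np.random.default_rng(1)
n = 10**6
alpha = np.degrees(np.arccos(rng.uniform(-1, 1, n)))   # magnetic inclination
zeta = np.degrees(np.arccos(rng.uniform(-1, 1, n)))    # viewing angle

for scale in (1.0, 2.0):                               # 1x and 2x fiducial beam
    beam = theta_rad_deg(0.3, scale)                   # ~10.6 or ~21.2 deg
    allowed = np.abs(alpha - zeta) > beam              # sight line misses beam
    print(f"{scale:.0f}x beam ({beam:.1f} deg): allowed fraction = {allowed.mean():.2f}")
```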
The allowed radius range depends on mass, and in Table 1 we give the 90% and 99% CL upper bounds on the neutron star radius for these priors and several neutron star masses.
## 5 Equation of state constraint
We see that the X-ray/optical data, using the spectral and pulse fraction arguments, give a range of allowed stellar radii for each mass. Strictly speaking, the minimum and maximum radius CL have somewhat different interpretations, but it is interesting to place these bounds in the \\(\\rm M_{NS}-R_{NS}\\) plane to compare with the predictions of various EOS. In Figure 5, we show the combined constraints, assuming \\(\\Theta_{\\rm rad}=21.2^{\\circ}\\). The spectral radius lower bounds at 90% and 99% CL approximate curves of constant \\(\\rm M_{NS}/R_{NS}\\). The pulse fraction upper bounds (90%, 95%) rapidly drive one to small radii at low masses. For \\(\\rm M_{NS}\\lesssim 1.3\\rm M_{\\odot}\\) no simultaneous solutions consistent with the 90% bounds are allowed. At \\(\\rm M_{NS}=1.5\\rm M_{\\odot}\\) the allowed range from the fit is quite small, \\(\\rm R_{NS}=13.7\\pm 0.6\\) km. The additional uncertainty in the distance actually dominates the errors (arrowed bar).
Figure 4: Fraction of the sky allowed by pulse fraction constraints as a function of stellar radius. The solid line assumes no priors; the dotted line is for a prior \\(1\\times\\Theta_{\\rm rad}=10.6^{\\circ}\\); and the dashed line for \\(2\\times\\Theta_{\\rm rad}\\). The inset shows the allowed (\\(\\alpha\\),\\(\\zeta\\)) parameter space shaded in light, medium, and dark gray for \\(12,14,16\\) km, respectively, at \\(\\rm M_{NS}=1.4\\rm M_{\\odot}\\). The lines in the inset depict the parameter space excluded by the radio prior.
For comparison, several EOS curves (after Lattimer & Prakash 2001) are plotted. We see that large radius (stiff at nuclear density) EOS are preferred. Formally, the relativistic field theoretical model by Müller & Serot (1996) and the model GS2 by Glendenning & Schaffner-Bielich (1999) are the only modern models allowed (the original PS model of Pandharipande & Smith (1975) is also allowed). We note that the GS models are very sensitive to the K meson potential; for example, GS1 is strongly excluded. Interestingly, no potential or variational method computations agree with the formal overlap. If one includes the distance uncertainty, a few more intermediate radius models are not excluded at the 90% CL. However, even including the distance uncertainties, all quark star models are excluded at the \\(\\sim 95\\%\\) level for M\\({}_{\\rm NS}\\lesssim 1.5\\)M\\({}_{\\odot}\\) and are only barely consistent at the highest allowed masses. Improved _HST_ parallax measurements could boost this exclusion to the \\(3\\sigma\\) level. The model GS2 differs from the MS models in that it includes a kaon condensate in the core. One principal effect of exotic interior condensates is to enhance neutrino cooling after \\(\\sim 100\\,\\)y. This can also be achieved without exotic matter for a proton fraction \\(\\gtrsim 11\\%\\) through the direct URCA process (Lattimer _et al._ 1991). Our inferred T\\({}_{\\rm cold}\\), interpreted as the signature of the cooling from the initial heat of formation, corresponds to \\(L\\approx 3.2\\times 10^{31}\\,\\)erg/s for our best fit radius.
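The quoted luminosity follows from the Stefan-Boltzmann law, \\(L=4\\pi R_{\\rm NS}^{2}\\sigma T_{\\rm cold}^{4}\\); inverting it gives the implied cold-surface temperature. A quick consistency sketch (Python; redshift corrections between local and observed quantities are ignored here, though the full models include them):

```python
import numpy as np

sigma_SB = 5.6704e-5        # erg cm^-2 s^-1 K^-4
k_B_eV = 8.6173e-5          # eV / K

L = 3.2e31                  # erg/s, quoted cooling luminosity
R = 13.7e5                  # cm, best-fit radius at M_NS = 1.5 Msun

T = (L / (4.0 * np.pi * R**2 * sigma_SB)) ** 0.25
print(f"T_cold ~ {T:.3g} K, i.e. kT ~ {k_B_eV * T:.1f} eV")   # ~ 34 eV
```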
Such luminosities are reached in \\(\\sim 5\\times 10^{5}\\) y (the preferred age of Walter & Lattimer 2002) when enhanced neutrino cooling can occur, but are achieved only after \\(\\sim 1.3\\times 10^{6}\\) y for stars with low density (stiff) cores (Tsuruta _et al._ 2002). Thus, depending on the actual stellar age, this cool surface may be seen as weak evidence for an exotic composition and/or significant softening of the EOS at very high densities. The prospects for further tests of the ideas in this paper hinge on the detection of a pulse from RX J1856\\(-\\)3754, along with measurement of the pulse fraction and the period derivative. Given that significant allowed pulse fraction parameter space lies just slightly below the _CXO_ LETG detection threshold, the prospects for a pulse measurement with the recently completed 58 ks _XMM_ observation are quite good. Even if only thermal, the phase resolved spectrum should provide important constraints on the cap temperature, size and orientation. In particular, our thermal model predicts that the pulse fraction should increase by 70% from 0.15 keV to 0.5 keV, where the _XMM_ data should still deliver \\(\\sim 0.8\\) PN camera cps, allowing detection of pulse fractions well under 1%. We are grateful to Boris Gänsicke for collaboration on the atmosphere models used in §2, and to Herman Marshall for sharing an independent LETG response matrix. This work was supported in part by NASA grant SP2-2002X.
## References
* Braje _et al._ (2000) Braje, T. M., Romani, R. W., & Rauch, K. P. 2000, ApJ, 531, 447 * Brazier & Johnston (1999) Brazier, K. T. S. & Johnston, S. 1999, MNRAS, 305, 671 * Burwitz _et al._ (2001) Burwitz, V., Zavlin, V. E., Neuhäuser, R., Predehl, P., Trümper, J., & Brinkman, A. C. 2001, A&A, 379, L35 * Drake _et al._ (2002) Drake, J. J., Marshall, H. L., Dreizler, S., Freeman, P. E., Fruscione, A., Juda, M., Kashyap, V., Nicastro, F., Pease, D. O., Wargelin, B. J., & Werner, K. 2002, ApJ, 572, 996 * Gänsicke _et al._ (2002) Gänsicke, B. T., Braje, T. M., & Romani, R. W. 2002, A&A, 386, 1001 * Glendenning & Schaffner-Bielich (1999) Glendenning, N. K. & Schaffner-Bielich, J. 1999, Phys. Rev. C, 60, 025803 * Greenstein & Hartke (1983) Greenstein, G. & Hartke, G. J. 1983, ApJ, 271, 283 * Harding & Muslimov (1998) Harding, A. K. & Muslimov, A. G. 1998, ApJ, 508, 328 * Kaplan _et al._ (2002) Kaplan, D. L., van Kerkwijk, M. H., & Anderson, J. 2002, ApJ, 571, 447 * Lai & Salpeter (1997) Lai, D. & Salpeter, E. E. 1997, ApJ, 491, 270 * Lattimer _et al._ (1991) Lattimer, J. M., Pethick, C. J., Prakash, M., & Haensel, P. 1991, Phys. Rev. Lett., 66, 2701 * Lattimer & Prakash (2001) Lattimer, J. M. & Prakash, M. 2001, ApJ, 550, 426 * Müller & Serot (1996) Müller, H. & Serot, B. D. 1996, Nucl. Phys. A, 606, 508 * Pandharipande & Smith (1975) Pandharipande, V. R. & Smith, R. A. 1975, Nucl. Phys. A, 237, 507 * Pavlov _et al._ (1995) Pavlov, G. G., Shibanov, Y. A., Zavlin, V. E., & Meyer, R. D. 1995, in The Lives of Neutron Stars, ed. M. A. Alpar, U. Kiziloglu, & J. van Paradijs (Dordrecht: Kluwer), 71-90 * Pavlov _et al._ (1996) Pavlov, G. G., Zavlin, V. E., Trümper, J., & Neuhäuser, R. 1996, ApJ, 472, L33 * Pons _et al._ (2002) Pons, J. A., Walter, F. M., Lattimer, J. M., Prakash, M., Neuhäuser, R., & An, P. 2002, ApJ, 564, 981 * Rajagopal _et al._ (1997) Rajagopal, M., Romani, R. W., & Miller, M. C. 1997, ApJ, 479, 347 * Rankin (1993) Rankin, J. M. 1993, ApJ, 405, 285 * Ransom _et al._ (2002) Ransom, S. M., Gaensler, B. M., & Slane, P. O. 2002, ApJ, 570, L75
* Tsuruta _et al._ (2002) Tsuruta, S., Teter, M. A., Takatsuka, T., Tatsumi, T., & Tamagaki, R. 2002, ApJ, 571, L143 * van Kerkwijk & Kulkarni (2001a) van Kerkwijk, M. H. & Kulkarni, S. R. 2001a, A&A, 378, 986 * van Kerkwijk & Kulkarni (2001b) van Kerkwijk, M. H. & Kulkarni, S. R. 2001b, A&A, 380, 221 * van Kerkwijk (2002) van Kerkwijk, M. H. 2002, in Proc. Jan van Paradijs Memorial Symposium, ed. E. P. J. van den Heuvel, L. Kaper, & E. Rol (San Francisco: ASP) * Walter _et al._ (1996) Walter, F. M., Wolk, S. J., & Neuhäuser, R. 1996, Nature, 379, 233
Figure 5.— Radius constraints for different possible RX J1856\\(-\\)3754 masses. The triangular gray shaded region represents the formal 90% CL overlap from the pulse fraction and spectral constraints. The two-sided arrow represents the systematic range induced by the distance uncertainties. Equation of state curves and labels are drawn from Lattimer & Prakash (2001); see that paper for EOS labels and references.
We have examined the soft X-ray plus optical/UV spectrum of the nearby isolated neutron star RX J1856\\(-\\)3754, comparing with detailed models of a thermally emitting surface. Like previous investigators, we find the spectrum is best fit by a two-temperature blackbody model. In addition, our simulations constrain the allowed viewing geometry from the observed pulse fraction upper limits. These simulations show that RX J1856\\(-\\)3754 is very likely to be a normal young pulsar, with the non-thermal radio beam missing Earth's line of sight. The SED limits on the model parameter space put a strong constraint on the star's \\(M/R\\). At the measured parallax distance, the allowed range for M\\({}_{\\rm NS}=1.5\\)M\\({}_{\\odot}\\) is R\\({}_{\\rm NS}=13.7\\pm 0.6\\) km. Under this interpretation, the EOS is relatively stiff near nuclear density and the 'Quark Star' EOS posited in some previous studies is strongly excluded. The data also constrain the surface \\(T\\) distribution over the polar cap.
stars: neutron, equation of state
Universal Quantum Computation using Exchange Interactions and Teleportation of Single-Qubit Operations
Lian-Ao Wu and Daniel A. Lidar
Chemical Physics Theory Group, University of Toronto, 80 St. George Str., Toronto, Ontario M5S 3H6, Canada
Quantum computers (QCs) hold great promise for inherently faster computation than is possible on their classical counterparts, but so far progress in building a large-scale QC has been slow. An essential requirement is that a QC should be capable of performing "universal quantum computation" (UQC). I.e., it should be capable of computing, to arbitrary accuracy, any computable function, using a spatially local and polynomial set of logic gates. One of the chief obstacles in constructing large scale QCs is the seemingly innocuous, but in reality very daunting set of requirements that must be met for universality, according to the standard circuit model [1]: (1) preparation of a fiducial initial state (_initialization_), (2) a set of single and two-qubit unitary transformations generating the group of all unitary transformations on the Hilbert space of the QC (_computation_), and (3) single-qubit measurements (_read-out_). Since initialization can often be performed through measurements, requirements (1) and (3) do not necessarily imply different experimental procedures and constraints. Until recently it was thought that computation is irreducible to measurements, so that requirement (2), a set of unitary transformations, would appear to be an essential component of UQC. However, unitary transformations are sometimes very challenging to perform. Two important examples are the exceedingly small photon-photon interaction that was thought to preclude linear optics QCs, and the difficult-to-execute single-spin gates in certain solid state QC proposals, such as quantum dots [2; 3] and donor atom nuclear spins in silicon [4; 5]. The problem with single-spin unitary gates is that they impose difficult demands on \\(g\\)-factor engineering of heterostructure materials, and require strong and inhomogeneous magnetic fields or microwave manipulations of spins, which are often slow and may cause device heating. In the case of exchange Hamiltonians, a possible solution was recently proposed in terms of qubits that are encoded into the states of two or more spins, whence the exchange interaction alone is sufficient to construct a set of universal gates [6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19] (the "encoded universality" approach). In the linear optics case, it was shown that photon-photon interactions can be induced indirectly via _gate teleportation_ [20]. This idea has its origins in earlier work on fault-tolerant constructions for quantum gates [21; 22; 23] (generalized in [24]) and stochastic programmable quantum gates [25; 26]. The same work inspired more recent results showing that, in fact, measurements and state preparation _alone_ suffice for UQC [27; 28; 29; 30]. Experimentally, a minimalistic approach to constructing a QC seems appealing. In this sense, retaining only the absolutely essential ingredients needed to construct a universal QC may be an important simplification. Since read-out is necessary, _measurements are inevitable_. Here we propose a minimalistic approach for universal quantum computation that is particularly well suited to the important class of spin-based QC proposals governed by exchange interactions [2; 3; 4; 5; 31; 32], and other proposals governed by _effective_ exchange interactions [33; 34; 35].
In particular, we show that _UQC can be performed using only single- and two-qubit measurements and controlled exchange interactions, via gate teleportation_. In our approach, which offers a new perspective on the requirements for UQC, the need to perform the aforementioned difficult single-spin unitary operations is obviated, and replaced by measurements, which are anyhow necessary. The tradeoff is that the implementation of gates becomes probabilistic (as in all gate-teleportation based approaches), but this probability can be boosted arbitrarily close to 1 exponentially fast in the number of measurements. We begin our discussion with a relatively simple example of the utility of measurement-aided UQC. This example is not in the exchange-interaction category, but both serves to illustrate some of the more complex ideas needed below, and solves a problem of relevance to an important solid-state QC proposal. The proposal we have in mind is that using d-wave grain boundary (dGB) phase qubits [36; 37]. The system Hamiltonian is: \\[H_{S}=H_{X}+H_{Z}+H_{ZZ}, \\tag{1}\\] where \\(H_{X}=\\sum_{i}\\Delta_{i}X_{i}\\) describes phase tunneling, \\(H_{Z}=\\sum_{i}b_{i}Z_{i}\\) is a bias, and \\(H_{ZZ}=\\sum_{i,j}J_{ij}Z_{i}Z_{j}\\) represents Josephson coupling of qubits; \\(X_{i},Y_{i},Z_{i}\\) denote the Pauli matrices \\(\\sigma^{x},\\sigma^{y},\\sigma^{z}\\) acting on the \\(i^{\\rm th}\\) qubit. It turns out that in this system _only one_ of the terms can be on at any given time [36; 37]. Moreover, turning on the bias or Josephson coupling is the only way to control the value of the tunneling matrix element. In the idle state \\(\\Delta_{i}\\) is non-zero and the qubit undergoes coherent tunneling. In the dGB proposal it is important to reduce the constraints on fabrication by removing the possibility of applying bias \\(b_{i}\\) on individual qubits [38]. This bias requires, e.g., the possibility of applying a local magnetic field on each qubit, and is experimentally very challenging to realize. The effective system Hamiltonian that we consider is therefore: \\(H_{S}^{\\prime}=H_{X}+H_{ZZ}\\), with continuous control over \\(J_{ij}\\). In [38] it was shown how UQC can be performed given this Hamiltonian, by encoding a logical qubit into two physical qubits, and using sequences of recoupling pulses. Here we show instead how to implement \\(Z_{i}\\) using measurements, which together with \\(H_{S}^{\\prime}\\) is sufficient for UQC. Suppose we start from an unknown state of qubit 1: \\(\\left|\\psi\\right\\rangle=a\\left|0\\right\\rangle+b\\left|1\\right\\rangle\\). By cooling in the idle state (only \\(H_{X}\\) on) we can prepare an ancilla qubit 2 in the state \\(\\left(\\left|0\\right\\rangle+\\left|1\\right\\rangle\\right)/\\sqrt{2}\\). Then the total state is (up to normalization): \\(a\\left|00\\right\\rangle+b\\left|10\\right\\rangle+a\\left|01\\right\\rangle+b\\left|11\\right\\rangle\\). Letting the Josephson gate \\(e^{-i\\phi Z_{1}Z_{2}/2}\\) act on this state, we obtain \\[a\\left|00\\right\\rangle+e^{i\\phi}b\\left|10\\right\\rangle+e^{i\\phi}a\\left|01\\right\\rangle+b\\left|11\\right\\rangle\\propto e^{-i\\phi Z_{1}/2}\\left|\\psi\\right\\rangle\\left|0\\right\\rangle+e^{i\\phi Z_{1}/2}\\left|\\psi\\right\\rangle\\left|1\\right\\rangle\\] We then measure \\(Z_{2}\\). If we find \\(+1\\) (i.e., \\(\\left|0\\right\\rangle\\), with probability \\(1/2\\)) then the state has collapsed to \\(e^{-i\\phi Z_{1}/2}\\left|\\psi\\right\\rangle\\left|0\\right\\rangle\\), which is the required operation on qubit 1.
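This collapse is easy to verify numerically; a minimal numpy sketch of the branch just described (the amplitudes \\(a,b\\) and the angle \\(\\phi\\) are arbitrary illustrative values):

```python
import numpy as np

a, b, phi = 0.6, 0.8, 0.7                      # illustrative values
psi = np.array([a, b], complex)                # unknown state of qubit 1
plus = np.array([1, 1], complex) / np.sqrt(2)  # ancilla qubit 2, cooled in H_X

state = np.kron(psi, plus)
zz = np.array([1, -1, -1, 1])                  # eigenvalues of Z1 Z2
state = np.exp(-1j * phi / 2 * zz) * state     # Josephson gate e^{-i phi Z1 Z2 / 2}

# measure Z2 and keep the +1 (|0>) outcome
proj = state.reshape(2, 2)[:, 0]
print("P(+1) =", np.vdot(proj, proj).real)     # -> 0.5
out = proj / np.linalg.norm(proj)
target = np.exp(-1j * phi / 2 * np.array([1, -1])) * psi
print(abs(np.vdot(target, out)))               # -> 1.0: out = e^{-i phi Z1/2}|psi>
```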
If we find \\(-1\\) then the state is \\(e^{i\\phi Z_{1}/2}\\left|\\psi\\right\\rangle\\left|1\\right\\rangle\\), which is an erred state. To correct it we apply the pulse \\(e^{i\\phi Z_{1}Z_{2}}\\), which, acting on the ancilla in \\(\\left|1\\right\\rangle\\), takes the erred state to the correct state \\(e^{-i\\phi Z_{1}/2}\\left|\\psi\\right\\rangle\\left|1\\right\\rangle\\). We then reinitialize the ancilla qubit. This method for implementing \\(Z_{i}\\) succeeds with certainty after one measurement, possibly requiring (with probability \\(1/2\\)) one correction step. We now turn to QC proposals based on exchange interactions [2; 3; 4; 5; 31; 32]. In these systems, which are some of the more promising candidates for scalable QC, the qubit-qubit interaction can be written as an axially symmetric exchange interaction of the form: \\[H_{ij}^{\\rm ex}(t)=J_{ij}^{\\perp}(t)(X_{i}X_{j}+Y_{i}Y_{j})+J_{ij}^{z}(t)Z_{i}Z_{j}, \\tag{2}\\] where \\(J_{ij}^{\\alpha}(t)\\) \\((\\alpha=\\perp,z)\\) are controllable coupling constants. The XY (XXZ) model is the case when \\(J_{ij}^{z}=0\\) (\\(\\neq 0\\)). The Heisenberg interaction is the case when \\(J_{ij}^{z}(t)=J_{ij}^{\\perp}(t)\\). See [12] for a classification of various QC models by the type of exchange interaction. In agreement with the QC proposals [2; 3; 4; 5; 31; 32; 33; 34; 35], we assume here that \\(J_{ij}^{\\perp}(t)\\) is completely controllable, and allow that the ratio between \\(J_{ij}^{\\perp}(t)\\) and \\(J_{ij}^{z}(t)\\) may not be controllable. The method we present here works equally well for all three types of exchange interactions, thus unifying all exchange-based proposals under a single universality framework. Since all terms in \\(H_{\\rm ex}(t)\\) commute, it is simple to show that it generates a unitary two-qubit evolution operator of the form \\(U_{ij}(\\varphi^{\\perp},\\varphi^{z})=\\exp[-i\\int^{t}dt^{\\prime}H_{ij}^{\\rm ex}(t^{\\prime})]=\\) \\[\\begin{pmatrix}e^{-i\\varphi^{z}}&0&0&0\\\\ 0&e^{i\\varphi^{z}}\\cos 2\\varphi^{\\perp}&-ie^{i\\varphi^{z}}\\sin 2\\varphi^{\\perp}&0\\\\ 0&-ie^{i\\varphi^{z}}\\sin 2\\varphi^{\\perp}&e^{i\\varphi^{z}}\\cos 2\\varphi^{\\perp}&0\\\\ 0&0&0&e^{-i\\varphi^{z}}\\end{pmatrix} \\tag{3}\\] in the basis \\(\\{\\left|00\\right\\rangle,\\left|01\\right\\rangle,\\left|10\\right\\rangle,\\left|11\\right\\rangle\\}\\) (we use units where \\(\\hbar=1\\)), where \\(\\varphi^{\\alpha}=\\int^{t}dt^{\\prime}J^{\\alpha}(t^{\\prime})\\), and we have suppressed the qubit indices for clarity. In preparation for our main result, we first prove: _Proposition_. The set \\(\\mathcal{G}=\\{U_{ij}(\\varphi^{\\perp},\\varphi^{z}),R_{j\\beta}\\equiv\\exp(i\\frac{\\pi}{4}\\sigma_{j}^{\\beta})\\}\\) \\((\\beta=x,z)\\) is universal for quantum computation. _Proof_: A set of continuous one-qubit unitary gates and any two-body Hamiltonian entangling qubits are universal for quantum computation [39]. The exchange Hamiltonian \\(H_{ij}^{\\rm ex}\\) clearly can generate entanglement, so it suffices to show that we can generate all single-qubit transformations using \\(\\mathcal{G}\\). Two of the Pauli matrices are given simply by \\(\\sigma_{j}^{\\beta}=-iR_{j\\beta}^{2}\\).
Now, let \\(C_{A}^{\\theta}\\circ\\exp(i\\varphi B)\\equiv\\exp(-i\\theta A)\\exp(i\\varphi B)\\exp(+i\\theta A)\\); two useful identities for anticommuting \\(A,B\\) with \\(A^{2}=I\\) (the identity) are [16]: \\[C_{A}^{\\pi/2}\\circ e^{-i\\varphi B}=e^{i\\varphi B},\\quad C_{A}^{\\pi/4}\\circ e^{-i\\varphi B}=e^{\\varphi AB}.\\] Using this, we first generate \\(e^{-i\\varphi X_{1}X_{2}}=U_{12}(\\varphi/2,\\varphi^{z})C_{X_{1}}^{\\pi/2}\\circ U_{12}(\\varphi/2,\\varphi^{z})\\), which takes six elementary steps (where an elementary step is defined as one of the operations \\(U_{ij}(\\varphi^{\\perp},\\varphi^{z}),R_{j\\beta}\\)). Second, as we show below, our gate teleportation procedure can prepare \\(R_{j\\beta}^{\\dagger}\\) just as efficiently as \\(R_{j\\beta}\\) (also note that \\(R_{j\\beta}^{\\dagger}=-(R_{j\\beta})^{3}\\)), so that with two additional steps we have \\(e^{-i\\varphi Y_{1}X_{2}}=C_{Z_{1}}^{-\\pi/4}\\circ e^{-i\\varphi X_{1}X_{2}}\\). Finally, with a total of \\(8+6+8=22\\) elementary steps we have \\(e^{-i\\varphi Z_{1}}=C_{Y_{1}X_{2}}^{\\pi/4}\\circ e^{-i\\varphi X_{1}X_{2}}\\), where \\(\\varphi\\) is arbitrary. Similarly, we can generate \\(e^{-i\\varphi Y_{1}}\\) in 22 steps using \\(C_{X_{1}}^{\\pi/4}\\) instead of \\(C_{Z_{1}}^{-\\pi/4}\\). Using a standard Euler angle construction we can generate arbitrary single-qubit operations by composing \\(e^{-i\\varphi Z_{1}}\\) and \\(e^{-i\\varphi Y_{1}}\\) [1]. It is important to note that optimization of the number of steps given in the proof above may be possible. We now show that the single-qubit gates \\(R_{j\\beta}\\) can be implemented using cooling, weak spin measurements, and evolution under exchange Hamiltonians of the Heisenberg, XY, or XXZ type. Our method is inspired by the gate teleportation idea [20; 21; 22; 23; 24; 25; 26; 27; 28; 29], which we briefly review, along with state teleportation [40], in Fig. 1. We proceed in two cycles. In Cycle (i), consider a spin (our "data qubit") in an unknown state \\(\\left|\\psi\\right\\rangle=a\\left|0\\right\\rangle+b\\left|1\\right\\rangle\\), and two additional ("ancilla") spins, as shown in Fig. 2. Our task is to apply the one-qubit operation \\(R_{\\beta}\\) to the data qubit. As in gate teleportation, we require an entangled pair of ancilla spins. However, it turns out that rather than one of the Bell states we need an entangled state that has a phase of \\(i\\) between its components. To obtain this state, we first turn on the exchange interaction \\(H_{23}^{\\rm ex}\\) between the ancilla spins such that \\(J^{\\perp}>0\\). The eigenvalues (eigenstates) are \\(\\{-2J^{\\perp}-J^{z},2J^{\\perp}-J^{z},J^{z}\\}\\) (\\(\\left|S\\right\\rangle,\\left|T_{0}\\right\\rangle,\\left|00\\right\\rangle,\\left|11\\right\\rangle\\)), where \\(\\left|S\\right\\rangle=\\frac{1}{\\sqrt{2}}(\\left|01\\right\\rangle-\\left|10\\right\\rangle)\\), \\(\\left|T_{0}\\right\\rangle=\\frac{1}{\\sqrt{2}}(\\left|01\\right\\rangle+\\left|10\\right\\rangle)\\) are the singlet and one of the triplet states. Provided \\(J^{\\perp}>-J^{z}\\) [which is the case for all QC proposals of interest, in which either \\(\\text{sign}(J^{\\perp})=\\text{sign}(J^{z})\\), or \\(J^{z}=0\\)] and we cool the system significantly below \\(-2J^{\\perp}-J^{z}\\), the resulting ground state is \\(\\left|S\\right\\rangle\\).
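Both the spectrum just quoted and the closed form of Eq. (3) follow from the \\(4\\times 4\\) exchange Hamiltonian and can be checked directly; a short numpy/scipy sketch (the coupling values and time are arbitrary test numbers):

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

Jp, Jz = 1.0, 0.4    # arbitrary J_perp > 0 and J_z
H = Jp * (np.kron(X, X) + np.kron(Y, Y)) + Jz * np.kron(Z, Z)
print(np.linalg.eigvalsh(H))     # -> [-2Jp-Jz, Jz, Jz, 2Jp-Jz]; singlet lowest

# Eq. (3): U = exp(-i H t) with integrated phases phi_perp = Jp*t, phi_z = Jz*t
t = 0.7
pp, pz = Jp * t, Jz * t
ref = np.diag(np.exp(-1j * pz * np.array([1, -1, -1, 1])))
ref[1:3, 1:3] = np.exp(1j * pz) * np.array(
    [[np.cos(2 * pp), -1j * np.sin(2 * pp)],
     [-1j * np.sin(2 * pp), np.cos(2 * pp)]])
print(np.allclose(expm(-1j * t * H), ref))   # -> True
```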
We then perform a single-spin measurement of the observable \\(\\sigma_{i}^{z}\\) on one or both of the ancillas, which will yield either \\(\\left|01\\right\\rangle\\) or \\(\\left|10\\right\\rangle\\). For definiteness assume the outcome was \\(\\left|01\\right\\rangle\\). We then immediately apply a \\(\\pi/8\\) exchange pulse to the ancilla spins [Fig. 2(a)]: \\(U(\\pi/8,\\varphi_{0}^{z})\\left|01\\right\\rangle=\\frac{e^{i\\varphi^{z}}}{\\sqrt{2}}(\\left|01\\right\\rangle-i\\left|10\\right\\rangle)\\) [as follows from Eq. (3)]. The total state of the three spins then reads (neglecting an overall phase \\(e^{i\\varphi^{z}}\\)): \\[\\left|\\psi\\right\\rangle_{1}U_{23}(\\pi/8,\\varphi_{0}^{z})\\left|01\\right\\rangle_{23}=\\frac{1}{\\sqrt{2}}(a\\left|001\\right\\rangle-ib\\left|110\\right\\rangle)+\\frac{1}{2}r\\left|T_{0}\\right\\rangle_{12}R_{3z}^{\\dagger}\\left|\\psi\\right\\rangle_{3}-\\frac{1}{2}r^{*}\\left|S\\right\\rangle_{12}R_{3z}\\left|\\psi\\right\\rangle_{3} \\tag{4}\\] where \\(r=\\exp(-i\\pi/4)\\) and the subscripts denote the spin index. At this point Alice makes a weak measurement of her spins [Fig. 2(b)]. Let \\(\\overrightarrow{S}_{ij}=\\frac{1}{2}(\\vec{\\sigma}_{i}+\\vec{\\sigma}_{j})\\) be the total spin of qubits \\(i,j\\); Alice measures \\(\\vec{S}_{12}^{2}\\), with eigenvalues \\(S(S+1)\\). Since only for the singlet state \\(\\left|S\\right\\rangle_{12}\\) do we have \\(S(S+1)=0\\), it follows that if the measurement yields \\(0\\), then the state has collapsed to \\(\\left|S\\right\\rangle_{12}R_{3z}\\left|\\psi\\right\\rangle_{3}\\). In this case, which occurs with probability \\(1/4\\), Bob has \\(R_{3z}\\left|\\psi\\right\\rangle_{3}\\), and we are done [Fig. 2(c), bottom]. If, on the other hand, Alice finds \\(S=1\\), then the normalized post-measurement state is \\[\\frac{1}{\\sqrt{3}}[r\\left|T_{0}\\right\\rangle_{12}R_{3z}^{\\dagger}\\left|\\psi\\right\\rangle_{3}+\\sqrt{2}(a\\left|001\\right\\rangle-ib\\left|110\\right\\rangle)]. \\tag{5}\\] Similar to the gate teleportation protocol [27; 28; 29] shown in Fig. 1(b), Alice and Bob now need to engage in a series of correction steps. In the next step Alice measures \\(S_{z}^{2}=\\frac{1}{4}(\\sigma_{1}^{z}+\\sigma_{2}^{z})^{2}=\\frac{1}{2}(I+\\sigma_{1}^{z}\\sigma_{2}^{z})\\) [Fig. 2(c), top]. Measurement of the observable \\(\\sigma_{1}^{z}\\sigma_{2}^{z}\\) is discussed in [1]. If Alice finds \\(S_{z}^{2}=0\\) then with probability \\(1/3\\) the state collapses to \\(\\left|T_{0}\\right\\rangle_{12}R_{3z}^{\\dagger}\\left|\\psi\\right\\rangle_{3}\\) and Bob ends up with the opposite of the desired operation, namely \\(R_{z}^{\\dagger}\\left|\\psi\\right\\rangle\\) [Fig. 2(d), bottom]. We describe the required corrective action below, in Cycle (ii). If Alice finds \\(S_{z}^{2}=1\\), then the state is: \\[a\\left|001\\right\\rangle-ib\\left|110\\right\\rangle=\\frac{1}{\\sqrt{2}}(r^{*}R_{1z}^{\\dagger}\\left|\\psi\\right\\rangle_{1}\\left|S\\right\\rangle_{23}+rR_{1z}\\left|\\psi\\right\\rangle_{1}\\left|T_{0}\\right\\rangle_{23}).\\] Bob now measures \\(\\vec{S}_{23}^{2}\\). If he finds \\(S=0\\) then the state has collapsed to \\(R_{1z}^{\\dagger}\\left|\\psi\\right\\rangle_{1}\\left|S\\right\\rangle_{23}\\), while if \\(S=1\\) then the outcome is \\(R_{1z}\\left|\\psi\\right\\rangle_{1}\\left|T_{0}\\right\\rangle_{23}\\), equiprobably. In the latter case Alice ends up with the desired operation [Fig. 2(e)].
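The first branch of this protocol is easily verified numerically; a numpy sketch (again with arbitrary illustrative amplitudes for the unknown state):

```python
import numpy as np

a, b = 0.6, 0.8
psi = np.array([a, b], complex)                 # data qubit, spin 1
anc = np.zeros(4, complex)
anc[1], anc[2] = 1, -1j                         # (|01> - i|10>)/sqrt(2), spins 2,3
anc /= np.sqrt(2)
state = np.kron(psi, anc)

S = np.zeros(4, complex)
S[1], S[2] = 1 / np.sqrt(2), -1 / np.sqrt(2)    # singlet of spins 1,2
P_S = np.kron(np.outer(S, S.conj()), np.eye(2)) # projector onto S(S+1) = 0

collapsed = P_S @ state
print("P(S=0) =", np.vdot(collapsed, collapsed).real)   # -> 0.25

chi = S.conj() @ collapsed.reshape(4, 2)        # Bob's spin-3 state
chi /= np.linalg.norm(chi)
Rz = np.diag(np.exp(1j * np.pi / 4 * np.array([1, -1])))
print(abs(np.vdot(Rz @ psi, chi)))              # -> 1.0 (up to a global phase)
```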
Figure 1: Teleportation [40] is a method for transmitting an unknown quantum state \\(\\left|\\psi\\right\\rangle\\) with the help of prior entanglement and classical communication. A state teleportation circuit is shown in (a), where time proceeds from left to right, and \\(<\\) denotes the entangled (Bell) state \\(\\frac{1}{\\sqrt{2}}(\\left|00\\right\\rangle_{23}+\\left|11\\right\\rangle_{23})\\). Alice has \\(\\left|\\psi\\right\\rangle_{1}\\) and qubit \\(2\\) from the Bell state. Bob has qubit \\(3\\) from the Bell state. Alice measures \\(\\left|\\psi\\right\\rangle_{1}\\) and qubit \\(2\\) in the Bell basis, obtaining one of \\(4\\) possible outcomes labeled \\(\\alpha\\). She communicates her result to Bob (double wires), who applies \\(\\sigma^{\\alpha}\\) to his qubit, where \\(\\sigma^{\\alpha}\\) are the four Pauli matrices \\(I,\\sigma^{x},\\sigma^{y},\\sigma^{z}\\). Bob then has \\(\\left|\\psi\\right\\rangle_{3}\\). A gate teleportation circuit is shown in (b), following [27]. To teleport the single-qubit operation \\(U\\), the state \\(\\left|U_{\\beta}\\right\\rangle\\equiv(I\\otimes U\\sigma^{\\beta})\\frac{1}{\\sqrt{2}}(\\left|00\\right\\rangle+\\left|11\\right\\rangle)\\) is prepared offline, by first preparing the state \\(\\left|00\\right\\rangle\\) and then measuring in the orthonormal basis of states \\(\\left|U_{\\beta}\\right\\rangle\\). Alice and Bob now repeat the state teleportation protocol. With probability \\(1/4\\) Alice finds \\(\\alpha=\\beta\\), in which case Bob now has \\(U|\\psi\\rangle_{3}\\). With probability \\(3/4\\) she finds \\(\\alpha\\neq\\beta\\) and Bob needs to apply a correction \\(M_{\\alpha\\beta}=U\\sigma^{\\beta}\\sigma^{\\alpha}U^{\\dagger}\\) in order to end up with \\(U|\\psi\\rangle_{3}\\). This is done by teleporting \\(M_{\\alpha\\beta}\\), i.e., the procedure is repeated recursively. It succeeds on average after \\(4\\) trials.
Figure 2: Gate teleportation of single-qubit operation \\(R_{z}\\). Initially Alice has \\(\\left|\\psi\\right\\rangle_{1}\\) and \\(\\left|0\\right\\rangle\\). Bob has \\(\\left|1\\right\\rangle\\). Time proceeds from left to right. Starting from the \\(3\\)-qubit state \\(\\left|\\psi\\right\\rangle\\left|01\\right\\rangle\\), the task is to obtain \\(R_{z}\\left|\\psi\\right\\rangle\\). The protocol shown succeeds with probability \\(1/2\\). When it fails the operation \\(R_{z}^{\\dagger}\\) is applied instead. Fractions give the probability of a branch; \\(0\\) and \\(1\\) in a gray box are possible measurement outcomes of the observable in the preceding gray box. See text for full details.
In a similar manner one can generate \\(R_{x}\\) or \\(R_{x}^{\\dagger}\\) acting on an arbitrary qubit state \\(\\left|\\psi\\right\\rangle\\). Let \\(\\left|\\pm\\right\\rangle\\) denote the \\(\\pm 1\\) eigenstates of the Pauli operator \\(\\sigma^{x}\\). As in the \\(R_{z}\\) case above, first prepare a singlet state \\(\\left|S\\right\\rangle=\\frac{1}{\\sqrt{2}}(\\left|-+\\right\\rangle-\\left|+-\\right\\rangle)\\) on the ancilla spins \\(2,3\\) by cooling. Then perform a single-spin measurement of the observable \\(\\sigma_{j}^{x}\\) on each ancilla, which will yield either \\(\\left|+-\\right\\rangle\\) or \\(\\left|-+\\right\\rangle\\). For definiteness assume the outcome was \\(\\left|+-\\right\\rangle_{23}\\).
Observing that in the \\(\\{\\left|+-\\right\\rangle,\\left|-+\\right\\rangle\\}\\) subspace, \\(H_{ij}^{\\mathrm{ex}}=-J_{ij}^{\\perp}I+(J_{ij}^{\\perp}+J_{ij}^{z})\\tilde{X}\\), where \\(\\tilde{X}:\\left|+-\\right\\rangle\\leftrightarrow\\left|-+\\right\\rangle\\), it follows that \\(U(\\pi/4-\\varphi_{0}^{z},\\varphi_{0}^{z})\\left|+-\\right\\rangle=\\frac{e^{-i\\varphi^{\\perp}}}{\\sqrt{2}}(\\left|+-\\right\\rangle-i\\left|-+\\right\\rangle)\\), so that we have a means of generating an entangled initial state. The unknown state \\(\\left|\\psi\\right\\rangle_{1}\\) of the data qubit can be expressed as \\(\\left|\\psi\\right\\rangle=a_{x}\\left|+\\right\\rangle+b_{x}\\left|-\\right\\rangle\\), where \\(a_{x}=(a+b)/\\sqrt{2}\\) and \\(b_{x}=(a-b)/\\sqrt{2}\\). Then (neglecting the overall phase \\(e^{-i\\varphi^{\\perp}}\\)): \\[\\left|\\psi\\right\\rangle_{1}U_{23}(\\pi/4-\\varphi_{0}^{z},\\varphi_{0}^{z})\\left|+-\\right\\rangle_{23}=\\frac{1}{2}r^{*}\\left|S\\right\\rangle_{12}R_{3x}\\left|\\psi\\right\\rangle_{3}+\\frac{1}{2}r\\left|T_{0}^{x}\\right\\rangle_{12}R_{3x}^{\\dagger}\\left|\\psi\\right\\rangle_{3}+\\frac{1}{\\sqrt{2}}(a_{x}\\left|++-\\right\\rangle-ib_{x}\\left|--+\\right\\rangle)\\] where \\(\\left|T_{0}^{x}\\right\\rangle=\\frac{1}{\\sqrt{2}}(\\left|+-\\right\\rangle+\\left|-+\\right\\rangle)\\) is a triplet state, a zero eigenstate of the observable \\(\\sigma_{1}^{x}+\\sigma_{2}^{x}\\). The gate teleportation procedure is now repeated to yield \\(R_{x}\\) or \\(R_{x}^{\\dagger}\\). First, Alice measures the total spin \\(\\vec{S}_{12}^{2}\\). If she finds \\(S=0\\) (with probability \\(1/4\\)) Bob has spin \\(3\\) in the desired state \\(R_{3x}\\left|\\psi\\right\\rangle_{3}\\). If she finds \\(S=1\\) then she proceeds to measure the squared \\(x\\) component \\(S_{x}^{2}=\\frac{1}{4}(\\sigma_{1}^{x}+\\sigma_{2}^{x})^{2}\\), yielding, provided she finds \\(S_{x}^{2}=0\\), the state \\(\\left|T_{0}^{x}\\right\\rangle_{12}R_{3x}^{\\dagger}\\left|\\psi\\right\\rangle_{3}\\) with probability \\(1/3\\). If, on the other hand, she finds \\(S_{x}^{2}=1\\), i.e., the state is \\(a_{x}\\left|++-\\right\\rangle-ib_{x}\\left|--+\\right\\rangle\\), then by letting Bob measure \\(\\vec{S}_{23}^{2}\\), the states \\(R_{1x}^{\\dagger}\\left|\\psi\\right\\rangle_{1}\\left|S\\right\\rangle_{23}\\) or \\(R_{1x}\\left|\\psi\\right\\rangle_{1}\\left|T_{0}^{x}\\right\\rangle_{23}\\) are obtained, with equal probabilities. Fig. 2 summarizes the protocol we have described thus far. The overall effect is to transform the input state \\(\\left|\\psi\\right\\rangle\\) to either the output state \\(R_{\\beta}\\left|\\psi\\right\\rangle\\) or \\(R_{\\beta}^{\\dagger}\\left|\\psi\\right\\rangle\\), equiprobably. We have now arrived at Cycle (ii), in which we must fix the erred state \\(R_{j\\beta}^{\\dagger}\\left|\\psi\\right\\rangle_{j}\\) (\\(j=1\\) or \\(3\\)). To do so we essentially repeat the procedure shown in Fig. 2. We explicitly discuss one example; all other cases are similar. Suppose that we obtain the erred state \\(R_{1z}^{\\dagger}\\left|\\psi\\right\\rangle_{1}\\left|S\\right\\rangle_{23}\\) [Fig. 2(e)].
It can be rewritten as \\[rR_{1z}^{\\dagger}\\left|\\psi\\right\\rangle_{1}\\left|S\\right\\rangle_{23}=-\\frac{i}{\\sqrt{2}}(a\\left|001\\right\\rangle-ib\\left|110\\right\\rangle)-\\frac{1}{2}r\\left|S\\right\\rangle_{12}R_{3z}^{\\dagger}\\left|\\psi\\right\\rangle_{3}+\\frac{1}{2}r^{*}\\left|T_{0}\\right\\rangle_{12}R_{3z}\\left|\\psi\\right\\rangle_{3},\\] which up to unimportant phases is identical to Eq. (4), except that the positions of \\(R_{3z}^{\\dagger}\\) and \\(R_{3z}\\) have been interchanged. Correspondingly flipping the decision pathway in Fig. 2 will therefore lead to the correct action \\(R_{\\beta}\\left|\\psi\\right\\rangle\\) with probability \\(1/2\\), while the overall probability of obtaining the faulty outcome \\(R_{\\beta}^{\\dagger}\\left|\\psi\\right\\rangle\\) after the second cycle of measurements is \\(1/4\\). Clearly, after \\(n\\) measurement cycles as shown in Fig. 2, the probability for the correct outcome is \\(1-2^{-n}\\). The expected number of measurements per cycle is \\(1\\cdot\\frac{1}{4}+3\\cdot\\frac{3}{4}\\cdot\\frac{2}{3}\\cdot\\frac{1}{2}=1\\), and the expected number of measurement cycles needed is \\(\\sum_{n=1}^{\\infty}n2^{-n}=2\\). We note that in the case of the erred state \\(R_{jz}^{\\dagger}\\left|\\psi\\right\\rangle_{j}\\) (\\(j=1\\) or \\(3\\)) there is an alternative that is potentially simpler than repeating the measurement scheme of Fig. 2. Provided the exchange Hamiltonian is of the XY type, or of the XXZ type with a tunable \\(J^{z}\\) exchange parameter, one can simply apply the correction operator \\(U_{j2}(\\frac{\\pi}{2},0)=Z_{j}Z_{2}\\) to \\(R_{jz}^{\\dagger}\\left|\\psi\\right\\rangle_{j}\\), yielding \\(R_{jz}\\left|\\psi\\right\\rangle_{j}\\) as required. Finally, we note that Nielsen [27] has discussed the conditions for making a gate teleportation procedure of the type we have proposed here fault tolerant. To conclude, we have proposed a gate-teleportation method for universal quantum computation that is uniformly applicable to Heisenberg, XY and XXZ-type exchange interaction-based quantum computer (QC) proposals. Such exchange interactions characterize almost all solid-state QC proposals, as well as several quantum optics based proposals [12]. In a number of these QC proposals, e.g., quantum dots [2], exchange interactions are significantly easier to control than single-qubit operations [8; 12]. Therefore it is advantageous to replace, where possible, single-qubit operations by measurements. Moreover, spin measurements are necessary for state read-out, both at the end of a computation and at intermediate stages during an error-correction procedure, and often play an important role in initial-state preparation. Our method combines measurements of single- and two-spin observables, and a tunable exchange interaction. In a similar spirit we have shown how to replace with measurements certain difficult single-qubit operations in a QC proposal involving superconducting phase qubits. We hope that the flexibility offered by this approach will provide a useful alternative route towards the realization of universal quantum computation. _Acknowledgments.--_ The present study was sponsored by the DARPA-QuIST program (managed by AFOSR under agreement No. F49620-01-1-0468) and by DWave Systems, Inc. ## References * (1) M.A. Nielsen and I.L. Chuang, _Quantum Computation and Quantum Information_ (Cambridge University Press, Cambridge, UK, 2000). * (2) D. Loss, D.P. DiVincenzo, _Phys. Rev. A_**57**, 120 (1998). * (3) J. Levy, _Phys. Rev.
A_**64**, 052306 (2001). * (4) B.E. Kane, _Nature_**393**, 133 (1998). * (5) D. Mozyrsky, V. Privman, M.L. Glasser, _Phys. Rev. Lett._**86**, 5112 (2001). * (6) D. Bacon, J. Kempe, D.A. Lidar, K.B. Whaley, _Phys. Rev. Lett._**85**, 1758 (2000). * (7) J. Kempe, D. Bacon, D.A. Lidar, K.B. Whaley, _Phys. Rev. A_**63**, 042307 (2001). * (8) D.P. DiVincenzo, D. Bacon, J. Kempe, G. Burkard, K.B. Whaley, _Nature_**408**, 339 (2000). * (9) D. Bacon, J. Kempe, D.P. DiVincenzo, D.A. Lidar, K.B. Whaley, _Proceedings of the 1st International Conference on Experimental Implementations of Quantum Computation, Sydney, Australia_, R. Clark, ed. (Rinton, Princeton, NJ, 2001), p. 257. * (10) J. Levy, Eprint quant-ph/0101057. * (11) S.C. Benjamin, _Phys. Rev. A_**64**, 054303 (2001). * (12) D.A. Lidar, L.-A. Wu, _Phys. Rev. Lett._**88**, 017905 (2002). * (13) L.-A. Wu, D.A. Lidar, _Phys. Rev. A_**65**, 042318 (2002). * (14) J. Kempe, D. Bacon, D.P. DiVincenzo, K.B. Whaley, _Quant. Inf. Comp._**1**, 33 (2001). * (15) J. Kempe, K.B. Whaley, _Phys. Rev. A_**65**, 052330 (2001). * (16) L.-A. Wu, D.A. Lidar, _J. Math. Phys._, in press. Eprint quant-ph/0109078. * (17) L.-A. Wu, D.A. Lidar, Eprint quant-ph/0202135. * (18) J. Vala, K.B. Whaley, Eprint quant-ph/0204016. * (19) A.J. Skinner, M.E. Davenport, B.E. Kane, Eprint quant-ph/0206159. * (20) E. Knill, R. Laflamme, G.J. Milburn, _Nature_**409**, 46 (2001). * (21) P.W. Shor, _Proceedings of the 37th Symposium on Foundations of Computing_ (IEEE Computer Society Press, Los Alamitos, CA, 1996), p. 56. * (22) J. Preskill, _Proc. Roy. Soc. London Ser. A_**454**, 385 (1998). * (23) D. Gottesman, I.L. Chuang, _Nature_**402**, 390 (1999). * (24) X. Zhou, D.W. Leung, I.L. Chuang, _Phys. Rev. A_**62**, 052316 (2000). * (25) M.A. Nielsen, I.L. Chuang, _Phys. Rev. Lett._**79**, 321 (1997). * (26) G. Vidal, L. Masanes, J.I. Cirac, _Phys. Rev. Lett._**88**, 047905 (2002). * (27) M.A. Nielsen, Eprint quant-ph/0108020. * (28) S.A. Fenner, Y. Zhang, Eprint quant-ph/0111077. * (29) D.W. Leung, Eprint quant-ph/0111122. * (30) R. Raussendorf, H.J. Briegel, _Phys. Rev. Lett._**86**, 5188 (2001). * (31) R. Vrijen _et al._, _Phys. Rev. A_**62**, 012306 (2000). * (32) A. Imamoglu _et al._, _Phys. Rev. Lett._**83**, 4204 (1999). * (33) J.I. Cirac, P. Zoller, _Phys. Rev. Lett._**74**, 4091 (1995). * (34) P.M. Platzman, M.I. Dykman, _Science_**284**, 1967 (1999). * (35) K.R. Brown, D.A. Lidar, K.B. Whaley, _Phys. Rev. A_**65**, 012307 (2002). * (36) A.M. Zagoskin, Eprint cond-mat/9903170. * (37) A. Blais, A.M. Zagoskin, _Phys. Rev. A_**61**, 042308 (2000). * (38) D.A. Lidar, L.-A. Wu, A. Blais, _Quant. Inf. Proc._, in press. Eprint cond-mat/0204153. * (39) J.L. Dodd, M.A. Nielsen, M.J. Bremner, R.T. Thew, Eprint quant-ph/0106064. * (40) C.H. Bennett _et al._, _Phys. Rev. Lett._**70**, 1895 (1993).
We show how to construct a universal set of quantum logic gates using control over exchange interactions and single- and two-spin measurements only. Single-spin unitary operations are teleported instead of being executed directly, thus eliminating a major difficulty in the construction of several of the most promising proposals for solid-state quantum computation, such as spin-coupled quantum dots, donor-atom nuclear spins in silicon, and electrons on helium. Contrary to previous proposals dealing with this difficulty, our scheme requires no encoding redundancy. We also discuss an application to superconducting phase qubits.
# On the Equation of State for Scalar Field
Alexander S. Silbergleit
_Gravity Probe B, W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305-4085, USA; e-mail: [email protected]_
# Introduction
Quintessence is a dynamic energy with negative pressure-to-density ratio [1]. The idea of quintessence provides new 'degrees of freedom' in cosmology [1, 2], extends the variety of modern field models to include extreme forms of energy [1, 3], and may stimulate a better understanding of the fundamental problem of the interplay between gravity and field theories [4, 5]. Quintessence is also shown to give rise to spacetimes with new interesting properties [6, 7]. However, one needs to find out what kind of physical substance obeys the quintessence EOS, that is, to model quintessence in some way. It was common understanding from the very beginning that a scalar field could act as quintessence; nevertheless, this should be clearly demonstrated. The EOS for the scalar field has been intensively studied in [8] for the so-called tracking cosmological solutions introduced in [9], and some classes of potentials allowing for the field EOS were described. In this letter we examine this question for arbitrary cosmological solutions. We first give an explicit description of all potentials allowing for solutions with a scalar field satisfying the linear EOS with constant parameter \\(w_{f}\\). Then, by way of generalization, we derive the conditions on the potential which provide the EOS whose parameter varies slowly with the field, \\(w_{f}=w_{f}(\\varphi)\\). **2. Exact solutions with scalar field satisfying linear EOS** We consider a Friedmann universe described by the Friedmann-Robertson-Walker metric, \\[ds^{2}=-dt^{2}+a^{2}(t)\\left(\\frac{dr^{2}}{1-kr^{2}}+r^{2}d\\Omega^{2}\\right)\\;, \\tag{1}\\] where \\(k=-1,0\\), or \\(1\\), according to whether the universe is open, flat, or closed (\\(G=c=1\\)). In the presence of matter of density \\(\\rho\\) and pressure \\(p\\), and a minimally coupled scalar field \\(\\varphi\\) with the potential \\(V(\\varphi)\\geq 0\\), the evolution equations can be written in the form: \\[3\\,\\frac{\\dot{a}^{2}}{a^{2}}=8\\pi\\rho+8\\pi\\rho_{f}-3\\,\\frac{k}{a^{2}}\\;; \\tag{2}\\] \\[\\dot{\\rho}=-\\,3\\,\\frac{\\dot{a}}{a}\\,(\\rho+p)\\;; \\tag{3}\\] \\[\\dot{\\rho_{f}}=-\\,3\\,\\frac{\\dot{a}}{a}\\,(\\rho_{f}+p_{f})\\;. \\tag{4}\\] The proper definitions of the scalar field energy density, \\(\\rho_{f}\\), and pressure, \\(p_{f}\\), are: \\[8\\pi\\rho_{f}\\equiv\\dot{\\varphi}^{2}+V,\\qquad 8\\pi p_{f}\\equiv\\dot{\\varphi}^{2}-V\\;. \\tag{5}\\] The first of them is implied by equation (2); the second one is validated by the very equation (4), written as the field energy conservation equation, identical in its form with the conservation law (3) for matter. Also, the expression for the acceleration, following from eqs. (2)-(3) and (5), reads: \\[3\\,\\ddot{a}/a=-4\\pi\\,(\\rho+3p)+V-2\\dot{\\varphi}^{2}\\equiv-4\\pi\\,[(\\rho+3p)+(\\rho_{f}+3p_{f})]\\;; \\tag{6}\\] the density and pressure of both matter and field enter here uniformly. We assume the linear equation of state for the matter, \\(p=w\\rho,\\quad w=\\mbox{const}\\). Our main interest is concerned with the 'normal' matter with \\(w\\geq 0\\); however, all the following results hold for any \\(w\\geq-1\\). Using the EOS, equation (3) immediately integrates to express the density via the scale factor (\\(C>0\\) is an arbitrary constant): \\[8\\pi\\rho=C/a^{3(1+w)}\\;. \\tag{7}\\]
Global dynamics of the cosmological expansion governed by the system (2)-(4) was described in [10]. In this paper we are looking for special solutions subject to an additional constraint, namely, a linear EOS for the scalar field, \\(p_{f}=w_{f}\\rho_{f}\\). First, we assume \\(w_{f}\\) constant, and look for potentials which would allow solutions satisfying such an EOS. Introducing expressions (5) converts the EOS into an equivalent relation, \\[\\dot{\\varphi}^{2}=\\frac{1+w_{f}}{1-w_{f}}\\,V(\\varphi)\\;, \\tag{8}\\] showing that the acceptable range of the EOS parameter is \\(|w_{f}|<1\\). Evidently, the EOS of the Zel'dovich superstiff fluid, \\(w_{f}=1\\), holds only for the free field, \\(V(\\varphi)\\equiv 0\\); the vacuum EOS (\\(w_{f}=-1\\)) is only valid for a constant field \\(\\varphi_{c}\\) such that, by (4), \\(V^{{}^{\\prime}}(\\varphi_{c})=0\\). This is the classical case of a universe with the cosmological constant, studied recently in detail in [11]. For any given \\(V(\\varphi)\\) and \\(w_{f}\\), equation (8) completely specifies the time evolution of the scalar field. On the other hand, the EOS of the scalar field allows for an immediate integration of the field energy conservation equation (4), in exactly the same fashion as with the equation (3) for the matter, so that \\[8\\pi\\rho_{f}=C_{f}/a^{3(1+w_{f})}\\;; \\tag{9}\\] here \\(C_{f}>0\\) is another arbitrary constant of motion. Combining this and (8), one relates the potential (as a function of time for the solution under investigation) to the scale factor, \\[V=\\left(1-w_{f}\\right)C_{f}/2a^{3(1+w_{f})}\\;. \\tag{10}\\] At this point, only equation (2) of the whole system (2)-(4) remains to be satisfied. By (7) and (9), it transforms into a first order equation for the scale factor (the chosen positive sign corresponds to expansion): \\[\\dot{a}=\\left(C/3a^{1+3w}+C_{f}/3a^{1+3w_{f}}-k\\right)^{1/2}\\;. \\tag{11}\\] This, in fact, completes the solution: integration of equation (11) gives the scale factor as a function of time; next, relation (10) provides the potential, also as a function of time. The scalar field \\(\\varphi(t)\\) is then determined by integrating a known function of time: \\[\\dot{\\varphi}(t)=\\pm\\sqrt{\\frac{1+w_{f}}{1-w_{f}}\\,V}=\\pm\\sqrt{\\frac{\\left(1+w_{f}\\right)C_{f}}{2\\,[a(t)]^{3(1+w_{f})}}}\\;. \\tag{12}\\] Formula (12) shows that \\(\\varphi=\\varphi(t)\\) is a monotonic function of time, so that the inverse function \\(t=t(\\varphi)\\) is well defined. Hence, the potential can be finally determined as a function of the field, \\(V(\\varphi)=\\left(1-w_{f}\\right)C_{f}/2\\,[a(t(\\varphi))]^{3(1+w_{f})}\\), according to the expression (10). Using the EOS for both matter and scalar field, and then (7) and (9), the acceleration equation (6) for the found solution becomes \\[3\\,\\ddot{a}/a=-(1/2)\\left[(1+3w)C/a^{3(1+w)}+(1+3w_{f})C_{f}/a^{3(1+w_{f})}\\right]\\;. \\tag{13}\\] Even if the matter is 'normal', \\(w\\geq 0\\), but the field acts as quintessence with \\(-1<w_{f}<-1/3\\), the expansion clearly accelerates, at least at large enough values of \\(a\\). Also, if \\(w_{f}\\approx 0\\), then the scalar field can play the role of dark matter in the universe (whether the scalar field can clump like 'normal' matter, as the dark matter seemingly does, requires a separate investigation). The found solution contains, along with \\(C\\), \\(C_{f}\\), two more arbitrary constants coming from the integration of (11) and (12).
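Equations (11)-(12) are straightforward to integrate numerically; a minimal sketch (Python/scipy; the constants of motion and the EOS parameters are arbitrary illustrative choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

C, Cf, k = 1.0, 1.0, 0        # arbitrary constants of motion; flat universe
w, wf = 0.0, -2.0 / 3.0       # dust plus a quintessence-like field

def rhs(t, y):
    a = y[0]
    adot = np.sqrt(C / (3 * a**(1 + 3*w)) + Cf / (3 * a**(1 + 3*wf)) - k)  # Eq. (11)
    phidot = np.sqrt((1 + wf) * Cf / (2 * a**(3 * (1 + wf))))              # Eq. (12), + sign
    return [adot, phidot]

sol = solve_ivp(rhs, (1e-6, 20.0), [1e-3, 0.0], rtol=1e-8, dense_output=True)
a, phi = sol.y
V = (1 - wf) * Cf / (2 * a**(3 * (1 + wf)))   # Eq. (10): potential along the solution
```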
Neither of these is significant, since they just shift the origins of time and of the scalar field. Thus we have demonstrated that, given any \\(|w_{f}|<1\\) (and \\(w>-1\\)), a two-parameter family of potentials \\(V=V(\\varphi,C,C_{f})\\) and the corresponding cosmological solution exist, for which the EOS of the scalar field holds throughout the whole evolution. If \\(-1<w_{f}<0\\), then the scalar field acts exactly as quintessence, and, consequently, for \\(-1<w_{f}<-1/3\\) the expansion of the universe accelerates, at least at its late stages.
## 3. Explicit description of the potential
The above derivation lacks a direct formula relating the potential to the scalar field, \\(V=V(\\varphi)\\). The formula is not difficult to establish: one just writes the evolution equation (2) not in terms of the scale factor, as in (11), but rather in terms of the potential. Indeed, from (10), one has \\(a=[(1-w_{f})\\,C_{f}/2V]^{1/3(1+w_{f})}\\); differentiating this in time using (8), one obtains also a formula for \\(\\dot{a}/a\\) through \\(V\\) and \\(dV/d\\varphi\\). Replacing all the terms in the evolution equation (11) with their ready expressions via \\(V\\) results in the first order differential equation for \\(V(\\varphi)\\): \\[Q(V,w_{f})\\,dV/\\,V=\\mp\\sqrt{6\\,(1+w_{f})}\\,d\\varphi\\;, \\tag{14}\\] where \\[Q(V,w_{f})=\\left[1+A\\,V^{\\frac{w-w_{f}}{1+w_{f}}}\\,-k\\,B\\,V^{-\\frac{1+3w_{f}}{3(1+w_{f})}}\\right]^{-1/2}, \\tag{15}\\] \\[A=(C/C_{f})\\,\\left[(1-w_{f})C_{f}/2\\right]^{\\frac{w_{f}-w}{1+w_{f}}},\\quad B=(3/C_{f})\\,\\left[(1-w_{f})C_{f}/2\\right]^{\\frac{1+3w_{f}}{3(1+w_{f})}}\\;, \\tag{16}\\] and the signs correspond to the signs in (12). After a straightforward integration of (14) (which leads, in many cases, to an elementary result, see the next section), we arrive at the desired expression for the potential as a function of the scalar field, \\(V=V(\\varphi)\\). The dynamics of the field, i.e., the function \\(\\varphi=\\varphi(t)\\), is then determined by integrating the first expression in (12), and the scale factor evolution is found from its expression through \\(V(\\varphi(t))\\) above. This expression shows that always \\(V=\\infty\\) at the Big Bang (\\(a=0\\)), and \\(V\\to 0\\) when \\(a\\to\\infty\\); it also demonstrates that the potential is a monotonic function of the scalar field. Moreover, equation (14) allows us to specify the behavior of the potential in both limits; the most interesting are the late stages of the cosmological expansion. If the EOS parameter satisfies \\[w_{f}\\leq\\min\\{w,\\,-1/3\\}\\;, \\tag{17}\\] then the potential drops exponentially, \\(V(\\varphi)\\sim\\exp(\\mp\\alpha\\varphi)\\). The constant \\(\\alpha>0\\) equals \\(\\sqrt{6(1+w_{f})}\\) when no equality holds in (17), and is expressed through all the parameters otherwise. So, under the condition (17) the found solutions implement the runaway scenario [5], with \\(\\varphi\\to\\pm\\infty,\\quad V(\\varphi)\\to 0\\) at large times. Remarkably, all this happens exactly in the case when the scalar field behaves as a 'strong' quintessence, with pressure equal to or more negative than that of the Einstein quintessence, for which \\(w_{f}=-1/3\\). When condition (17) is invalid, the potential behaves as a power of the scalar field when the latter is small, \\(V(\\varphi)\\sim\\varphi^{\\nu}\\); here \\(\\nu>0\\) depends only on \\(w\\) and \\(w_{f}\\).
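When no closed form is available, (14) can simply be integrated numerically as an ODE for \\(V(\\varphi)\\); a sketch under the same illustrative parameter choices as before:

```python
import numpy as np
from scipy.integrate import solve_ivp

w, wf, k = 0.0, -2.0 / 3.0, 0
C, Cf = 1.0, 1.0
A = (C / Cf) * ((1 - wf) * Cf / 2) ** ((wf - w) / (1 + wf))
B = (3 / Cf) * ((1 - wf) * Cf / 2) ** ((1 + 3*wf) / (3 * (1 + wf)))

def Q(V):
    """Eq. (15)."""
    return (1 + A * V**((w - wf) / (1 + wf))
              - k * B * V**(-(1 + 3*wf) / (3 * (1 + wf)))) ** -0.5

def dV_dphi(phi, V):
    return -np.sqrt(6 * (1 + wf)) * V / Q(V)   # Eq. (14), lower sign

sol = solve_ivp(dV_dphi, (0.0, 10.0), [10.0], rtol=1e-9, dense_output=True)
# sol.sol(phi) tabulates V(phi); for k = 0 it reproduces the closed form (18)
# of the next section
```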
Note, however, that the large-time behavior of the scale factor \\(a(t)\\) and the potential \\(V(\\varphi(t))\\), as functions of time, is qualitatively the same for all possible parameter values: these functions are, respectively, a positive and a negative power of time related by (10). It is only the scalar field, \\(\\varphi(t)\\), that evolves differently at large times, depending on whether condition (17) is true or not: in the first case, the field is logarithmic; in the second case it has a power asymptotics, same as the other two dynamic variables.

## 4 Examples of closed-form solutions

Several classes of physically meaningful closed-form solutions are described by the same formula for the potential, differing just by the values of the parameters involved. The reason for this is the structure of the key equation (14): there are two terms added to unity under the square root in \\(Q(V,w_{f})\\) [see (15)]. In all the cases when either of the terms is a constant, the integral of (14) is elementary and, of course, of the same functional form. We give the expression only for the potential [if \\(V(\\varphi)\\) is known, \\(\\varphi(t)\\) and \\(a(t)\\) are found as described in the previous section]. This potential, allowing for solutions with a scalar field satisfying a linear EOS, is \\[V(\\varphi)=V_{0}\\left[\\sinh^{2}m(\\varphi-\\varphi_{*})\\right]^{\\mu},\\qquad\\varphi_{*}={\\rm const}\\;. \\tag{18}\\] The expression is valid in the following five cases: 1) flat universe, \\(k=0\\); 2) universe with scalar field only, \\(C=0\\); 3) same EOS for both matter and field, \\(w=w_{f}\\); 4) scalar field as Einstein quintessence, \\(w_{f}=-1/3\\); 5) Einstein quintessence as matter, \\(w=-1/3\\). The cases differ only by the values of \\(\\mu\\), \\(V_{0}\\), and \\(m\\), whose expressions are cumbersome. Thus we show just one example, for the first case of a flat universe: \\[\\mu=\\frac{1+w_{f}}{w_{f}-w},\\qquad V_{0}=\\frac{(1-w_{f})C_{f}}{2}\\,\\left(\\frac{C}{C_{f}}\\right)^{\\mu},\\qquad m=|w_{f}-w|\\,\\left[\\frac{3}{2\\,(1+w_{f})}\\right]^{1/2}\\;.\\] In each of the above cases it is taken that only one of the five assumptions designating the cases is fulfilled (for instance, in the first case \\(C\\neq 0\\), \\(w_{f}\\neq w\\), \\(w_{f}\\neq-1/3\\), \\(w\\neq-1/3\\), etc.). On the other hand, many combinations of two of those assumptions (such as \\(k=0\\) and \\(C=0\\), \\(k=0\\) and \\(w=w_{f}\\), etc.) imply that \\(Q(V,w_{f})\\) in the equation (14) becomes a constant. Therefore, the potential proves to be exactly an exponential in every such case, \\[V(\\varphi)=V_{0}\\,\\exp(\\mp\\alpha\\varphi)\\;, \\tag{19}\\] with arbitrary \\(V_{0}\\) and \\(\\alpha>0\\) depending on the parameters. The scalar field then is a logarithmic function of time, which results in the potential being proportional to the inverse square of time, \\(V(\\varphi(t))\\,\\sim\\,t^{-2}\\); the scale factor is also a power, \\(a(t)\\,\\sim\\,t^{2/3(1+w_{f})}\\).
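As a small consistency check, the flat-universe parameters quoted below (18) can be evaluated directly; for the illustrative choice \\(w=0\\), \\(w_{f}=-2/3\\) (pressureless matter plus quintessence, values not prescribed by the text at this point) one gets \\(\\mu=-1/2\\), so the potential decreases monotonically with the field:

```python
# Flat-universe (k = 0) case of eq. (18); C, Cf, w, wf are illustrative.
import numpy as np

C, Cf = 1.0, 1.0
w, wf = 0.0, -2.0/3.0

mu = (1 + wf)/(wf - w)                     # = -1/2 here
V0 = 0.5*(1 - wf)*Cf*(C/Cf)**mu
m = abs(wf - w)*np.sqrt(3/(2*(1 + wf)))    # = sqrt(2) here

def V(phi, phi_star=0.0):
    return V0*(np.sinh(m*(phi - phi_star))**2)**mu   # eq. (18)

print(mu, m, V(np.array([0.5, 1.0, 2.0, 4.0])))      # monotonically decreasing
```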
Many other closed-form solutions are not related to the degeneracy of \\(Q(V,w_{f})\\). To give one example, for pressureless matter (\\(w=0\\)) and a scalar field obeying the EOS of quintessence with \\(w_{f}=-2/3\\) in the open universe (\\(k=-1\\)), the potential is: \\[V(\\varphi)=4\\,\\bar{e}(\\varphi)\\,\\left\\{\\left[\\bar{e}(\\varphi)-B\\right]^{2}-4A\\right\\}^{-1},\\qquad\\bar{e}(\\varphi)\\equiv\\exp\\left[\\pm\\sqrt{6(1+w_{f})}\\,(\\varphi-\\varphi_{*})\\right]\\;, \\tag{20}\\] where \\(A=6C/5C_{f}^{3},\\ B=2/5\\), according to (16) for this particular case. The stability of cosmological solutions driven by quintessence with the pressure-to-density ratio \\(-2/3\\) has been recently studied in [12].

## 5 Generalizations. Linear EOS with slowly varying parameter, \\(w_{f}(\\varphi)\\)

Generalization of the above results to the case of an arbitrary number, \\(N\\), of non-interacting matter species and an arbitrary number, \\(N_{f}\\), of non-interacting scalar fields \\(\\varphi_{n}\\) satisfying (different) linear EOS is straightforward. Such solutions can, in particular, fit the concordant cosmological data, if \\(N=N_{f}=2\\) and \\(w^{(1)}=0\\) for baryonic matter, \\(w^{(2)}=1/3\\) for radiation, the scalar field \\(\\varphi_{1}\\) plays the role of the dark matter with \\(w^{(1)}_{f}=0\\), and the scalar field \\(\\varphi_{2}\\) acts as quintessence with \\(w^{(2)}_{f}\\) between \\(-1\\) and \\(-0.6\\) [13]. Also, the EOS of the field holds approximately for long periods of time for potentials which are small perturbations of those found in the previous sections. However, the scalar field can satisfy the EOS approximately in a much broader range of situations. Suppose the parameter in the EOS for the field is not a true constant, but a function of the field, \\(w_{f}=w_{f}(\\varphi)\\), so that the EOS, or the equivalent equation (8), is no longer a constraint but rather a definition of a new dynamic variable. The only constraint we now impose is that this variable, \\(w_{f}(\\varphi)\\), varies slowly with the field as compared to its density \\(\\rho_{f}\\), so that the integration of the field energy conservation equation (4) resulting in (9) is still approximately valid. By (8), the condition of slow variation proves to be \\[\\frac{2}{1-w_{f}}\\,\\left|\\frac{dw_{f}}{d\\varphi}\\right|\\ll\\frac{1}{V}\\,\\left|\\frac{dV}{d\\varphi}\\right|,\\quad\\mbox{or just}\\quad\\left|\\frac{dw_{f}}{d\\varphi}\\right|\\ll\\left|\\frac{d\\ln V}{d\\varphi}\\right| \\tag{21}\\] provided that \\(w_{f}(\\varphi)\\) is not too close to unity; in particular, the second form of the condition is true for all negative values of \\(w_{f}(\\varphi)\\). Under the condition (21), all calculations of sections 2 and 3 hold to lowest order in the slow variation. The key equality (14), which we now write as \\[w_{f}=\\frac{Q^{2}(V,w_{f})}{6}\\,\\left(\\frac{d\\ln V}{d\\varphi}\\right)^{2}\\,-\\,1\\,, \\tag{22}\\] is no longer a differential equation for \\(V(\\varphi)\\) but, for a given potential, a transcendental equation for \\(w_{f}(\\varphi)\\) instead. If its proper solution exists, then condition (21) becomes an inequality which limits the class of potentials in question. Since this is an important general result, we formulate it accurately. Suppose that for some values of \\(w\\), \\(k=0,\\pm 1\\) and constants \\(C,C_{f}\\) involved in \\(Q\\), equation (22) has a solution \\(w_{f}=W(V,d\\ln V/d\\varphi)\\).
Then the inequalities \\[W\\left(V,\\frac{d\\ln V}{d\\varphi}\\right)<1,\\qquad\\left|\\frac{d}{d\\varphi}\\,W\\left(V,\\frac{d\\ln V}{d\\varphi}\\right)\\right|\\ll\\left|\\frac{d\\ln V}{d\\varphi}\\right| \\tag{23}\\] describe all the potentials for which the cosmological scalar field satisfies a linear EOS with the parameter varying slowly with the field. To lowest order in this slow variation, the dynamics of cosmological expansion, i.e., the functions \\(\\varphi(t)\\) and \\(a(t)\\), are found successively from (12) and (10). If the solution \\(w_{f}=W(V,d\\ln V/d\\varphi)\\) and conditions (23) are valid not for all values of the field, but only for some range of it, then, accordingly, the EOS holds only for the corresponding period of cosmological evolution, with the values of \\(\\varphi\\) within this range. Equation (22) is generally very complicated [recall that \\(w_{f}(\\varphi)\\) is involved in the powers of \\(V\\), as well as in the 'constants' \\(A\\) and \\(B\\), see (15), (16)] and should be studied in detail separately. However, it simplifies dramatically for the later stages of cosmological expansion under the condition (17), which includes the most interesting case \\(w\\geq 0,\\ w_{f}<-1/3\\) ('normal' matter and 'strong' quintessence). Indeed, at the later stages the scale factor is large, \\(a\\gg 1\\), and, by (10), the potential is small, \\(V/C_{f}\\ll 1\\). Due to this and condition (17), the factor \\(Q\\) gets close to unity, so that (22) turns into an approximate explicit expression for \\(w_{f}(\\varphi)\\) through the potential, \\[w_{f}=\\frac{1}{6}\\,\\left(\\frac{d\\ln V}{d\\varphi}\\right)^{2}\\,-\\,1\\;, \\tag{24}\\] and conditions (23) also become explicit and rather simple: \\[\\left|\\frac{d\\ln V}{d\\varphi}\\right|<2,\\qquad\\left|\\frac{d^{2}\\ln V}{d\\varphi^{2}}\\right|\\ll 3\\;. \\tag{25}\\] The set of potentials satisfying (25) is rather wide. For instance, \\(V(\\varphi)=V_{0}(\\varphi^{2}+b^{2})\\exp(\\alpha\\varphi)\\) with \\(b^{2}\\gg 1\\), \\(|\\alpha|<2\\), and \\(\\alpha\\neq 0\\) (to make the potential small at large \\(|\\varphi|\\) of the proper sign), satisfies (25) for all values of \\(\\varphi\\), with the varying part of \\(w_{f}\\) of the order of \\(1/b^{2}\\), \\[w_{f}=-1+\\alpha^{2}/6+O(1/b^{2})\\;.\\] The factor \\((\\varphi^{2}+b^{2})\\) in this example can evidently be replaced with any polynomial of an even degree and with no real roots, under a single additional constraint on its coefficients guaranteeing the smallness of its second logarithmic derivative. Next, a ratio of two such polynomials can be taken, in which case the parameter \\(\\alpha\\) in the exponent could even be zero, if the degree of the polynomial in the denominator is larger than in the numerator, etc. All the above examples correspond to the runaway scenario \\(\\varphi(t)\\to\\infty,\\ V(\\varphi(t))\\to 0\\) at \\(t\\to+\\infty\\). This is true in general for the solution (22), (25) at \\(V/C_{f}\\ll 1\\). Indeed, if the scalar field goes to a finite value \\(\\varphi_{0}\\) at large times, then, by (10), \\(V(\\varphi_{0})=0\\). Since the potential is nonnegative, \\(\\varphi_{0}\\) is the point of its minimum, and \\(V^{\\prime}(\\varphi_{0})=0\\). But then the logarithmic derivative \\((\\ln V)^{\\prime}\\) tends to infinity when \\(\\varphi(t)\\to\\varphi_{0}\\), and conditions (23) cannot be satisfied in the vicinity of \\(\\varphi_{0}\\). In fact, the EOS of the scalar field proves to be nonlinear in this vicinity.
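For the example potential above, the late-stage relations (24) and (25) are easy to verify numerically; the following sketch uses illustrative values of \\(\\alpha\\) and \\(b\\) (the grid is my own choice):

```python
# Check of (24)-(25) for V(phi) = V0*(phi^2 + b^2)*exp(alpha*phi).
# alpha and b are illustrative; V0 drops out of logarithmic derivatives.
import numpy as np

alpha, b = -1.0, 10.0                 # |alpha| < 2 and b^2 >> 1, as required
phi = np.linspace(-50.0, 50.0, 10001)

dlnV = alpha + 2*phi/(phi**2 + b**2)              # d(ln V)/dphi
d2lnV = 2*(b**2 - phi**2)/(phi**2 + b**2)**2      # d^2(ln V)/dphi^2

wf = dlnV**2/6 - 1                                # eq. (24)
print(np.max(np.abs(dlnV)))                       # stays below 2: first condition of (25)
print(np.max(np.abs(d2lnV)))                      # small, i.e. << 3
print(np.max(np.abs(wf - (-1 + alpha**2/6))))     # small varying part of w_f
```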
Thus we have shown that if matter satisfies the EOS with \\(w>-1/3\\), and the potential drops at large enough values of the scalar field in such a way that the inequalities (25) hold, then the runaway scenario is possible with the field obeying the EOS of a 'strong' quintessence, \\(-1<w_{f}(\\varphi)<-1/3\\), at the later stages of expansion. This regime sets in the earlier in the expansion, the larger the abundance of the field (the ratio \\(C_{f}/C\\)) and the closer its EOS is to the vacuum EOS [\\(w_{f}(\\varphi)\\) close to \\(-1\\)]. So, under the conditions (25) on the potential going to zero at large values of \\(\\varphi\\), the scalar field quintessence dominates the expansion of the universe at its later stages in all the runaway solutions, and the expansion is then accelerating.

**Acknowledgments**

This work was supported by NASA grant NAS 8-39225 to Gravity Probe B. The author is grateful to A.D. Chernin, D.I. Santiago, and R.V. Wagoner for their valuable remarks.

## References

* [1] R.R. Caldwell, R. Dave, P.J. Steinhardt, Phys. Rev. Lett. **80**, 1582 (1998).
* [2] R.R. Caldwell, R. Dave, P.J. Steinhardt, Ap. Space Sci. **261**, 303 (1998); L. Wang, P.J. Steinhardt, Astrophys. J. **508**, 483 (1998); C.-P. Ma, R.R. Caldwell, P. Bode, L. Wang, Astrophys. J. **521**, L1 (1999); A.R. Cooray, D. Huterer, Astrophys. J. **513**, L95 (1999); J.S. Alcaniz, J.A.S. Lima, Astron. Astrophys. **349**, 729 (1999); I.S. Zlatev, P.J. Steinhardt, Phys. Lett. B **459**, 570 (1999); L. Hui, Astrophys. J. **519**, L9 (1999); I. Zlatev, L. Wang, P.J. Steinhardt, Phys. Rev. Lett. **82**, 896 (1999); P.J.E. Peebles, A. Vilenkin, Phys. Rev. D **590**, 811 (1999); M. Giovannini, Phys. Rev. D **601**, 277 (1999); L. Wang, R.R. Caldwell, J.P. Ostriker, P.J. Steinhardt, Astrophys. J. **530**, 17 (2000); G. Efstathiou, MNRAS **310**, 842 (2000); P.F. Gonzalez-Diaz, Phys. Lett. B **481**, 353 (2000); J.D. Barrow, R. Bean, J. Magueijo, MNRAS **316**, L41 (2000).
* [3] S.M. Carroll, Phys. Rev. Lett. **81**, 3067 (1998); S.M. Barr, Phys. Lett. B **454**, 92 (1999); R.S. Kalyana, Phys. Lett. B **457**, 268 (1999); Ch. Kolda, D.H. Lyth, Phys. Lett. B **459**, 570 (1999); P. Binetruy, Phys. Rev. D **600**, 80 (1999); R. Horvat, Mod. Phys. Lett. A **14**, 2245 (1999); T. Chiba, Phys. Rev. D **601**, 4634 (1999); P.H. Brax, J. Martin, Phys. Lett. B **468**, 40 (1999); A.B. Kaganovich, Nucl. Phys. B Proc. Suppl. **87**, 496 (1999); O. Bertolami, P.J. Martins, Phys. Rev. D **610**, 7 (2000); Y. Nomura, T. Watari, T. Yanagida, Phys. Lett. B **484**, 103 (2000); S.C.C. Ng, Phys. Lett. B **485**, 1 (2000); I.G. Dynnikova, Phys. Lett. B **472**, 33 (2000); A. Hebecker, C. Wetterich, Phys. Rev. Lett. **85**, 3339 (2000); N. Arkani-Hamed, L.J. Hall, C. Kolda, H. Murayama, Phys. Rev. Lett. **85**, 4434 (2000); C. Armendariz-Picon, V. Mukhanov, P.J. Steinhardt, Phys. Rev. Lett. **85**, 4438 (2000).
* [4] S. Weinberg, Rev. Mod. Phys. **61**, 1 (1989).
* [5] E. Witten, in "Sources and Detection of Dark Matter and Dark Energy in the Universe", ed. David B. Cline (Springer Verlag, Heidelberg, 2001).
* [6] P.F. Gonzalez-Diaz, Phys. Lett. B **522**, 211 (2001).
* [7] A.D. Chernin, D.I. Santiago, A.S. Silbergleit, Phys. Lett. A **294**, 79 (2002).
* [8] I. Zlatev, L. Wang, P.J. Steinhardt, Phys. Rev. D **59**, 123504 (1999).
* [9] P.J. Steinhardt, L. Wang, I. Zlatev, Phys. Lett. B **459**, 570 (1999).
* [10] D.I. Santiago, A.S. Silbergleit, Phys. Lett. A **268**, 69 (2000).
* [11] A.S. Silbergleit, Astron. & Astrophys. Trans. (2002), in press.
* [12] P.F. Gonzalez-Diaz, Phys. Rev. D **65**, 104035 (2002).
* [13] L. Wang, R.R. Caldwell, J.P. Ostriker, P.J. Steinhardt, Astrophys. J. **530**, 17 (2000).
We consider Friedmann cosmologies with a minimally coupled scalar field. Exact solutions are found, many of them elementary, for which the scalar field energy density, \\(\\rho_{f}\\), and pressure, \\(p_{f}\\), obey the equation of state (EOS) \\(p_{f}=w_{f}\\rho_{f}\\). For any constant \\(|w_{f}|<1\\) there exists a two-parameter family of potentials allowing for such solutions; the range includes, in particular, quintessence (\\(-1<w_{f}<0\\)) and 'dust' (\\(w_{f}=0\\)). The potentials are monotonic and behave either as a power or as an exponential for large values of the field. For a class of potentials satisfying certain inequalities involving their first and second logarithmic derivatives, the EOS holds with a parameter \\(w_{f}=w_{f}(\\varphi)\\) that varies slowly with the field, as compared to the potential.
## 1 Introduction

Universality of QCD means that predictions are independent of the details of the microscopic interactions. This is crucial for predictivity, since the precise form of the fundamental interactions at very short distance scales is not known. In a large parameter space characterizing possible fundamental interactions, the QCD universality class corresponds, however, only to a certain domain. For other domains in parameter space, the color symmetry may be "spontaneously broken" by the Higgs mechanism, or all quarks may acquire a large mass due to spontaneous chiral symmetry breaking. We are interested here in the transition from one domain to another and in the question of what happens at the boundary of the "QCD domain". Looking at QCD from a microscopic scale, say a unification scale of \\(10^{15}\\)GeV, its universality class is characterized by eight massless gluons and a certain number of massless fermions. Perturbatively, the masses are protected by the gauge symmetry and chiral symmetries. At a much smaller scale around 1GeV, nonperturbative effects induce masses for all physical particles. In particular, the fermions become massive owing to chiral symmetry breaking (\\(\\chi\\)SB). This may be described by a nonzero expectation value \\(\\sigma\\sim\\langle\\bar{\\psi}\\psi\\rangle\\) of a "composite" scalar field. In order to keep the discussion simple, we concentrate here on the case of one quark flavor; generalizations to several flavors are straightforward. Let us now consider a class of microscopic theories with a complex fundamental "chiral scalar field" \\(\\phi\\) which has the same transformation properties as \\(\\bar{\\psi}\\psi\\) and a classical potential \\[V=m^{2}\\phi^{*}\\phi+\\frac{1}{2}\\lambda_{\\phi}(\\phi^{*}\\phi)^{2}. \\tag{1}\\] The symmetries also allow for a Yukawa coupling between \\(\\phi\\) and the quarks. For nonzero \\(\\langle\\phi\\rangle\\), the chiral symmetry is broken and the quarks become massive. In the case of large enough positive \\(m^{2}\\) (in units of some unification scale, say \\(10^{15}\\)GeV), the scalar field is super-heavy and decouples from the low-energy theory. This range of \\(m^{2}\\) obviously corresponds to the universality class of QCD. All effects of the scalar field are suppressed by \\(p^{2}/m^{2}\\), with \\(p\\) a characteristic momentum. For QCD predictions, they can be completely ignored. On the other hand, for large enough negative \\(m^{2}\\), we expect the perturbative picture of spontaneous symmetry breaking to hold. The scalar field gets a vacuum expectation value (VEV) \\[\\langle\\phi\\rangle=\\sigma=|m_{\\rm R}^{2}/\\lambda_{\\phi,{\\rm R}}|^{1/2}, \\tag{2}\\] with \\(m_{\\rm R}\\) and \\(\\lambda_{\\phi,{\\rm R}}\\) related to \\(m\\) and \\(\\lambda_{\\phi}\\) by renormalization corrections. Both \\(\\sigma\\) and the quark masses are of the order of the unification scale in this domain. The universality class now corresponds to gluodynamics without light quarks. In the chiral limit of a vanishing current quark mass, spontaneous \\(\\chi\\)SB also generates a very light pseudo-Goldstone boson in addition to the gluonic degrees of freedom. Varying the microscopic scalar mass term \\(m^{2}\\) from large negative to large positive values should lead us from the universality class with perturbative spontaneous chiral symmetry breaking (P\\(\\chi\\)SB) to the universality class of one-flavor QCD.
One of the aims of this note is to understand the qualitative features of this transition in the vicinity of a critical value \\(m_{\\rm c}^{2}\\). This is clearly a nonperturbative problem, since on the QCD side of the transition the effective gauge coupling grows large. Our investigation is based on a nonperturbative flow equation which is obtained by a truncation of the exact renormalization group equation for the effective average action [1]. A crucial ingredient is the "bosonization" of effective multi-fermion interactions at every scale [2]. This provides for a description of fundamental scalar fields and bound states in a unified framework. A theoretical method with this feature is actually required for our problem, since the scalar quark-antiquark bound states in the QCD description (e.g., the pseudo-Goldstone eta meson and the sigma meson) are expected to become associated with the fundamental scalar in the P\\(\\chi\\)SB description. In this framework, we also see how one relevant and two marginal parameters in the P\\(\\chi\\)SB universality class, namely the ones corresponding to the mass and quartic self-interaction of the scalar field and the Yukawa coupling, become irrelevant for the QCD universality class. This remarkable change of the number of relevant parameters at the transition between the two universality classes is connected with the appearance of a bound-state fixed point for the flow of the scalar mass and self-interaction in the range of microscopic parameters corresponding to QCD. This bound-state fixed point is infrared attractive for all couplings except for the gauge coupling. Under the influence of this fixed point, all memory of the details of the microscopic interactions in the scalar sector is lost. This is exactly what is required for the QCD universality class, which has the gauge coupling as the only marginal parameter (for a massless quark). In order to see the appearance of the bound state, it is crucial to re-incorporate the effective multi-fermion interactions generated by the flow into the effective bosonic interactions. This avoids an unwanted redundancy of the description. It also solves an old problem in the investigation of gauged Nambu-Jona-Lasinio models [3]; namely, how the presence of apparent relevant parameters in an overly naive treatment of these models can be reconciled with QCD, where no such relevant parameters are present. In our approach, the flow towards the bound-state fixed point solves this generic problem. As a result of our investigation, we find a qualitatively convincing picture of the transition between the two universality classes investigated. We have kept the truncation simple in order to illustrate the change in the number of relevant and marginal parameters in a simple way. The price to be paid is a limited accuracy in the quantitative description for parameter regions where the effective gauge coupling grows large. In our setting, this concerns primarily the quantitative details of the flow of the instanton-mediated interactions and the running of the strong gauge coupling. We emphasize that the qualitative picture does not require a detailed understanding of strong interactions in the momentum range where the gauge coupling is large. All decisive features are determined by the flow in a momentum range substantially larger than 1GeV. In the same spirit, we also have neglected other effective bosonic degrees of freedom which may correspond to additional bound states. We keep only the composite scalars and the gluons.
When we proceed with our analysis to the strongly coupled gauge sector, we do not attempt to compute the gluodynamics, but simply model the strong interactions with an increasing gauge coupling; for the latter, we use various examples discussed in appendix B. We do not claim that our truncation of the gauge sector is sufficient in order to establish chiral symmetry breaking in QCD. A much more elaborate analysis would be needed for this purpose. We rather take the spontaneous symmetry breaking in the QCD universality class as a fact (established by other methods and observation). We only require that a reasonable truncation should describe chiral symmetry breaking. Beyond this, the details of the truncation in the gauge sector are not relevant for our discussion of universality classes. Despite these shortcomings, we expect that our quantitative results describe the right order of magnitude of one-flavor QCD. An impression of the size of uncertainties can be gained from Table 1 in appendix B. In order to illustrate our points, we compute the scalar condensate, i.e., the renormalized minimum of the effective potential, \\(\\sigma_{\\rm R}=\\sqrt{Z_{\\phi}|\\phi_{0}|^{2}}\\), for a broad range of initial scalar mass values \\(\\bar{m}_{\\Lambda}^{2}\\). We note that \\(\\sigma_{\\rm R}\\) is directly connected with the decay constant of the eta meson and sets the scale for the quark mass generated by \\(\\chi\\)SB. We first neglect the anomalous U\\({}_{\\rm A}(1)\\) violating contributions from instanton effects which only affect the physics at scales around 1GeV. (They will be considered in Sect. 5.) We parametrize the microscopic interactions by the initial values of the renormalization flow at a GUT-like scale \\(\\Lambda=10^{15}\\)GeV. As can be read off from Fig. 1, a critical mass \\(\\bar{m}_{\\rm c}^{2}\\) exists. For initial scalar masses below this critical mass, \\(\\bar{m}_{\\Lambda}^{2}<\\bar{m}_{\\rm c}^{2}\\), the naive expectation is fulfilled, and we find scalar condensates of the order of the cutoff, \\(\\sigma_{\\rm R}\\sim 10^{13}\\ldots 10^{15}\\)GeV. It is remarkable that the value of the critical mass is negative and typically of the order of the cutoff or only a few orders of magnitude below the cutoff; for example, we find \\(\\bar{m}_{\\rm c}^{2}\\simeq-0.35\\Lambda^{2}\\) for the initial values \\(\\bar{h}^{2}=1\\) and \\(\\bar{\\lambda}_{\\phi}=100\\) at \\(\\Lambda=10^{15}\\)GeV (Fig. 1 (left panel)). For a perturbatively accessible set of initial parameters \\(\\bar{h}^{2}=0.1\\) and \\(\\bar{\\lambda}_{\\phi}=1\\) at \\(\\Lambda=10^{15}\\)GeV, we find \\(\\bar{m}_{\\rm c}^{2}\\simeq-0.0043\\Lambda^{2}\\) (Fig. 1 (right panel)). In the latter case, we find a linear dependence of the condensate on the mass parameter, \\(\\sigma_{\\rm R}^{2}\\sim-(\\bar{m}_{\\Lambda}^{2}-\\bar{m}_{\\rm c}^{2})\\), as expected from perturbation theory (cf. Eq. (2)). However, for initial scalar masses above this critical mass, \\(\\bar{m}_{\\Lambda}^{2}>\\bar{m}_{\\rm c}^{2}\\), the scalar condensate is 16 orders of magnitude smaller (not visible in the linear plot in Fig. 1 (right panel)). In this case, symmetry breaking is triggered by the fermion and gauge sectors and not by the scalar sector, i.e., \\(\\sigma_{\\rm R}\\) is roughly of the order of \\(\\Lambda_{\\rm QCD}\\).
Therefore, even if we start the flow deep in the broken regime with \\(\\bar{m}_{\\Lambda}^{2}<0\\) but above the critical mass, the scalar fluctuations drive the system first into the symmetric regime where it will be attracted by the same IR fixed point as a QCD-like system. It should be stressed that no fine-tuning of the initial parameters is needed, neither to put the system into the domain of attraction of the QCD universality class nor to separate the UV scale from the scale of chiral symmetry breaking. Only for \\(\\bar{m}_{\\Lambda}^{2}<\\bar{m}_{\\rm c}^{2}\\) is the effective coupling between the scalars and the fermions strong enough to induce P\\(\\chi\\)SB with a magnitude determined by the initial parameters of the scalar sector. In this case, we would have to fine-tune the initial condition for \\(\\bar{m}_{\\Lambda}^{2}\\) to lie extremely close to \\(\\bar{m}_{\\rm c}^{2}\\), if we wanted to separate the UV scale from the scale of chiral symmetry breaking. This is the famous naturalness problem which is generic for models involving a fundamental scalar. Of course, theories without fundamental scalars such as QCD do not have this problem, although effective scalar degrees of freedom such as bound states can occur at low energies. It is one of our main observations that the mechanism of how "QCD-like" theories circumvent the naturalness problem can also be applied to models with a fundamental scalar. The details of our study are organized as follows: in Sect. 2, we introduce the class of models containing one-flavor QCD and derive the flow equations for a qualitatively reliable truncation including "bosonization at all scales". Section 3 is devoted to a discussion of the bound-state fixed point which governs the flow of the QCD domain for weak gauge coupling. In Sect. 4, we analyze the universal features of the QCD domain numerically and give estimates of IR observables in the nonperturbative strong-coupling regime. Instanton-mediated interactions are included in Sect. 5 where we also describe the fate of the pseudo-Goldstone boson.

## 2 Flow equations

QCD with one massless Dirac fermion flavor coupled to an SU(\\(N_{\\rm c}\\)) gauge field is characterized by the classical (or bare) action \\[S_{\\rm QCD}=\\int d^{4}x\\,\\bar{\\psi}\\,{\\rm i}\\not{D}[A]\\,\\psi+\\frac{1}{4}F^{a}_{\\mu\\nu}F^{a}_{\\mu\\nu}, \\tag{3}\\] where \\(D^{ij}_{\\mu}[A]=\\partial_{\\mu}\\delta^{ij}-{\\rm i}\\bar{g}T^{ij}_{a}A^{a}_{\\mu}\\), and \\(T_{a}\\) denotes the (hermitean) generators of the gauge group in the fundamental representation. In this work, we embed one-flavor QCD in a larger class of chirally invariant theories including a color-singlet scalar field. For this, we consider the action \\[\\Gamma = \\int\\Biggl{\\{}Z_{\\psi}\\bar{\\psi}{\\rm i}\\not{D}\\psi+\\frac{\\bar{\\lambda}_{\\sigma}}{2}\\bigl{[}(\\bar{\\psi}\\psi)^{2}-(\\bar{\\psi}\\gamma_{5}\\psi)^{2}\\bigr{]} \\tag{4}\\] \\[\\qquad+Z_{\\phi}\\partial_{\\mu}\\phi^{*}\\partial_{\\mu}\\phi+U(\\phi)+\\bar{h}\\bigl{[}(\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})\\phi-(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})\\phi^{*}\\bigr{]}\\] \\[\\qquad+\\frac{Z_{\\rm F}}{4}F^{a}_{\\mu\\nu}F^{a}_{\\mu\\nu}+\\frac{1}{2\\xi}(\\bar{D}_{\\mu}a^{a}_{\\mu})^{2}\\Biggr{\\}},\\] which represents a simple truncation of the space of action functionals and serves as the basis of our approximations. Here we have used the shorthand \\((\\bar{\\psi}\\psi)=\\bar{\\psi}^{i}\\psi_{i}\\) for the color indices.
We included a background gauge fixing term with parameter \\(\\xi\\), and \\(A_{\\mu}=\\bar{A}_{\\mu}+a_{\\mu}\\), \\(\\bar{A}_{\\mu}\\) being the background and \\(a_{\\mu}\\) the fluctuation field, \\(\\bar{D}_{\\mu}\\equiv D_{\\mu}[\\bar{A}]\\). Furthermore, we do not display the ghost sector for simplicity. Equation (4) reduces to one-flavor QCD if we set the four-fermion and the Yukawa interaction equal to zero, \\(\\bar{\\lambda}_{\\sigma}=\\bar{h}=0\\), let the scalar field be auxiliary, \\(Z_{\\phi}=0\\), and set \\(Z_{\\rm F}=1=Z_{\\psi}\\) (the scalar potential is of no importance then). Furthermore, there is a redundancy in Eq. (4): we can compensate for a shift in \\(\\bar{\\lambda}_{\\sigma}\\) by readjusting the Yukawa coupling and the scalar potential, corresponding to a Hubbard-Stratonovich transformation (partial bosonization). But apart from this redundancy, which will be removed later on by "re-bosonization", different initial values for the various parameters in Eq. (4) generally correspond to different quantum theories. Some of these theories will belong to the same universality class sharing the same low-energy properties, which makes them indistinguishable from a low-energy physicist's point of view. We analyze this class of theories in a Wilsonian spirit upon integrating out quantum fluctuations momentum shell by momentum shell. For this we employ the formalism based on the exact renormalization group flow equation for the effective average action [1], [4], \\[\\partial_{t}\\Gamma_{k}=\\frac{1}{2}\\,{\\rm STr}\\,\\Big{[}\\partial_{t}R_{k}\\big{(}\\Gamma_{k}^{(2)}+R_{k}\\big{)}^{-1}\\Big{]}, \\tag{5}\\] where \\(\\Gamma_{k}^{(2)}\\) denotes the second functional derivative of the effective average action \\(\\Gamma_{k}\\) that governs the dynamics of the system at a momentum scale \\(k\\). The logarithmic scale parameter \\(t\\) is given by \\(t=\\ln k/\\Lambda\\), \\(\\partial_{t}=k(d/dk)\\), where \\(\\Lambda\\) denotes the ultraviolet (UV) scale at which we define the bare action \\(\\Gamma_{\\Lambda}\\). The cutoff function \\(R_{k}\\) is to some extent arbitrary and obeys a few restrictions [4] which ensure that the flow is well defined and interpolates between the bare action in the UV and the full quantum effective action \\(\\Gamma_{k\\to 0}\\) in the infrared (IR). We solve the flow equation (5) by using Eq. (4) as a truncation of the space of all possible action functionals. As a consequence, we promote all couplings and wave function renormalizations occurring in Eq. (4) to \\(k\\)-dependent quantities. Although the truncation (4) represents only a small subclass of possible operators generated by quantum fluctuations, it is able to capture many physical features of QCD-like systems. Let us elucidate the single components in detail: for the scalar potential, we use the simple truncation \\[U(\\phi) = \\bar{m}^{2}\\,\\rho+\\frac{1}{2}\\bar{\\lambda}_{\\phi}\\,\\rho^{2}-\\frac{1}{2}\\bar{\\nu}\\,\\zeta,\\qquad\\rho=\\phi^{*}\\phi,\\quad\\zeta=\\phi+\\phi^{*}. \\tag{6}\\] Already the \\(\\rho\\)-dependent first two terms of the potential are capable of describing the spontaneous \\(\\chi\\)SB we are aiming at. Indeed, the order parameter \\(\\sigma\\) denotes the minimum of the scale-dependent effective potential \\(U_{k}\\) for \\(k\\to 0\\). The term \\(\\sim\\zeta=\\phi+\\phi^{*}\\) breaks the U\\({}_{\\rm A}\\)(1) symmetry of simultaneous axial phase rotations of scalars and fermions; it accounts for the effects of the axial anomaly.
However, the presence of the axial anomaly is not relevant for the universality of spontaneous \\(\\chi\\)SB, although it has, of course, a strong quantitative impact on the resulting low-energy parameters such as condensates and constituent quark masses. Therefore, we postpone the discussion of this quantitative influence to Sect. 5 and set \\(\\bar{\\nu}=0\\) in the following for the sake of clarity. In the gauge sector, we do not attempt to calculate the full nonperturbative flow of \\(Z_{\\rm F}\\), or alternatively the gauge coupling \\(g\\), here, but study various possibilities for these flows and take over nonperturbative results from the literature. The most important features of the universality classes involve only the perturbative running of \\(g\\).1 Footnote 1: The running of \\(g\\) is universal up to two-loop order. In the framework of the exact renormalization group, this has been computed in [8]. As discussed in the introduction, the one-loop running is actually sufficient to generate the main qualitative features needed for our argument. We will define the quantum theories by fixing the initial conditions for the renormalization flow at the UV scale \\(\\Lambda\\). In the gauge and fermion sectors, we choose \\[Z_{\\rm F}\\big{|}_{k=\\Lambda}=1,\\quad Z_{\\psi}\\big{|}_{k=\\Lambda}=1,\\quad\\bar{\\lambda}_{\\sigma}\\big{|}_{k=\\Lambda}=0. \\tag{7}\\] The first two conditions normalize the gauge and fermion fields and imply that \\(\\bar{g}\\) denotes the bare gauge coupling. The last condition states that four-fermion interactions either have been partially bosonized into the scalar sector or are completely absent at the UV cutoff scale \\(\\Lambda\\). The choice of the scalar couplings at the UV cutoff will finally determine whether we are in or beyond the QCD domain. In order to describe standard QCD in our picture, a natural choice is given by \\[\\left.\\bar{m}^{2}\\right|_{k=\\Lambda}=+{\\cal O}(\\Lambda^{2}),\\quad\\bar{\\lambda}_{\\phi}\\big{|}_{k=\\Lambda}=0,\\quad(Z_{\\phi},\\bar{h})\\big{|}_{k=\\Lambda}\\to 0, \\tag{8}\\] implying that the scalar fields are nondynamic, noninteracting and heavy at \\(\\Lambda\\) and decouple from the fermion sector. They could be integrated out without any effect on the fermion sector and are therefore completely auxiliary. However, we will demonstrate below that the infrared physics including \\(\\chi\\)SB is to a large extent independent of the initial values in the scalar sector; in other words, the QCD universality class is actually much bigger than the restrictive choice of initial conditions of Eq. (8).2 Footnote 2: Already at this point, it is clear that \\(\\lambda_{\\phi,\\Lambda}\\) could also be chosen nonzero, which would only result in an unimportant change of the normalization of the functional integral. For a concise presentation of the RG flow equations of the single couplings, it is convenient to introduce the dimensionless, renormalized and \\(k\\)-dependent quantities \\[\\epsilon=\\frac{\\bar{m}^{2}}{Z_{\\phi}k^{2}},\\quad\\lambda_{\\phi}=\\frac{\\bar{\\lambda}_{\\phi}}{Z_{\\phi}^{2}},\\quad h=\\frac{\\bar{h}}{Z_{\\phi}^{1/2}Z_{\\psi}}, \\tag{9}\\] in the symmetric regime of the system. In the \\(\\chi\\)SB regime, the mass term becomes negative, and we replace this coupling by the minimum of the potential \\(\\rho_{0}\\) and its corresponding dimensionless variable \\(\\kappa\\) defined by \\[0=\\frac{\\partial}{\\partial\\rho}U_{k}(\\rho=\\rho_{0}),\\quad\\kappa=\\frac{Z_{\\phi}\\,\\rho_{0}}{k^{2}}.
\\tag{10}\\] Similarly, we define \\(\\bar{\\lambda}_{\\phi}\\) as the second \\(\\rho\\)-derivative of the potential at the minimum in the \\(\\chi\\)SB regime. The running of the wave function renormalizations is studied using the associated anomalous dimensions, \\[\\eta_{\\phi}=-\\partial_{t}\\ln Z_{\\phi},\\quad\\eta_{\\psi}=-\\partial_{t}\\ln Z_{\\psi},\\quad\\eta_{\\rm F}=-\\partial_{t}\\ln Z_{\\rm F}, \\tag{11}\\] where \\(\\eta_{\\rm F}\\) represents the major piece of information from the gauge sector in our truncation. Here, the use of the background-field method for this gauge sector has two advantages: First, it represents a book-keeping device to set up consistent gauge-invariant approximations within a certain order of truncation. Second, the physical idea of the background field is that it accommodates the true ground state of the system around which the quantum fluctuations are integrated out. In this spirit, we deduce the running gauge coupling from the RG behavior of the background field. Owing to background gauge invariance, the product of gauge coupling and background gauge field is renormalization-group invariant [5], so that the beta function for the renormalized running gauge coupling \\(g\\) is related to \\(\\eta_{\\rm F}\\) by \\[\\beta_{g^{2}}\\equiv\\partial_{t}g^{2}\\ =\\ \\eta_{\\rm F}\\,g^{2},\\quad g^{2}=\\frac{\\bar{g}^{2}}{Z_{\\rm F}}. \\tag{12}\\] Actually, the effective action depends on both the background and the fluctuating gauge field, and the \\(n\\)-point functions can only be extracted from the functional depending on both fields [6]. Nevertheless, once all fluctuations are integrated out, the fluctuating field can be set to zero and the resulting effective action is gauge invariant. In general, the dependence of the effective action on both fields is needed for the RG flow. With the help of background-field identities, the dependence of the effective action on the fluctuating gauge field and the background field are related. A detailed record of the flow equations and results in the background field formalism, including the role of the gauge symmetry and Slavnov-Taylor identities, can be found in [6, 7]. In the present work, we neglect possible differences between the RG flow for gauge couplings defined from the background-field effective action and from vertices of the fluctuating field [8, 9]. This is perfectly justified in the limit of small gauge coupling, which is of primary importance for this work. Here the lowest-order running is universal. By contrast, in the region of large coupling, our truncation of the gauge sector would anyway not be reliable if taken at face value, so that we abstain from resolving the gauge field running and \\(\\eta_{\\rm F}\\) in this regime. In this region, we simply model the running of the gauge coupling in order to obtain a first glance at the \\(\\chi\\)SB regime. In doing so, we assume that the influence of higher gluonic operators can be effectively accounted for by the increase of the gauge coupling. Although this certainly represents an oversimplification, let us stress that the details of the flow in the gauge sector are only of secondary importance for the issue addressed in this paper. Inserting the truncation (4) into the exact RG flow equation for the effective average action, we find the following results.
The scalar and fermion anomalous dimensions can be written as \\[\\eta_{\\phi} = 4v_{4}\\,\\kappa\\lambda_{\\phi}^{2}\\,m_{2,2}^{4}(0,2\\kappa\\lambda_{\\phi};\\eta_{\\phi})+4N_{\\rm c}v_{4}\\,h^{2}\\,\\Big{[}m_{4}^{({\\rm F}),4}(\\kappa h^{2};\\eta_{\\psi})+\\kappa h^{2}\\,m_{2}^{({\\rm F}),4}(\\kappa h^{2};\\eta_{\\psi})\\Big{]}\\,, \\tag{13}\\] \\[\\eta_{\\psi} = 2C_{2}(N_{\\rm c})v_{4}\\,g^{2}\\Big{[}(3-\\xi)\\,m_{1,2}^{({\\rm FB}),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F})-3(1-\\xi)\\,\\widetilde{m}_{1,1}^{({\\rm FB}),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F})\\Big{]}+v_{4}\\,h^{2}\\big{[}m_{1,2}^{({\\rm FB}),4}(\\kappa h^{2},\\epsilon+2\\kappa\\lambda_{\\phi};\\eta_{\\psi},\\eta_{\\phi})+m_{1,2}^{({\\rm FB}),4}(\\kappa h^{2},\\epsilon;\\eta_{\\psi},\\eta_{\\phi})\\big{]}, \\tag{14}\\] where \\(v_{4}=1/(32\\pi^{2})\\) and \\(C_{2}(N_{\\rm c})=(N_{\\rm c}^{2}-1)/(2N_{\\rm c})\\). This representation is valid in the symmetric as well as in the \\(\\chi\\)SB regime. In the former, \\(\\kappa\\) has to be set equal to zero, whereas \\(\\epsilon=0\\) has to be chosen in the latter. The various quantities denoted by \\(m\\) are threshold functions which control the decoupling of massive modes for decreasing \\(k\\); they also contain all dependencies on the precise choice of the cutoff function \\(R_{k}\\). Their definitions and explicit representations can be found in App. A or in [4]. Equation (13) agrees with [4] and [11]. We also find agreement for the second line of Eq. (14), whereas the first line arises from the gauge-field sector (which has not been dealt with in [4], [11]). As a further check, we note that in the perturbative small-coupling limit, where the threshold functions \\(m\\) occurring above universally reduce to 1, we obtain \\[\\eta_{\\phi}\\big{|}_{\\rm pert.}=\\frac{N_{\\rm c}}{8\\pi^{2}}\\,h^{2},\\quad\\eta_{\\psi}\\big{|}_{\\rm pert.}=\\xi\\,\\frac{C_{2}(N_{\\rm c})}{8\\pi^{2}}\\,g^{2}+\\frac{1}{16\\pi^{2}}\\,h^{2}, \\tag{15}\\] which agrees with the literature [12]. In the symmetric regime, the flow of the purely scalar sector can be summarized by \\[\\partial_{t}\\epsilon = -(2-\\eta_{\\phi})\\epsilon-8v_{4}\\,\\lambda_{\\phi}\\,l_{1}^{4}(\\epsilon;\\eta_{\\phi})+8N_{\\rm c}v_{4}\\,h^{2}\\,l_{1}^{\\rm(F),4}(0;\\eta_{\\psi}), \\tag{16}\\] \\[\\partial_{t}\\lambda_{\\phi} = 2\\eta_{\\phi}\\,\\lambda_{\\phi}+20v_{4}\\,\\lambda_{\\phi}^{2}\\,l_{2}^{4}(\\epsilon;\\eta_{\\phi})-8N_{\\rm c}v_{4}\\,h^{4}\\,l_{2}^{\\rm(F),4}(0;\\eta_{\\psi}), \\tag{17}\\] whereas in the \\(\\chi\\)SB regime, we find \\[\\partial_{t}\\kappa = -(2+\\eta_{\\phi})\\kappa+2v_{4}\\,l_{1}^{4}(0;\\eta_{\\phi})+6v_{4}\\,l_{1}^{4}(2\\kappa\\lambda_{\\phi};\\eta_{\\phi})-8N_{\\rm c}v_{4}\\,\\frac{h^{2}}{\\lambda_{\\phi}}\\,l_{1}^{\\rm(F),4}(\\kappa h^{2};\\eta_{\\psi}), \\tag{18}\\] \\[\\partial_{t}\\lambda_{\\phi} = 2\\eta_{\\phi}\\,\\lambda_{\\phi}+2v_{4}\\,\\lambda_{\\phi}^{2}\\,l_{2}^{4}(0;\\eta_{\\phi})+18v_{4}\\,\\lambda_{\\phi}^{2}\\,l_{2}^{4}(2\\kappa\\lambda_{\\phi};\\eta_{\\phi})-8N_{\\rm c}v_{4}\\,h^{4}\\,l_{2}^{\\rm(F),4}(\\kappa h^{2};\\eta_{\\psi}), \\tag{19}\\] in complete agreement with the results of [11]. Again, the quantities denoted by \\(l\\) are threshold functions [4], [13].
Now we turn to the flow of the Yukawa coupling, which is driven by all sectors of the system: \\[\\partial_{t}h^{2} = (2\\eta_{\\psi}+\\eta_{\\phi})\\,h^{2}-4v_{4}\\,h^{4}\\big{[}l_{1,1}^{\\rm(FB),4}(\\kappa h^{2},\\epsilon;\\eta_{\\psi},\\eta_{\\phi})-l_{1,1}^{\\rm(FB),4}(\\kappa h^{2},\\epsilon+2\\kappa\\lambda_{\\phi};\\eta_{\\psi},\\eta_{\\phi})\\big{]}-8(3+\\xi)C_{2}(N_{\\rm c})v_{4}\\,g^{2}h^{2}\\,l_{1,1}^{\\rm(FB),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F}), \\tag{20}\\] where we have to set \\(\\kappa=0\\) (\\(\\epsilon=0\\)) in the symmetric (\\(\\chi\\)SB) regime. As a check, we take a look at the perturbative limit, \\[\\partial_{t}h^{2}\\big{|}_{\\rm pert.}=\\frac{N_{\\rm c}+1}{8\\pi^{2}}\\,h^{4}-\\frac{3C_{2}(N_{\\rm c})}{4\\pi^{2}}\\,g^{2}h^{2}, \\tag{21}\\] where we rediscover known results and also observe that the gauge-parameter \\(\\xi\\)-dependence has dropped out as it should. A crucial ingredient is the flow of the fermion self-interaction, which, in dimensionful representation, can be written as \\[\\partial_{t}\\bar{\\lambda}_{\\sigma} = \\frac{Z_{\\psi}^{2}}{k^{2}}\\big{[}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{]}, \\tag{22}\\] \\[\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}:=-6\\,\\frac{(N_{\\rm c}+2)(N_{\\rm c}-1)}{N_{\\rm c}^{2}}\\,C_{2}(N_{\\rm c})\\,v_{4}\\,\\tilde{l}_{1,2}^{({\\rm FB}),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F}),\\] \\[\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}:=\\left(\\frac{2}{N_{\\rm c}}+1\\right)\\,v_{4}\\,\\tilde{l}_{1,1,1}^{({\\rm FBB}),4}(\\kappa h^{2},\\epsilon,\\epsilon+2\\kappa\\lambda_{\\phi};\\eta_{\\psi},\\eta_{\\phi}).\\] Here we neglected terms \\(\\sim\\kappa\\) which arise only in the broken regime but are suppressed therein owing to simultaneously occurring threshold functions (these terms are similar to the last term in square brackets in Eq. (13), which has hardly any effect on the results either). In Eq. (22) as well as in all equations above, we neglected terms of order \\(\\bar{\\lambda}_{\\sigma}\\) on the RHS, because \\(\\bar{\\lambda}_{\\sigma}=0\\) will finally be guaranteed on all scales as discussed below. Furthermore, we have chosen the same Fierz transformations in the Dirac algebra as in [2] and decomposed the possible color structures of the four-fermion interaction into a color singlet (S-P)\\({}_{\\rm S}\\) and color \\(N_{\\rm c}^{2}-1\\)-plets (S-P)\\({}_{N_{\\rm c}^{2}-1}\\), (V)\\({}_{N_{\\rm c}^{2}-1}\\). In the present work, we focus on the (S-P)\\({}_{\\rm S}\\) term; in principle, the (V)\\({}_{N_{\\rm c}^{2}-1}\\) term could be absorbed into a \\(k\\)-dependent transformation of the nonabelian gauge field in the same way as suggested in [2] for the abelian case.3 Footnote 3: By neglecting some of the four-fermion interactions, our quantitative result will depend slightly on the choice of the Fierz decomposition. Using “fermion-boson translation” to be described in the following, this dependence can be removed in a larger truncation, as was recently shown in [10]. However, we checked explicitly that quantitative results in another natural Fierz decomposition involving (S-P)\\({}_{\\rm S}\\), (V)\\({}_{\\rm S}\\) and (V)\\({}_{N_{\\rm c}^{2}-1}\\) differ from the present ones only on the 1% level.
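As an aside, the weak-coupling content of Eq. (21) can be made tangible by integrating it together with the one-loop running of the gauge coupling. The sketch below is a toy integration: the one-loop coefficient \\(b_{0}=11N_{\\rm c}/3-2N_{f}/3\\) is the standard QCD value (it is not quoted in the text above), and the initial values at \\(t=0\\) are illustrative assumptions.

```python
# Toy integration of the perturbative flows towards the infrared:
# one-loop gauge running (standard coefficient, assumed) plus eq. (21).
import numpy as np
from scipy.integrate import solve_ivp

Nc, Nf = 3, 1
C2 = (Nc**2 - 1)/(2*Nc)
b0 = 11*Nc/3 - 2*Nf/3                # standard one-loop coefficient (assumption)

def flow(t, y):
    g2, h2 = y
    dg2 = -b0/(8*np.pi**2)*g2**2                                   # one loop
    dh2 = (Nc + 1)/(8*np.pi**2)*h2**2 - 3*C2/(4*np.pi**2)*g2*h2    # eq. (21)
    return [dg2, dh2]

# t = ln(k/Lambda): start at the UV scale (t = 0) and flow down to t = -20
sol = solve_ivp(flow, (0.0, -20.0), [0.2, 1e-4], rtol=1e-10)
print(sol.y[0][-1], sol.y[1][-1])  # g^2 grows towards the IR and drags h^2 along
```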
As mentioned above, there is a certain redundancy in the parametrization of the effective action \\(\\Gamma_{k}\\) owing to possible different choices of partial bosonization of the four-fermion interaction. From a different viewpoint, this redundancy corresponds to the possible mixing of fields or composite operators with identical quantum numbers. We remove this redundancy in the present truncation with the aid of the following \\(k\\)-dependent transformation of the scalar field ("fermion-boson translation"): \\[\\partial_{t}\\phi_{k}(q) = -(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k}(q)+\\phi_{k}(q)\\,\\partial_{t}\\beta_{k}(q),\\] \\[\\partial_{t}\\phi_{k}^{*}(q) = (\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\,\\partial_{t}\\alpha_{k}(q)+\\phi_{k}^{*}(q)\\,\\partial_{t}\\beta_{k}(q), \\tag{23}\\] with a priori arbitrary functions \\(\\alpha_{k}(q)\\) and \\(\\beta_{k}(q)\\). Upon this transformation, the flow equations given above receive additional contributions \\(\\sim\\alpha_{k}(q),\\beta_{k}(q)\\) according to \\[\\partial_{t}\\Gamma_{k}=\\partial_{t}\\Gamma_{k\\,|\\phi_{k},\\phi_{k}^{*}}+\\int\\frac{\\delta\\Gamma_{k}}{\\delta\\phi_{k}}\\,\\partial_{t}\\phi_{k}+\\int\\frac{\\delta\\Gamma_{k}}{\\delta\\phi_{k}^{*}}\\,\\partial_{t}\\phi_{k}^{*}. \\tag{24}\\] As described in more detail in [2], these functions can be uniquely determined by demanding that (i) \\(\\partial_{t}\\bar{\\lambda}_{\\sigma}(q^{2})\\) vanishes for all \\(k\\) and \\(q^{2}\\), where the momentum dependence of \\(\\bar{\\lambda}_{\\sigma}\\) has been studied in the \\(s\\) channel for simplicity, \\(\\bar{\\lambda}_{\\sigma}(q^{2})\\equiv\\bar{\\lambda}_{\\sigma}(s=q^{2})\\), (ii) the Yukawa coupling \\(\\bar{h}\\) is momentum independent, and (iii) \\(\\partial_{t}Z_{\\phi}(q^{2}=k^{2})=-\\eta_{\\phi}Z_{\\phi}\\) holds, in order to render the approximation of a momentum-independent \\(Z_{\\phi}\\) self-consistent. Condition (i) together with the initial condition (7) guarantees that no four-fermion interaction of this type is generated under the flow; this interaction is bosonized into the scalar sector at all scales \\(k\\). Condition (ii) guarantees that the fermion mass generated by \\(\\chi\\)SB is also momentum independent, so that the couplings in the \\(\\chi\\)SB regime have a direct physical interpretation. The field transformation (23) also affects the scalar couplings, and we obtain in the symmetric regime: \\[\\partial_{t}\\epsilon = \\partial_{t}\\epsilon\\big{|}_{\\phi_{k}}+2\\frac{\\epsilon(1+\\epsilon)}{h^{2}}\\big{(}1+(1+\\epsilon)Q_{\\sigma}\\big{)}\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)},\\] \\[\\partial_{t}h^{2} = \\partial_{t}h^{2}\\big{|}_{\\phi_{k}}+2\\big{(}1+2\\epsilon+Q_{\\sigma}(1+\\epsilon)^{2}\\big{)}\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)}, \\tag{25}\\] where the corresponding first terms on the right-hand sides denote the flow equations for fixed fields as given above in Eqs. (16) and (20). In the \\(\\chi\\)SB regime, we find similarly \\[\\partial_{t}\\kappa = \\partial_{t}\\kappa\\big{|}_{\\phi_{k}}+2\\frac{\\kappa(1-\\kappa\\lambda_{\\phi})}{h^{2}}\\big{(}1+(1-\\kappa\\lambda_{\\phi})Q_{\\sigma}\\big{)}\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)},\\] \\[\\partial_{t}h^{2} = \\partial_{t}h^{2}\\big{|}_{\\phi_{k}}+2\\big{(}1-2\\kappa\\lambda_{\\phi}+Q_{\\sigma}(1-\\kappa\\lambda_{\\phi})^{2}\\big{)}\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)}.
\\tag{26}\\] Defining \\(\\Delta\\bar{\\lambda}_{\\sigma}:=\\bar{\\lambda}_{\\sigma}(k^{2})-\\bar{\\lambda}_{\\sigma}(0)\\), the quantity \\(Q_{\\sigma}\\equiv\\partial_{t}\\Delta\\bar{\\lambda}_{\\sigma}/\\partial_{t}\\bar{\\lambda}_{\\sigma}(0)\\) measures the suppression of \\(\\bar{\\lambda}_{\\sigma}(s)\\) for large external momenta. Without an explicit computation, we may conclude that this suppression implies \\(Q_{\\sigma}<0\\), in agreement with unitarity; furthermore, if the flow is in the \\(\\chi\\)SB regime, the fermions become massive, and non-pointlike four-fermion interactions in the \\(s\\) channel will be suppressed by the inverse fermion mass squared.4 Therefore, we model \\(Q_{\\sigma}\\) by the ansatz Footnote 4: This can be inferred from the heavy-fermion limit of the two-gluon/scalar-exchange box diagram where the internal fermion propagators become pointlike \\(\\sim 1/m_{t}\\). \\[Q_{\\sigma}=Q_{\\sigma}^{0}\\,m_{1,2}^{\\rm(FB),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F}),\\quad Q_{\\sigma}^{0}={\\rm const.}<0, \\tag{27}\\] where we have introduced a threshold function with the appropriate decoupling properties for massive fermions. The qualitative results are independent of the precise choice of \\(Q_{\\sigma}\\), and it is reassuring to observe a quantitative independence of the IR observables of the precise value of \\(Q_{\\sigma}^{0}\\) (e.g., \\(Q_{\\sigma}^{0}\\simeq-0.1\\)). The field transformations (23) also modify the equation for \\(\\lambda_{\\phi}\\) via the terms \\(\\sim\\partial_{t}\\beta_{k}\\). In the pointlike limit (\\(q^{2}=0\\)), the modified running is given by \\[\\partial_{t}\\lambda_{\\phi}=\\partial_{t}\\lambda_{\\phi}\\big{|}_{\\phi_{k}}+4\\frac{\\lambda_{\\phi}}{h^{2}}(1+\\epsilon)\\big{(}1+(1+\\epsilon)Q_{\\sigma}\\big{)}\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)}. \\tag{28}\\] It will turn out that the modification of the flow of \\(\\lambda_{\\phi}\\) is also quantitatively irrelevant, whereas the modifications displayed in Eqs. (25) and (26) are of crucial importance.

## 3 Bound-state fixed point

The universal features of spontaneous \\(\\chi\\)SB in the QCD domain that will be quantitatively analyzed in the next section can be traced back to the occurrence of a fixed point for the scalar couplings. This fixed point is infrared attractive as long as the gauge coupling remains weak and can be associated with a bound state [2]. The fixed-point structure can conveniently be analyzed with the help of the coupling \\[\\tilde{\\epsilon}=\\frac{\\epsilon}{h^{2}}=\\frac{Z_{\\psi}^{2}\\bar{m}^{2}}{k^{2}\\bar{h}^{2}}. \\tag{29}\\] Since we are interested in the domain of weak gauge coupling, for simplicity we can neglect the anomalous dimensions in the following.
In this approximation and choosing the gauge parameter \\(\\xi=0\\) (background Landau gauge), the flow of \\(\\tilde{\\epsilon}\\) yields: \\[\\partial_{t}\\tilde{\\epsilon} = 8N_{\\rm c}v_{4}l_{1}^{({\\rm F}),4}-8v_{4}l_{1}^{4}(\\epsilon)\\,\\frac{\\lambda_{\\phi}}{h^{2}}-\\left(2-24C_{2}(N_{\\rm c})v_{4}l_{1,1}^{({\\rm FB}),4}\\,g^{2}\\right)\\tilde{\\epsilon}-2\\big{(}\\beta_{\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)}\\,\\tilde{\\epsilon}^{2}\\;. \\tag{30}\\] (Here, all arguments of the threshold functions which are not displayed are assumed to be equal to zero; therefore, threshold functions without any argument are simply numbers which depend on the details of the regularization.) If the scalar field is auxiliary at the UV scale as in the QCD context, its wave function renormalization is very small initially, \\(Z_{\\phi}\\ll 1\\), so that the dimensionless renormalized mass is very large, \\(\\epsilon\\gg 1\\). In this case, scalar fluctuations are suppressed and the threshold functions depending on \\(\\epsilon\\) vanish; the right-hand side of Eq. (30) describes a parabola in the variable \\(\\tilde{\\epsilon}\\), and we find two positive fixed points, \\(0<\\tilde{\\epsilon}_{1}^{*}<\\tilde{\\epsilon}_{2}^{*}\\), where \\(\\tilde{\\epsilon}_{1}^{*}\\) is UV attractive but IR unstable, and \\(\\tilde{\\epsilon}_{2}^{*}\\) is an IR stable fixed point (see Fig. 2, solid line). It can be shown that \\(\\tilde{\\epsilon}_{1}^{*}\\) corresponds to the inverse of the critical coupling of the NJL model, so that our flow describes a model with strong four-fermion interaction if we choose UV initial conditions with \\(\\tilde{\\epsilon}_{\\Lambda}<\\tilde{\\epsilon}_{1}^{*}\\) to the left of the first fixed point (see, e.g., [14] for a detailed analysis of the phase structure in the abelian case). For this choice, the system is not in the QCD domain but approaches chiral symmetry breaking (\\(\\tilde{\\epsilon}<0\\)) in a perturbatively accessible way (P\\(\\chi\\)SB). In this section, we concentrate on those initial values which release the system to the right of the first fixed point, \\(\\tilde{\\epsilon}_{\\Lambda}>\\tilde{\\epsilon}_{1}^{*}\\), i.e., which are weakly coupled in the NJL language. This will be the range of the QCD universality class. As the system evolves, it flows towards the second fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\), which then governs the evolution over many scales. Here, the system "loses its memory" of the initial conditions; in particular, it is of no relevance whether we start with \\(\\tilde{\\epsilon}_{1}^{*}<\\tilde{\\epsilon}_{\\Lambda}<\\tilde{\\epsilon}_{2}^{*}\\) or \\(\\tilde{\\epsilon}_{\\Lambda}>\\tilde{\\epsilon}_{2}^{*}\\). The evolution towards and in the IR is universally governed by this fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\), which can be shown to be associated with a fermion-antifermion bound state; e.g., in QED, the properties of the scalar field at this fixed point correspond to those of positronium [2]. Before we elucidate the fixed-point properties further, let us briefly mention that its existence can be generalized to the case of a scalar field describing a fundamental particle in the UV (a Yukawa model with gauged fermions rather than QCD). In this case, we have \\(Z_{\\phi}=1\\) and \\(\\epsilon\\simeq{\\cal O}(1)\\) at the UV scale.
Now the second term in Eq. (30) can become important, in particular for a large \\(\\phi^{4}\\) coupling \\(\\lambda_{\\phi}\\) and/or small \\(h^{2}\\). When discussing the RHS of Eq. (30) for fixed \\(g\\), \\(h\\), \\(\\lambda_{\\phi}\\), one should keep in mind that these couplings may change with \\(k\\). For large \\(\\lambda_{\\phi}/h^{2}\\), the \\(\\tilde{\\epsilon}\\) parabola is lowered and the first fixed point can move to negative values, \\(\\tilde{\\epsilon}_{1}^{*}<0\\) (see Fig. 2, dashed line). In this case, we can release the system even in the broken regime at the UV scale, \\(\\tilde{\\epsilon},\\epsilon<0\\), but it still evolves towards the bound-state fixed point \\(\\tilde{\\epsilon}_{2}^{*}\\). In comparison with Fig. 1, this corresponds to initial values \\(\\bar{m}_{\\rm c}^{2}<\\bar{m}_{\\Lambda}^{2}<0\\). Physically, such a scenario describes a system involving fundamental scalars, fermions and gauge fields, where the scalar sector is initially weakly coupled to the fermions. If we start in the broken regime, scalar fluctuations will drive the system towards the symmetric regime before the fermion-gauge-field interactions induce sizable bound-state effects which can exert an influence on the scalar sector. In this scenario, the first fixed point \\(\\tilde{\\epsilon}_{1}^{*}<0\\) is a measure of the strength of the initial effective coupling between scalars and fermions. For strong effective coupling, \\(\\tilde{\\epsilon}_{\\Lambda}<\\tilde{\\epsilon}_{1}^{*}\\), an initial negative scalar mass of the order of the cutoff, \\(\\left.\\bar{m}^{2}\\right|_{k=\\Lambda}\\simeq-{\\cal O}(\\Lambda^{2})\\), will induce a vacuum expectation value and a fermion mass of the same order, in agreement with naive expectations. But at weak effective coupling, e.g., \\(h^{2}\\sim{\\cal O}(1)\\), \\(\\lambda_{\\phi}\\simeq 100\\) and \\(\\tilde{\\epsilon}_{1}^{*}<\\tilde{\\epsilon}_{\\Lambda}<0\\), the system can still start with an initial negative scalar mass \\(\\left.\\bar{m}^{2}\\right|_{k=\\Lambda}\\simeq-{\\cal O}(\\Lambda^{2})\\), but finally run into the bound-state fixed point. As an important result, the vacuum expectation value and the fermion mass after symmetry breaking can easily be orders of magnitude smaller than the UV scale, as exhibited in Fig. 1 in the Introduction. We conclude that all systems with \\(\\tilde{\\epsilon}_{\\Lambda}>\\tilde{\\epsilon}_{1}^{*}\\) belong to the QCD universality class.

Figure 2: Flow of \\(\\tilde{\\epsilon}\\) according to Eq. (30) (schematic plot): the solid line corresponds to a QCD scenario at weak gauge coupling; the arrows indicate the direction of the flow towards the infrared. The dashed line corresponds to a system with a fundamental scalar, \\(Z_{\\phi}|_{k=\\Lambda}=1\\), \\(\\epsilon\\lesssim 1\\), and strong scalar self-interaction. The dotted lines exhibit the destabilization of the bound-state fixed point by the increasing gauge coupling.

Let us now turn to the properties of the system at the bound-state fixed point. The crucial observation is that not only \\(\\tilde{\\epsilon}\\) but also all dimensionless scalar couplings approach fixed points. In the general case, the fixed-point values depend in a complicated form on all parameters of the system.
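For generic parameters, the right-hand side of Eq. (30) is a quadratic polynomial in \\(\\tilde{\\epsilon}\\) whose roots can simply be found numerically. The sketch below does this with placeholder threshold-function values (set to 1, except \\(l_{1,2}^{\\rm(FB),4}=3/2\\) quoted further below); the actual values are regulator dependent and are not all given in the text.

```python
# Roots of the quadratic beta function (30) for eps-tilde in the limit
# eps >> 1 (scalar threshold terms dropped). Threshold values are placeholders.
import numpy as np

Nc = 3
v4 = 1/(32*np.pi**2)
C2 = (Nc**2 - 1)/(2*Nc)
l1F, l11FB, l12FB, l111FBB = 1.0, 1.0, 1.5, 1.0   # assumed values

beta_g4 = -6*(Nc + 2)*(Nc - 1)/Nc**2*C2*v4*l12FB  # cf. eq. (22)
beta_h4 = (2/Nc + 1)*v4*l111FBB

def fixed_points(g2, h2=1.0):
    # d_t eps~ = c0 + c1*eps~ + c2*eps~^2, cf. eq. (30)
    c0 = 8*Nc*v4*l1F
    c1 = -(2 - 24*C2*v4*l11FB*g2)
    c2 = -2*(beta_g4*g2**2 + beta_h4*h2**2)
    return np.roots([c2, c1, c0])

for g2 in (1.0, 5.0, 12.0):
    print(g2, fixed_points(g2))   # the two roots approach each other and turn
                                  # complex once g^2 grows large enough
```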
However, in the limit \(\epsilon\gg 1\) (QCD-like), we can find analytic expressions that satisfy the fixed-point conditions \(\partial_{t}(\epsilon,h^{2},\lambda_{\phi})=0\) to leading order: \[\epsilon^{*} \simeq \frac{2}{|Q_{\sigma}|}, \tag{31}\] \[(h^{*})^{2} \simeq \frac{2|\beta_{\tilde{\lambda}_{\sigma}}^{g^{4}}|\,g^{4}}{|Q_{\sigma}|}=\frac{12}{|Q_{\sigma}|}\frac{C_{2}(N_{\rm c})(N_{\rm c}+2)(N_{\rm c}-1)}{N_{\rm c}^{2}}\,v_{4}l_{1,2}^{\rm(FB),4}\,g^{4},\] \[\lambda_{\phi}^{*} \simeq \frac{N_{\rm c}\,(h^{*})^{4}}{6\,C_{2}(N_{\rm c})\,g^{2}}.\] From the first equation, we read off that the approximation \(\epsilon\gg 1\) is equivalent to assuming \(|Q_{\sigma}|\ll 1\), which is roughly fulfilled in our numerical study with our choice of \(Q_{\sigma}^{0}=-0.1\). The remarkable properties of the IR fixed point become apparent when considering the renormalized scalar mass, \(m^{2}=\epsilon k^{2}\). Since \(\epsilon\to\epsilon^{*}\), the scalar mass simply decreases with the scale \(k\), so that it is only _natural_ to obtain small masses \(m^{2}\ll\bar{m}_{\Lambda}^{2}\) for small scale ratios \(k\ll\Lambda\). In other words, even if we start with a scalar mass of the order of the cutoff, \(\left.\bar{m}^{2}\right|_{k=\Lambda}\sim\Lambda^{2}\), no fine-tuning will be necessary to obtain small mass values at low-energy scales, as long as the running is controlled by the bound-state fixed point. In order to approach the \(\chi\)SB regime, the bound-state fixed point has to be destabilized; otherwise, the system will remain in the symmetric regime as is the case in QED. In QCD, this destabilization arises from the increase of the gauge coupling towards the infrared [15]. From the third and last term of Eq. (30), it is obvious that an increasing gauge coupling lifts the \(\partial_{t}\tilde{\epsilon}\) parabola (see Fig. 2, dotted lines). For some value \(g_{\rm D}^{2}\) of the gauge coupling, the two fixed points in \(\tilde{\epsilon}\) will be degenerate, so that no fixed point exists at all for \(g^{2}>g_{\rm D}^{2}\). The beta function \(\partial_{t}\tilde{\epsilon}\) is then strictly positive, which drives the system towards the \(\chi\)SB regime. In the limit \(\epsilon\gg 1\), the critical gauge coupling of fixed-point degeneracy \(g_{\rm D}^{2}\) can be computed analytically, and we find: \[g_{\rm D}^{2}\simeq\frac{16}{3}\pi^{2}\frac{N_{\rm c}}{N_{\rm c}-1}\left(\sqrt{1+\frac{1}{N_{\rm c}+1}}-1\right)\simeq\frac{4}{3}\pi^{2}\frac{1}{C_{2}(N_{\rm c})}, \tag{32}\] where we have used linear cutoff functions [16] for which \(l_{1,2}^{\rm(FB),4}=3/2\). For instance, for SU(3) we get \(\alpha_{\rm D}=\frac{g_{\rm D}^{2}}{4\pi}\simeq\frac{\pi}{4}\), which is in the nonperturbative domain, as expected.\({}^{5}\) As soon as \(g^{2}\) exceeds \(g_{\rm D}^{2}\), the running of the scalar couplings is no longer protected by the bound-state fixed point. Here all couplings are expected to run fast, being strongly influenced by the details of the increase of the gauge coupling. Of course, owing to strong coupling, many higher-order operators can acquire large anomalous dimensions and contribute to the dynamics of the symmetry-breaking transition. Our truncation should be understood as the minimal lowest-order approximation in this regime, but gives already a remarkably consistent (but not necessarily complete) picture.
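As a quick numerical cross-check of Eq. (32), the short script below evaluates both the exact expression and its simplified second form, together with \(\alpha_{\rm D}=g_{\rm D}^{2}/(4\pi)\); nothing beyond the formulas displayed above enters.

```python
import math

def C2(Nc):
    # quadratic Casimir of the fundamental representation of SU(Nc)
    return (Nc**2 - 1) / (2 * Nc)

def gD2_exact(Nc):
    # first form of Eq. (32)
    return 16/3 * math.pi**2 * Nc / (Nc - 1) * (math.sqrt(1 + 1/(Nc + 1)) - 1)

def gD2_approx(Nc):
    # second form of Eq. (32)
    return 4/3 * math.pi**2 / C2(Nc)

for Nc in (2, 3, 4, 5):
    print(f"Nc = {Nc}: gD^2 = {gD2_exact(Nc):.3f} ~ {gD2_approx(Nc):.3f}, "
          f"alpha_D = {gD2_exact(Nc) / (4 * math.pi):.3f}")
print(f"pi/4 = {math.pi / 4:.3f}")
```

For \(N_{\rm c}=3\), the two forms agree at the ten-percent level, and \(\alpha_{\rm D}\simeq 0.74\) indeed lies close to \(\pi/4\), well inside the nonperturbative domain.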
Once chiral symmetry is broken, the fermions decouple and the fermionic and (most of the) scalar flow essentially stops. The scenario discussed here finally explains why the IR values of the scalar and fermionic couplings inherit their order of magnitude from the QCD scale \\(\\Lambda_{\\rm QCD}\\) as they should, whereas particularly the details of the scalar sector at the UV scale are of no relevance, owing to the fixed-point structure inducing QCD universality. ## 4 Numerical results In the following, we concentrate on the set of theories that belong to the QCD universality class. In order to illustrate how universality arises from the presence of the bound-state fixed point, we initiate our flows at a GUT-like scale of \\(\\Lambda=10^{15}\\)GeV, where the gauge coupling is weak and increases only logarithmically towards the infrared. Therefore, the bound-state fixed point exists over a wide range of scales. As discussed before, hardly any dependence on the specific initial values for the scalar potential and the Yukawa coupling remains because of the fixed point, as we will demonstrate quantitatively in the following. For illustrative purposes, we concentrate here on QCD-like scenarios where the scalar is auxiliary at the UV scale, and explore this parameter space using the natural choice given by Eq. (8) as a reference; to be precise, we use the reference set, \\[\\left.\\bar{m}^{2}\\right|_{k=\\Lambda}=\\Lambda^{2},\\quad\\bar{\\lambda }_{\\phi}\\big{|}_{k=\\Lambda}=0,\\quad Z_{\\phi}\\big{|}_{k=\\Lambda}=10^{-8},\\quad \\bar{h}^{2}\\big{|}_{k=\\Lambda}=10^{-12},\\] \\[\\Leftrightarrow \\epsilon\\big{|}_{\\Lambda}=10^{8},\\quad\\lambda_{\\phi}|_{\\Lambda} =0,\\quad h|_{\\Lambda}=10^{-2}, \\tag{33}\\] in our numerical studies. In all computations, we use linear cutoff functions proposed in [16] for which the threshold functions can be determined analytically (see App. A). We plot the flows of the renormalized dimensionless couplings \\(\\epsilon\\), \\(h\\) and \\(\\lambda_{\\phi}\\) in Fig. 3 for the symmetric regime. The reference set (33) is depicted as solid lines, whereas the dashed and dotted lines correspond to initial values which deviate from the reference set (33) by many orders of magnitude for the corresponding couplings. As long as we start in the range of attraction of the bound-state fixed point, we can obviously vary the initial values for the scalar couplings over many orders of magnitude without any appreciable effect. The system quickly approaches the bound-state fixed point, where the initial values of the couplings become unimportant. In particular, the scalar mass, which is allowed to be of the order of the cutoff or even much larger at \\(k=\\Lambda\\), runs to small values \\(\\sim k\\) while the system is governed by the bound-state fixed point. No fine-tuning is necessary for this.6 Let us stress once more that these features of universality are not restricted to the reference set (33) and the variations thereof. They can also be found in Yukawa models with a fundamental scalar (\\(Z_{\\phi}|_{k=\\Lambda}=1\\)) and even if we start in the broken regime at the UV scale (see Fig. 1). At the bound-state fixed point, the couplings are modulated only by the logarithmically slow increase of the gauge coupling. Incidentally, the modulation of \\(\\tilde{\\epsilon}=\\epsilon/h^{2}\\) is completely carried by \\(h\\), whereas \\(\\epsilon\\) stays fixed. This agrees with our analytical fixed-point values found in Eq. (31). A rapid change for the couplings in Fig. 
3 is visible after \(g^{2}\) exceeds \(g_{\rm D}^{2}\) and the bound-state fixed point has disappeared (\(t_{10}\lesssim 1\)). The behavior of the system changes rapidly after the gauge coupling has grown large. For \(g^{2}>g_{\rm D}^{2}\), the bound-state fixed point vanishes and all couplings start to run fast. The system necessarily runs into the \(\chi\)SB regime where the scalars develop a vacuum expectation value and the fermions acquire a mass \[m_{\rm f}^{2}=\lim_{k\to 0}k^{2}\,\kappa h^{2}\equiv(h\sigma_{\rm R})^{2}, \tag{34}\] where \(\sigma_{\rm R}=\lim_{k\to 0}\sqrt{Z_{\phi}\,\rho_{0}}\) denotes the renormalized expectation value of the scalar field. This leads to a decoupling of the fermions, and, consequently, fermion-boson translation is "switched off". Also the flow of the Yukawa coupling stops, the scalar and fermion anomalous dimensions approach zero, and \(\kappa\) runs according to its trivial mass scaling, \(\kappa\sim 1/k^{2}\), so that \(m_{\rm f}\) approaches a constant value. Whereas the qualitative picture is rather independent of the details of the running gauge coupling, quantitative results are highly sensitive to the flow of the gauge sector. This is because a finite amount of "RG time" passes from the disappearance of the bound-state fixed point to the transition into the \(\chi\)SB regime. In between, the running of the gauge coupling exerts a strong influence on all other couplings which are no longer protected by any fixed point. A purely perturbative running of the gauge coupling turns out to be insufficient for the present purpose, since the (unphysical) Landau pole destabilizes the system in the infrared.

Figure 3: Flow of \(\epsilon\), \(h\) and \(\lambda_{\phi}\) in the symmetric regime according to Eqs. (16), (17), (20), and (25). The solid lines correspond to the reference set (33), whereas the dotted and dashed lines represent the flows for strongly differing initial values as indicated. The insensitivity with respect to the choice of initial conditions is clearly visible. On the horizontal axis, the exponent \(t_{10}\) is used for the scale \(k=10^{t_{10}}\)GeV.

For definiteness, let us consider a running coupling governed by the beta function \[\partial_{t}g^{2}=\beta_{g^{2}}=\eta_{\rm F}g^{2} = -2\left(b_{0}\,\frac{g^{4}}{16\pi^{2}}+b_{1}\,\frac{g^{6}}{(16\pi^{2})^{2}}\right)\left[1-\exp\left(\frac{1}{\alpha_{*}}-\frac{4\pi}{g^{2}}\right)\right]^{s}, \tag{35}\] \[b_{0} = \frac{11}{3}N_{\rm c}-\frac{2}{3}N_{\rm f},\quad b_{1}=\frac{34}{3}N_{\rm c}^{2}-\frac{10}{3}N_{\rm c}N_{\rm f}-2C_{2}(N_{\rm c})N_{\rm f}\] for our numerical studies. In the UV, this beta function exhibits an accurate two-loop perturbative behavior, whereas the coupling runs to a fixed point \(\alpha_{\rm s}\equiv g^{2}/(4\pi)\to\alpha_{*}\) in the IR for \(k\to 0\). In the first place, the infrared fixed point is convenient for numerical purposes, since it does not lead to artificial IR instabilities. Moreover, an infrared fixed point for a mass-scale-dependent running coupling is compatible with the expected mass gap in Yang-Mills theory. Below this mass gap, all gauge field fluctuations decouple from the flow and can no longer drive the flow of the coupling. Different beta functions with and without infrared fixed points are studied in Appendix B.
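As an illustration of how Eq. (35) interpolates between accurate two-loop running in the UV and the infrared fixed point, the sketch below integrates the flow of \(\alpha_{\rm s}=g^{2}/(4\pi)\) from the \(Z\)-boson mass towards \(k\to 0\). We assume the exponent \(s=1\) (its value is not needed for the qualitative picture) and use a simple Euler stepping.

```python
import math

Nc, Nf, alpha_star = 3, 1, 2.5
b0 = 11/3 * Nc - 2/3 * Nf                                    # = 31/3
b1 = 34/3 * Nc**2 - 10/3 * Nc * Nf - (Nc**2 - 1) / Nc * Nf   # 2 C2(Nc) Nf = (Nc^2-1)/Nc * Nf

def beta_g2(g2, s=1.0):
    # Eq. (35): two-loop running times the IR-regularizing bracket
    alpha = g2 / (4 * math.pi)
    two_loop = -2 * (b0 * g2**2 / (16 * math.pi**2)
                     + b1 * g2**3 / (16 * math.pi**2)**2)
    return two_loop * (1 - math.exp(1 / alpha_star - 1 / alpha))**s

# start at k = M_Z with alpha_s(M_Z) ~ 0.117 and flow towards the infrared
t, g2, dt = math.log(91.2), 0.117 * 4 * math.pi, -1e-4   # t = ln(k/GeV)
while t > math.log(0.01):                                # down to k = 10 MeV
    g2 += beta_g2(g2) * dt
    t += dt

print("alpha_s(k = 10 MeV) =", round(g2 / (4 * math.pi), 4))   # plateau near alpha_* = 2.5
```

The coupling follows the perturbative two-loop trajectory down to a few GeV and then saturates at \(\alpha_{*}\), instead of running into the perturbative Landau pole.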
It turns out that, though the infrared properties such as the constituent quark mass depend quantitatively on the choice of the beta function as expected, the universal features discussed in the following remain untouched. This underlines our observation that the detailed understanding of the flow for the region of strong gauge coupling is not essential for the overall picture. In combination with Eq. (35), the system of flow equations is now closed and provides us with an answer for the (truncated) quantum effective action, once we specify all parameters and initial values. We have investigated SU(\(N_{\rm c}=3\)) gauge theory with initial value \(g(\Lambda)\) chosen such that \(\alpha_{\rm s}\) acquires its physical value at the \(Z\)-boson mass, \(\alpha_{\rm s}(M_{Z})\simeq 0.117\). We work in the background Landau gauge, \(\xi=0\), which is known to be a fixed point of the renormalization flow in the gauge sector [17, 18]. If we had an exact flow equation at our disposal, this choice would fix the system completely. In our truncation, however, we have the parameter \(Q_{\sigma}^{0}\), in addition to the Yang-Mills beta function, which characterizes our ignorance of the exact flow. The quantity \(Q_{\sigma}^{0}\), measuring the momentum suppression of the four-fermion interaction, will be set to \(Q_{\sigma}^{0}=-0.1\), in agreement with our considerations given above. It turns out that the infrared properties of the system are only weakly dependent on this parameter and on \(\xi\) (see below), which substantiates our truncation. Furthermore, we choose \(\alpha_{*}\) to be of order 1, but not too close to \(g_{\rm D}^{2}/(4\pi)\) in order to avoid pathologies: \(\alpha_{*}=2.5\). For this concrete scenario, the transition to the \(\chi\)SB regime occurs at \(k_{\chi\rm SB}\simeq 423\)MeV. The renormalized scalar mass slightly above \(k_{\chi\rm SB}\) and the VEV of the scalar field below \(k_{\chi\rm SB}\) are depicted in Fig. 4 (left panel). According to Eq. (34), we find a constituent quark mass of \(m_{\rm f}\simeq 371\)MeV as shown in Fig. 4 (right panel). Of course, these numbers depend strongly on the details of the Yang-Mills beta function for strong coupling \(\alpha_{\rm s}\sim 1\); various other examples are discussed in Appendix B. Finally, the running of \(\lambda_{\phi}\), \(h^{2}\) and the scalar and fermionic wave function renormalizations is collected in Fig. 5. Focusing on low-energy QCD-like aspects of our truncated system, it is also remarkable that (apart from the scalar couplings) the choice of \(Q_{\sigma}^{0}\) has little effect on infrared properties of the system: varying \(Q_{\sigma}^{0}\) between \(-0.5\) and \(-0.001\) changes \(k_{\chi\rm SB}\) or \(m_{\rm f}\) only at the level of less than 10%. This is reassuring and in contrast to the strong \(Q_{\sigma}^{0}\)-dependence of the bound-state fixed-point values of \(\epsilon_{*}\) and \(h_{*}\). The variations of the infrared properties are similarly small for changes in the gauge parameter in the interval \(\xi=0\ldots 2\). To summarize, a large class of QCD-like theories including a scalar degree of freedom belong to the QCD universality class owing to an attractive infrared fixed point present for weak gauge coupling. Even before the gauge coupling becomes strong, all theories in this universality class are indistinguishable at low energies.
They exhibit an identical approach to \(\chi\)SB which is triggered and quantitatively determined by the increase of the gauge coupling.

## 5 Instanton-mediated interactions, axial anomaly and the fate of the eta boson

Up to now, we have considered only that part of the model which has a global U\({}_{\rm A}\)(1) symmetry corresponding to simultaneous axial phase rotations of the scalars and fermions. In QCD, this symmetry is anomalously broken by the presence of gauge-field configurations of nontrivial topology. For instance, instantons induce fermion interactions which break this symmetry. In an instanton-anti-instanton background, the \(N_{\rm f}=1\) interaction is mass-like and can be expressed as [19] \[{\cal L}_{\rm I+A} = \int_{0}^{\bar{f}_{\rm c}(k,m_{\rm f})}\frac{d\varrho}{\varrho^{5}}\,d_{0}^{N_{\rm c}}(\varrho)\,C_{\rm E}(N_{\rm c})\,(2\pi^{2}\varrho^{3})\,\left(\frac{\alpha(1/\varrho)}{\alpha(\bar{\mu})}\right)^{-4/b_{0}}\,(\bar{\psi}_{\rm R}\psi_{\rm L}-\bar{\psi}_{\rm L}\psi_{\rm R}), \tag{36}\] \[d_{0}^{N_{\rm c}}(\varrho):=\frac{4.6\,e^{-1.68N_{\rm c}}}{\pi^{2}(N_{\rm c}-1)!(N_{\rm c}-2)!}\left(\frac{2\pi}{\alpha_{\rm s}(1/\varrho)}\right)^{2N_{\rm c}}e^{-\frac{2\pi}{\alpha_{\rm s}(1/\varrho)}},\] where \(C_{\rm E}(N_{\rm c})\) is a color factor that arises from averaging over all possible embeddings of \(SU(2)\) into \(SU(N_{\rm c})\), e.g., \(C_{\rm E}(2)=1\), \(C_{\rm E}(3)=2/3\), and \(\bar{\mu}=1\)GeV is the renormalization scale for the fermion fields.

Figure 4: Flow of the scalar mass \(m\), the scalar VEV \(\sigma_{\rm R}\), and the constituent quark mass \(m_{\rm f}\) close to and in the \(\chi\)SB regime, using the reference set (33). For the particular choice for the running of the gauge coupling according to Eq. (35) with \(\alpha_{*}=2.5\), the transition occurs at \(k_{\chi{\rm SB}}\simeq 423\)MeV.

Note that we introduced an IR cutoff function \(\bar{f}_{\rm c}(k,m_{\rm f})\) in the upper bound of the instanton radius \(\varrho\) integration. This function should cut off the contribution from all modes with momenta either below \(k\) or the generated fermion mass \(m_{\rm f}\), and thereby implements the renormalization group formulation of this interaction in a simple manner. The \(\varrho\) integration is UV finite for \(\varrho\to 0\) owing to asymptotic freedom, and the infrared (\(\varrho\to\infty\)) is controlled by the cutoff \(\bar{f}_{\rm c}\) and by the increase of the coupling. In the following, we intend to include this interaction as it stands, as an example of a \(\rm U_{A}(1)\)-violating term. Contrary to standard instanton-based models [20], we do not employ further information about, e.g., average instanton sizes and separations or other assumptions about the vacuum state of the gauge field. For this, we note that Eq. (36) already corresponds to an integrated flow, \({\cal L}_{\rm I+A}=\bar{m}_{\rm I+A}\left(\bar{\psi}_{\rm R}\psi_{\rm L}-\bar{\psi}_{\rm L}\psi_{\rm R}\right)\), where the flow of the induced mass \(\bar{m}_{\rm I+A}\) is given by\({}^{7}\)

Footnote 7: A more rigorous treatment of anomalous \(\rm U_{A}(1)\) breaking within the flow equation formalism has been suggested in [21].
\\[\\partial_{t}\\bar{m}_{\\rm I+A}=2\\pi^{2}Z_{\\psi}\\left[d_{0}^{N_{\\rm c}}(\\varrho) \\,C_{\\rm E}(N_{\\rm c})\\,\\left(\\frac{\\alpha(1/\\rho)}{\\alpha(\\bar{\\mu})}\\right) ^{-4/b_{0}}\\right]_{\\varrho=\\bar{f}_{\\rm c}(k,m_{\\rm f})}\\partial_{t}\\bar{f}_{ \\rm c}(k,m_{\\rm f}), \\tag{37}\\] Figure 5: Flow of \\(\\lambda_{\\phi}\\), \\(h^{2}\\), and the wave function renormalizations \\(Z_{\\phi}\\) and \\(Z_{\\psi}\\) over the complete range of scales for the reference set (33). The rapid change of all couplings near \\(t_{10}=\\log_{10}k_{\\chi\\rm SB}\\,/\\Lambda\\simeq-0.5\\) is visible. Whereas \\(h^{2}\\), \\(Z_{\\phi}\\) and \\(Z_{\\psi}\\) approach fixed points in the deep infrared owing to decoupling, \\(\\lambda_{\\phi}\\) decreases logarithmically owing to a massless “eta” in absence of the axial anomaly. with the initial condition \\(\\bar{m}_{\\rm l+A}(k=\\Lambda\\to\\infty)\\to 0\\). For consistency, we also included here the fermion wave function renormalization \\(Z_{\\psi}\\), which was not taken into account in Eq. (36) as derived in [19]. Since \\(\\bar{f}_{\\rm c}\\) has mass dimension -1, an appropriate choice is given by \\[\\bar{f}_{\\rm c}(k,m_{\\rm f})=\\frac{1}{k}\\,f_{\\rm c}(\\kappa h^{2}),\\quad\\mbox{ with }f_{\\rm c}(0)=1,\\,\\,f_{\\rm c}(\\kappa h^{2})\\big{|}_{\\kappa h^{2}\\to\\infty}\\!\\!\\to \\frac{1}{\\sqrt{\\kappa h^{2}}}, \\tag{38}\\] such that \\(\\bar{f}_{\\rm c}(0,m_{\\rm f})=1/m_{\\rm f}\\). For our numerical solutions, we will use \\(f_{\\rm c}(x)=(1+x)^{-1/2}\\) for simplicity. With these definitions, we can rewrite Eq. (37) as \\[\\partial_{t}\\bar{m}_{\\rm l+A}=-2\\pi^{2}Z_{\\psi}\\,\\frac{k}{f_{\\rm c}}\\,d_{0}^{ N_{\\rm c}}(f_{\\rm c}/k)\\,C_{\\rm E}(N_{\\rm c})\\,\\left(\\frac{\\alpha(k/f_{\\rm c})}{ \\alpha(\\bar{\\mu})}\\right)^{-4/b_{0}}\\left(1+\\frac{(-f_{\\rm c}^{\\prime})}{f_{ \\rm c}}\\,\\partial_{t}(\\kappa h^{2})\\right), \\tag{39}\\] where \\(f_{\\rm c}=f_{\\rm c}(\\kappa h^{2})\\), and the prime denotes a derivative. Now we could repeat the calculation of the flow equations of Sect.2 including this fermion mass term in the propagator. In this way, however, we would induce a number of \\({\\rm U_{A}}(1)\\) noninvariant fermion-fermion and fermion-scalar couplings which complicate the calculation unnecessarily. Instead, we propose a generalization of the field transformation (23) which serves to translate the instanton-induced interaction into the scalar sector: \\[\\partial_{t}\\phi_{k}(q) = -(\\bar{\\psi}_{\\rm L}\\psi_{\\rm R})(q)\\,\\partial_{t}\\alpha_{k}(q)+ \\phi_{k}(q)\\,\\partial_{t}\\beta_{k}(q)+\\partial_{t}\\gamma_{k}+(\\phi_{k}^{*} \\phi_{k})\\phi_{k}\\,\\partial_{t}\\delta_{k},\\] \\[\\partial_{t}\\phi_{k}^{*}(q) = (\\bar{\\psi}_{\\rm R}\\psi_{\\rm L})(-q)\\,\\partial_{t}\\alpha_{k}(q)+ \\phi_{k}^{*}(q)\\,\\partial_{t}\\beta_{k}(q)+\\partial_{t}\\gamma_{k}+(\\phi_{k}^{* }\\phi_{k})\\phi_{k}^{*}\\,\\partial_{t}\\delta_{k}, \\tag{40}\\] with additional a priori arbitrary functions \\(\\gamma_{k}\\) and \\(\\delta_{k}\\), whereas \\(\\alpha_{k}\\) and \\(\\beta_{k}\\) are those of Sect. 2. The term \\(\\sim\\partial_{t}\\gamma_{k}\\) corresponds to a \\({\\rm U_{A}}(1)\\) violating shift of the scalar field which can compensate for the instanton-induced fermion mass. 
The flow of \\(\\bar{m}_{\\rm l+A}\\) is now given by \\[\\partial_{t}\\bar{m}_{\\rm l+A}=\\partial_{t}\\bar{m}_{\\rm l+A}\\big{|}_{\\phi_{k}}+ \\bar{h}\\,\\partial_{t}\\gamma_{k}-\\frac{1}{2}\\bar{\ u}\\,\\partial_{t}\\alpha_{k}, \\tag{41}\\] where the second and third terms arise from the transformation of the Yukawa interaction and the last term in Eq. (6), respectively. Now we can determine \\(\\gamma_{k}\\) such that \\(\\partial_{t}\\bar{m}_{\\rm l+A}=0\\) holds on all scales. In this way, the instanton interaction does not affect \\(\\bar{m}_{\\rm l+A}\\) (which vanishes on all scales), but is translated into the scalar sector and contributes to the running of \\(\\bar{\ u}\\). In the point-like limit (\\(q^{2}=0\\)), we find: \\[\\partial_{t}\\bar{\ u}=-2\\bar{m}^{2}\\,\\partial_{t}\\gamma_{k}+\\bar{\ u}\\, \\partial_{t}\\beta_{k}. \\tag{42}\\] Introducing the dimensionless renormalized quantity \\[\ u=\\frac{\\bar{\ u}}{Z_{\\phi}^{1/2}k^{3}},\\quad\\Rightarrow\\quad\ u_{\\rm R}=k^ {3}\\,\ u, \\tag{43}\\] where \\(\ u_{\\rm R}\\) denotes the renormalized (dimensionful) value, we finally arrive at \\[\\partial_{t}\ u = -\\left(3-\\frac{\\eta_{\\phi}}{2}\\right)\ u-4\\pi^{2}\\frac{\\epsilon }{h}\\,d_{0}^{N_{\\rm c}}(f_{\\rm c}/k)\\,C_{\\rm E}(N_{\\rm c})\\,\\left(\\frac{\\alpha( k/f_{\\rm c})}{\\alpha(\\bar{\\mu})}\\right)^{-4/b_{0}}\\frac{1}{f_{\\rm c}}\\left(1+ \\frac{(-f_{\\rm c}^{\\prime})}{f_{\\rm c}}\\,\\partial_{t}(\\kappa h^{2})\\right) \\tag{44}\\] \\[+\\frac{\ u}{h^{2}}\\big{(}1+(1+\\epsilon)^{2}Q_{\\sigma}\\big{)}\\big{(} \\beta_{\\lambda_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\lambda_{\\sigma}}^{h^{4}}\\,h^{4 }\\big{)},\\]which describes the running of the axial anomaly in the instanton approximation. The shift \\(\\sim\\partial_{t}\\gamma_{k}\\) induces another \\(\\mathrm{U_{A}}(1)\\) violating term \\((\\phi^{*}\\phi)(\\phi^{*}+\\phi)\\) via the transformation of the \\(\\lambda_{\\phi}(\\phi^{*}\\phi)^{2}\\) term. This can be cancelled by an appropriate choice of the last transformation function \\(\\delta_{k}\\) in Eq. (40), which has to satisfy \\[\\bar{\\lambda}_{\\phi}\\,\\partial_{t}\\gamma_{k}-\\frac{1}{2}\\bar{\ u}\\,\\partial_{t }\\delta_{k}=0. \\tag{45}\\] Finally, the terms \\(\\sim\\delta_{k}\\) in Eq. (40) influence the running of \\(\\lambda_{\\phi}\\) via the transformation of the scalar mass term. The modified flow equation for \\(\\lambda_{\\phi}\\) reads: \\[\\partial_{t}\\lambda_{\\phi} = \\partial_{t}\\lambda_{\\phi}\\big{|}_{\\phi_{k}}+4\\frac{\\lambda_{ \\phi}}{h^{2}}\\big{(}1+2\\epsilon+(1+\\epsilon)^{2}Q_{\\sigma}\\big{)}\\big{(}\\beta_ {\\bar{\\lambda}_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\bar{\\lambda}_{\\sigma}}^{h^{4} }\\,h^{4}\\big{)} \\tag{46}\\] \\[+16\\pi^{2}\\frac{\\epsilon\\lambda_{\\phi}}{\ u h}\\,d_{0}^{N_{\\rm c} }(f_{\\rm c}/k)\\,C_{\\rm E}(N_{\\rm c})\\,\\left(\\frac{\\alpha(k/f_{\\rm c})}{\\alpha (\\bar{\\mu})}\\right)^{-4/b_{0}}\\,\\frac{1}{f_{\\rm c}}\\left(1+\\frac{(-f_{\\rm c}^{ \\prime})}{f_{\\rm c}}\\,\\partial_{t}(\\kappa h^{2})\\right).\\] These equations are valid in the symmetric regime with similar equations for the \\(\\chi\\)SB regime displayed in appendix C. Strictly speaking, the system is never in the symmetric regime, since chiral symmetry is always broken implicitly by a nonzero \\(\ u\\) term which induces a nonzero VEV \\(\\sigma_{0}\\) for the scalar field. 
For instance, rotating the VEV into the real component, \(\phi=\sigma_{0}=\phi^{*}\), \(\sigma_{0}=\sqrt{\rho_{0}}\), the location of the minimum obeys \[0=U^{\prime}(\rho_{0})=\bar{m}^{2}+\bar{\lambda}_{\phi}\rho_{0}-\frac{\bar{\nu}}{2\sqrt{\rho_{0}}}\quad\Rightarrow\quad 0=\epsilon+\kappa\lambda_{\phi}-\frac{1}{2}\frac{\nu}{\sqrt{\kappa}}. \tag{47}\] Obviously, \(\kappa=0\) is not allowed if \(\nu\neq 0\), owing to the linear term in \(\phi\) in Eq. (6). The running of the minimum can be inferred from \[0=\partial_{t}U^{\prime}(\rho_{0})\big{|}_{\rho}=U^{\prime\prime}(\rho_{0})\,\partial_{t}\rho_{0}+\partial_{t}U^{\prime}(\rho_{0})\big{|}_{\rho_{0}}\quad\Rightarrow\quad\partial_{t}\rho_{0}=-\frac{1}{\bar{\lambda}_{\phi}+\frac{\bar{\nu}}{4}\rho_{0}^{-3/2}}\,\partial_{t}U^{\prime}(\rho_{0})\big{|}_{\rho_{0}}. \tag{48}\] Since the instanton-induced terms are exponentially small for the major part of the flow, the minimum of the potential is actually very close to zero, and the equations for the symmetric regime of Sect. 2 can be used up to tiny corrections. The solution of the flow equations is numerically difficult with an exponentially small \(\kappa\) in the broken regime. Therefore, we decide to solve the flow equations for large enough \(k\) in the symmetric-regime formulation. In this regime, \(\nu\) evolves according to Eq. (44) with only a subdominant coupling to the other flow equations via Eq. (46). Then we switch to the broken-regime description at that scale where the instanton-induced fermion mass \(m_{\rm f}\) is of the order of a few MeV; this procedure induces an error only at the per-mille level and turns out to be insensitive to the details of the switching scale. We have analyzed the flow equations including the instanton-mediated interaction numerically and used the reference set of initial conditions as defined in Sect. 4 (see Eq. (33)) for a direct comparison. As expected, most properties of the system are unaffected by the instantons, while the system is governed by the bound-state fixed point. Here the instanton-induced effects are exponentially suppressed, since the coupling is small. In particular, the running of the scalar mass \(\epsilon\) and the Yukawa coupling are identical to the ones displayed in Fig. 3, and the universality properties discussed in Sect. 4 remain unaffected. The renormalized axial anomaly \(\nu_{\rm R}\) is plotted in Fig. 6 (left panel). It remains exponentially small for a large part of the flow and becomes of order \(({\rm GeV})^{3}\) and larger only in the strong-gauge-coupling regime. Here, however, it contributes strongly to the VEV of the scalar field and consequently to the constituent quark mass which leads to the decoupling of the fermions. We observe a rather smooth onset of fermion-mass generation. Furthermore, the constituent quark mass is strongly enhanced by the instanton interactions. For the reference set, we find \(m_{\rm f}=1765{\rm MeV}\) in the infrared limit \(k\to 0\). Again, this number depends strongly on the precise choice of the running gauge coupling in the infrared, and a number of other possibilities including instanton effects are listed in Appendix B. Let us finally discuss the fate of the "would-be" Goldstone boson, which we may call the eta boson in the style of real QCD.
Neglecting the axial anomaly, this boson arises from spontaneous breakdown of the global \({\rm U}_{\rm A}(1)\) as a true massless Goldstone boson; its effects on the scalar sector even after \(\chi\)SB are visible in the logarithmic running of the scalar \(\phi^{4}\) coupling \(\lambda_{\phi}\) as can be seen in Fig. 5. The \({\rm U}_{\rm A}(1)\) anomaly, however, generates a mass of the eta boson. In the present formulation, the \({\rm U}_{\rm A}(1)\) anomaly occurs as the \(\bar{\nu}\) term in the scalar potential (6). Its contribution to the renormalized eta mass can be computed as \[m_{\eta}^{2}=\frac{\nu_{\rm R}}{2\sigma_{\rm R}}. \tag{49}\] Within the above-given framework of instanton-mediated interactions, we find for the eta boson mass in the QCD universality class a value of \(m_{\eta}\simeq 4440{\rm MeV}\). Of course, this value also strongly depends on the choice of the running of the gauge coupling and should be used only for comparison with other masses computed for the same running gauge coupling. In particular, we find roughly the ratio \(m_{\eta}/m_{\rm f}\simeq 2.5\).

Figure 6: Left panel: axial anomaly \(\nu_{\rm R}\) in the vicinity of the scale of fermion decoupling. Right panel: instanton-induced fermion mass. Both plots refer to the reference set (33) and the particular choice for the running of the gauge coupling according to Eq. (35) with \(\alpha_{*}=2.5\). Comparison with Fig. 4 shows that the fermion mass is dominated by instanton effects.

This scenario giving rise to a heavy mass of a would-be Goldstone boson is familiar from three-flavor QCD. By contrast, the fate of the eta boson is more spectacular if we go beyond the border of the QCD domain to that of \({\rm P}\chi{\rm SB}\), corresponding to a choice of \(\bar{m}_{\Lambda}^{2}<\bar{m}_{\rm c}^{2}\) in Fig. 1 or \(\tilde{\epsilon}_{\Lambda}<\tilde{\epsilon}_{1}^{*}\) in Fig. 2. Here, the VEV of the scalar field is generically of the order of the cutoff \(\Lambda=10^{15}\)GeV. At the same time, the fermions rapidly become massive and decouple from the flow only a little below \(\Lambda\). As a consequence, instanton contributions or other long-distance topological properties have little effect on the fermion sector and thus the axial anomaly exerts hardly any influence on the scalars. As a result, the contributions to the eta mass are strongly suppressed - powerlike in the denominator and exponentially in the numerator. For instance, for the set of initial parameters corresponding to Fig. 1 (right panel) with \(\bar{m}_{\Lambda}^{2}\) slightly below \(\bar{m}_{\rm c}^{2}\), we find an extremely small eta mass, \(m_{\eta}\simeq 2\cdot 10^{-30}\)eV. For smaller \(\bar{m}_{\Lambda}^{2}\), the eta mass decreases even further, and larger eta masses require a tremendous fine-tuning of \(\bar{m}_{\Lambda}^{2}\) close to \(\bar{m}_{\rm c}^{2}\). In this scenario beyond the QCD universality class, we have thus found a mechanism to generate extremely small masses without any fine-tuning. From another perspective, this mechanism exploits the fundamentally different RG properties of scalars and chiral gauge theories. For systems in the universality class of \({\rm P}\chi{\rm SB}\), the \(\chi{\rm SB}\) scale of the scalar sector is generically of the order of the UV scale, whereas the nonperturbative scale of the gauge sector can be much smaller.
Now the mass of the would-be Goldstone boson is generated by the nonperturbative sector of the gauge theory which is exponentially suppressed at the UV scale. This interplay finally leads to the generation of the extremely small mass.

## 6 Conclusions

In this work, we studied a class of theories involving one-flavor massless QCD and a chiral color-singlet scalar field. Our model is parametrized by the gauge coupling and a number of scalar couplings. In this framework, we identified the QCD universality class of theories which share the same physics at low energies, namely spontaneous breaking of chiral symmetry triggered by the strongly interacting gauge sector at the QCD scale. As a remarkable result, the QCD universality class contains theories with fundamental scalars where the microscopic scalar potential has its minimum at nonzero field (\(\bar{m}_{\Lambda}^{2}>\bar{m}_{\rm c}^{2}\sim-{\cal O}(\Lambda^{2})\)). For these theories, the scalar fluctuations drive the system first into the symmetric regime with a large positive scalar mass, and the remaining flow is governed by the QCD sector. We checked explicitly that this is in accord with perturbative expectations for weak couplings (cf. Fig. 1, right panel). The mechanism that establishes QCD universality is the occurrence of an infrared attractive bound-state fixed point in the scalar couplings which persists over a wide range of scales as long as the gauge coupling is weak. At this fixed point, the scalar field exhibits quark-antiquark bound-state behavior and the RG running of the scalar couplings is governed by the RG behavior of QCD. All memory of the scalar initial conditions is lost by the system. As a remarkable consequence, the scalar mass is not a relevant operator at this fixed point. For increasing gauge coupling, the bound-state fixed point is destabilized and the system runs towards the \(\chi\)SB regime. Here the role of the scalar field changes and it can characterize (quark) condensates and (mesonic) excitations on top of the condensate. At strong coupling, the simple overall picture of \(\chi\)SB arising from our truncation can, of course, be modified quantitatively as well as qualitatively by the influence of higher-order operators. In particular, mixed non-minimal fermion-gluon and scalar-gluon operators might add new features to \(\chi\)SB by providing a coupling to the nontrivial gluonic vacuum structure. Beyond the QCD universality class, we find the class of theories exhibiting perturbative spontaneous chiral symmetry breaking (P\(\chi\)SB). In this class, the system is mainly driven by the scalar sector, and IR properties such as condensates and generated fermion masses depend strongly on the initial scalar parameters. The gauge sector exerts hardly any influence on the fermions in this class unless the scalar parameters are fine-tuned to a high precision. In the deep IR, pure gluodynamics without dynamical quarks remains. The flow of the scalar couplings is never in the attractive domain of the bound-state fixed point, but is governed by a fundamental-particle fixed point. Small deviations from this fixed point have an infrared unstable component which corresponds to the RG relevant scalar-mass operator. In both universality classes, we found interesting implications.
Our setup of the QCD universality class admits a resolution of an old puzzle: whereas QCD has no fine-tuning problem and is completely determined by fixing the coupling at a certain scale, low-energy QCD models based on NJL-type fermion self-interactions depend strongly on additional parameters such as an intrinsic UV cutoff. In the context of partial bosonization, this cutoff-dependence corresponds to a strong dependence of IR observables on the bosonization scale (or the value of the scalar mass at this scale). In our approach with scale-dependent field transformations, partial bosonization occurs at all scales, and no artificial dependence on unphysical scales is introduced. In our truncation, QCD flows continuously from a high scale with quarks and gluons as the relevant degrees of freedom to intermediate scales with quarks, gluons and quark bound states and further to low scales with constituent quarks, condensates and mesons. In the P\(\chi\)SB universality class with one fermion flavor, we identified a natural mechanism for the generation of extremely small scalar masses without fine-tuning. The mechanism exploits the fact that the spontaneous breaking of the U\({}_{\rm A}(1)\) symmetry would lead to an exactly massless Goldstone boson in the absence of the gauge interactions. The axial anomaly in the gauge sector then endows this boson with a small mass. Owing to the highly different RG behavior of the scalar and the gauge sector, the scale of P\(\chi\)SB differs generically from the scale of nonperturbative gauge effects by many orders of magnitude. This leads to an exponential suppression of the influence of the axial anomaly and thus to an exponentially small but nonzero scalar mass. For theories with a fundamental scalar, the question arises as to whether our technique of fermion-boson translation is capable of describing all possible mesonic degrees of freedom. Let us first look at two extreme situations. For a large negative renormalized scalar mass term, perturbation theory applies: there is a fundamental scalar, and separately propagating meson states may not exist - similar to a very heavy top quark. For a positive renormalized mass term, the fundamental scalar decouples from the low-energy sector in perturbation theory. The low-energy sector then is QCD without scalars, as in our picture. The transition is less obvious: in the region where the fundamental scalar mass would perturbatively be of the order of the strong interaction scale, there is a strong mixing between operators corresponding to fundamental and composite scalars. In principle, in a situation with mixing, the propagator in the scalar sector may have one or several pole-like structures that can be associated with particle excitations. Our truncation cannot fully resolve this issue, since, by construction, it follows the flow of only one pole in the propagator. Our investigation shows the consistency of a picture with only one pole. If the true physical situation had two poles, our truncation would follow the flow of the lowest mass. We see, however, no indication that a second pole actually exists. Nevertheless, it seems worthwhile to discuss the possible implications of a second scalar "pole" for the issue of universality classes. First of all, for \(\bar{m}_{\Lambda}^{2}\) larger than but not in the immediate vicinity of \(\bar{m}_{\rm c}^{2}\), a "second pole" could correspond only to an additional heavy scalar particle.
This would decay at a high rate into the QCD mesons, since no quantum numbers forbid such a decay. (One expects at most a resonance rather than a true pole.) Furthermore, effects from the exchange of such a heavy scalar resonance would be suppressed by inverse powers of the mass and therefore play no role for the low-energy theory. This is what one usually understands by "QCD universality class" (a notion that is not thought to resolve the detailed short-distance physics). This issue becomes more interesting when \(\bar{m}_{\Lambda}^{2}\) is fine-tuned to the immediate vicinity of \(\bar{m}_{\rm c}^{2}\). In this case, we approach the _boundary_ of the QCD universality class. We emphasize that this boundary is not uniquely defined in terms of the symmetries and particles characteristic for the QCD universality class. Considering the QCD universality class from the viewpoint of a larger space of models or parameters, the spectrum of excitations that are relevant at the boundary can depend on the direction in parameter space from which the QCD universality class is approached. Different directions may yield a different "number of poles" in the boundary region. For this reason, a future, more detailed investigation of this issue would be quite interesting. We stress that all of our main conclusions can be drawn from a mere perturbative knowledge of the gauge sector which is well under control. In a broader sense, the pure QCD sector in our work can be regarded as a particular example for possible other (nonperturbatively) renormalizable theories leading to fermionic self-interactions in scalar channels. Let us finally discuss our findings from a different perspective, concentrating on the scalar sector. Scalar fields are known to lead to profound problems in quantum field theory for two reasons: triviality and (un-)naturalness. Triviality tells us that an interacting scalar theory requires a UV cutoff which cannot be removed without switching off the interaction. Therefore, whenever we see a scalar quantum field at some low scale, we know that there must be new physics at a higher scale. The problem of naturalness tells us that it is difficult to achieve a large separation of scales for models with interacting scalar fields without fine-tuning. Our formulation has the potential to solve both problems. A first example can be given within the QCD universality class. Although from a QCD perspective, the scalar field could be regarded as purely auxiliary, nothing prevents us from considering it as fundamental, since the concepts of compositeness and fundamentality are interchangeable from the viewpoint of our flow equation with field transformations. We showed in detail that "standard" QCD at low energies is indistinguishable from QCD with a fundamental scalar, as long as the latter system is in the QCD universality class. In this way, we can circumvent triviality by starting in the UV from a scalar field theory without self-interaction and Yukawa coupling for which the continuum limit can be taken trivially. The scalar interactions are induced by quantum fluctuations. In this construction, the system is always in the QCD universality class, and therefore inherits the number of relevant and marginal operators from QCD. In particular, the scalar mass term is not a relevant operator, so that no naturalness or fine-tuning problems arise in and from the scalar sector.
Alternatively, we could also follow the bound-state fixed point to \(k\to\infty\), where it presumably becomes an exact fixed point even beyond our truncation. (The \(\beta\) function for the running gauge coupling vanishes, owing to asymptotic freedom in this limit.) Perhaps more interesting is a second possibility in the P\(\chi\)SB class. Let us consider for \(k\to\infty\) a scalar model with \(Z_{\phi}\to 0\), \(\lambda_{\phi}\to 0\) and \(\bar{m}^{2}\), \(\bar{h}\) chosen such that \(\tilde{\epsilon}\) corresponds to the fixed point \(\tilde{\epsilon}_{1}^{*}\). This model has an alternative interpretation as a model with four-fermion interactions (and no scalar field). Both the gauge coupling and the critical four-fermion coupling \[\bar{\lambda}_{\sigma}^{*}=\frac{1}{2\tilde{\epsilon}_{1}^{*}k^{2}} \tag{50}\] vanish for \(k\to\infty\). If this fixed point persists beyond our truncation, it defines a nonperturbatively renormalizable theory [22]. For lower \(k\) a nonzero \(\lambda_{\phi}\) is generated by the flow and we end up with a theory that effectively looks like a model with an interacting fundamental scalar field. This scalar field can give mass to the quarks by P\(\chi\)SB independently of the strong interactions, in analogy to the Higgs scalar. The triviality problem could be solved in this case - but not the naturalness problem, since we expect a relevant parameter corresponding to the scalar mass term. This discussion sheds new light on the continuous transition between the P\(\chi\)SB and QCD universality classes. In the language of statistical physics, it can be considered as a type of crossover between the "fundamental fixed point" \(\tilde{\epsilon}_{1}^{*}\) and the "bound-state fixed point" \(\tilde{\epsilon}_{2}^{*}\). As a particularity, the gauge coupling is a marginal parameter for both fixed points. The scale where it becomes strong sets the lowest possible scale for the effective fermion masses. Quite generally, the existence of a bound-state-like fixed point leads to a mechanism with a naturally small scalar mass. In a sense, this is a realization of earlier ideas of a large anomalous mass dimension for the scalar field or "self-organized criticality" [23]. It would be interesting to know if a similar mechanism could contribute to an understanding of electroweak symmetry breaking, which occurs at a characteristic scale hundreds of times larger than that of QCD.

## Acknowledgement

The authors thank J. Jaeckel for valuable discussions. H.G. acknowledges financial support by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-1.

## Appendix A Threshold functions

The regularization scheme dependence induced by the cutoff function \(R_{k}\) is carried by the threshold functions \(l\) and \(m\). Let us represent the cutoff functions in the scalar, fermion and gauge sector by \[R_{k}^{\phi}(q^{2})=Z_{\phi}\,q^{2}r(y),\quad R_{k}^{\psi}(q)=-Z_{\psi}\,q\!\!\!/\,\,r_{\rm F}(y),\quad\bigl{(}R_{k}^{A}(q)\bigr{)}_{\mu\nu}=Z_{\rm F}q^{2}r(y)\,\biggl{(}g_{\mu\nu}-\left(1-\frac{1}{\xi}\right)\frac{q_{\mu}q_{\nu}}{q^{2}}\biggr{)},\] (A.1) where \(y=q^{2}/k^{2}\), and \(r\) and \(r_{\rm F}\) denote dimensionless cutoff shape functions. Furthermore, it is useful to introduce the inverse average propagators \(P(x)=x(1+r(x/k^{2}))\) and \(P_{\rm F}(x)=x(1+r_{\rm F}(x/k^{2}))^{2}\), where \(x=q^{2}\).
Most of the threshold functions given above are defined in Appendix A of [4]. The ones which cannot be found therein are marked with a tilde. These can be defined as follows: \[\tilde{m}_{1,1}^{({\rm FB}),d}(w_{\rm F},w_{\rm B};\eta_{\psi},\eta_{\phi})\] \[\quad=-\frac{1}{2}k^{4-d}\int_{0}^{\infty}dx\,x^{d/2-1}\,\tilde{\partial}_{t}\left[\frac{1+r_{\rm F}(x/k^{2})}{P_{\rm F}(x)+k^{2}w_{\rm F}}\frac{1}{P(x)+k^{2}w_{\rm B}}\right],\] (A.2) \[\tilde{l}_{1,2}^{({\rm FB}),d}(w_{\rm F},w_{\rm B};\eta_{\psi},\eta_{\rm B})\] \[\quad=-\frac{1}{2}k^{6-d}\int_{0}^{\infty}dx\,x^{d/2-1}\,\tilde{\partial}_{t}\left[\frac{P_{\rm F}(x)}{(P_{\rm F}(x)+k^{2}w_{\rm F})^{2}}\frac{1}{(P(x)+k^{2}w_{\rm B})^{2}}\right],\] (A.3) \[\tilde{l}_{1,1,1}^{({\rm FBB}),d}(w_{\rm F},w_{\rm B1},w_{\rm B2};\eta_{\psi},\eta_{\rm B})\] \[\quad=-\frac{1}{2}k^{6-d}\int_{0}^{\infty}dx\,x^{d/2-1}\,\tilde{\partial}_{t}\left[\frac{P_{\rm F}(x)}{(P_{\rm F}(x)+k^{2}w_{\rm F})^{2}}\frac{1}{P(x)+k^{2}w_{\rm B1}}\frac{1}{P(x)+k^{2}w_{\rm B2}}\right],\] (A.4) where \(\eta_{\rm B}\) denotes one of the anomalous dimensions of the bosonic propagators under consideration, \(\eta_{\phi}\) or \(\eta_{\rm F}\) in our case. The derivative \(\tilde{\partial}_{t}\) acts on the \(k\) dependence of the cutoff function only (for an explicit representation of \(\tilde{\partial}_{t}\), see [4]). Some relations among the threshold functions are given by \[\tilde{l}_{1,1,1}^{({\rm FBB}),d}(w_{\rm F},w_{\rm B},w_{\rm B};\eta_{\psi},\eta_{\rm B}) \equiv \tilde{l}_{1,2}^{({\rm FB}),d}(w_{\rm F},w_{\rm B};\eta_{\psi},\eta_{\rm B}),\] \[\tilde{l}_{1,2}^{({\rm FB}),d}(w_{\rm F}=0,w_{\rm B};\eta_{\psi},\eta_{\rm B}) = l_{1,2}^{({\rm FB}),d}(w_{\rm F}=0,w_{\rm B};\eta_{\psi},\eta_{\rm B}).\] (A.5) For our numerical computations, we use the linear cutoff functions proposed in [16] (\(y=q^{2}/k^{2}\)), \[r(y)=\left(\frac{1}{y}-1\right)\theta(1-y),\quad r_{\rm F}(y)=\left(\frac{1}{\sqrt{y}}-1\right)\theta(1-y),\] (A.6) for which all integrals listed above can be performed analytically, yielding in the present context: \[\tilde{m}_{1,1}^{({\rm FB}),d}(w_{\rm F},0;\eta_{\psi},\eta_{F}) = \frac{2}{d-1}\,\frac{1}{1+w_{\rm F}}\left[\frac{1}{2}\left(1+\frac{d}{2}\eta_{\psi}\right)-\frac{\eta_{F}}{d+1}+\frac{(1-\frac{d}{2}\eta_{\psi})}{1+w_{\rm F}}\right],\] (A.7) \[\tilde{l}_{1,2}^{({\rm FB}),d}(w_{\rm F},0;\eta_{\psi},\eta_{F}) = \frac{2}{d}\frac{1}{(1+w_{\rm F})^{2}}\left[\left(1-\frac{2\eta_{F}}{d+2}+\frac{\eta_{\psi}}{d+1}\right)+\frac{2}{1+w_{\rm F}}\left(1-\frac{\eta_{\psi}}{d+1}\right)\right],\] \[\tilde{l}_{1,1,1}^{({\rm FBB}),d}(w_{\rm F},w_{1},w_{2};\eta_{\psi},\eta_{\phi}) = \frac{2}{d}\frac{1}{(1+w_{\rm F})^{2}(1+w_{1})(1+w_{2})}\] \[\times\left[\left(\frac{1}{1+w_{1}}+\frac{1}{1+w_{2}}\right)\left(1-\frac{\eta_{\phi}}{d+2}\right)+\left(\frac{2}{1+w_{\rm F}}-1\right)\left(1-\frac{\eta_{\psi}}{d+1}\right)\right].\] The representations of all other threshold functions for the linear cutoff can be looked up in [13].
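Two consistency checks on these definitions, using nothing beyond the formulas displayed above: with the linear cutoff (A.6) the inverse average propagators reduce to \(P(x)=P_{\rm F}(x)=\max(x,k^{2})\), and the closed forms (A.7) satisfy the first relation in (A.5) at \(w_{\rm B}=0\).

```python
import math, random

d = 4  # spacetime dimension used throughout the text

def P(x, k2):
    # P(x) = x (1 + r(x/k^2)) with r(y) = (1/y - 1) theta(1 - y), cf. (A.6)
    y = x / k2
    return x * (1 + ((1/y - 1) if y < 1 else 0.0))

def PF(x, k2):
    # P_F(x) = x (1 + r_F(x/k^2))^2 with r_F(y) = (1/sqrt(y) - 1) theta(1 - y)
    y = x / k2
    return x * (1 + ((1/math.sqrt(y) - 1) if y < 1 else 0.0))**2

def l12(wF, eta_psi, eta_B):
    # closed form of l~_{1,2}^{(FB),4}(wF, 0; eta_psi, eta_B) from (A.7)
    return (2/d) / (1 + wF)**2 * ((1 - 2*eta_B/(d+2) + eta_psi/(d+1))
                                  + 2/(1 + wF) * (1 - eta_psi/(d+1)))

def l111(wF, w1, w2, eta_psi, eta_B):
    # closed form of l~_{1,1,1}^{(FBB),4} from (A.7)
    return (2/d) / ((1 + wF)**2 * (1 + w1) * (1 + w2)) * (
        (1/(1 + w1) + 1/(1 + w2)) * (1 - eta_B/(d+2))
        + (2/(1 + wF) - 1) * (1 - eta_psi/(d+1)))

for x in (0.3, 1.0, 2.7):                       # both propagators equal max(x, k^2)
    assert abs(P(x, 1.0) - max(x, 1.0)) < 1e-12
    assert abs(PF(x, 1.0) - max(x, 1.0)) < 1e-12

for _ in range(1000):                           # relation (A.5) checked at w_B = 0
    wF, ep, eB = random.uniform(0, 5), random.uniform(-1, 1), random.uniform(-1, 1)
    assert abs(l111(wF, 0.0, 0.0, ep, eB) - l12(wF, ep, eB)) < 1e-12

print("linear-cutoff propagators and threshold-function relations verified")
```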
## Appendix B Nonperturbative running of the gauge coupling

The infrared quantities serving as "physical observables" in the present work, such as the constituent quark mass or the eta boson mass, depend on the way we model the effective gauge coupling in the nonperturbative domain in our truncation. In order to gain more insight into this dependence, we study different gauge coupling \(\beta\) functions proposed in the literature in this appendix. Here we focus on theories within the QCD universality class which are sensitive to the infrared physics of the gauge sector. In Sect. 4, we used a \(\beta\) function with accurate two-loop behavior and an IR fixed point at \(\alpha_{*}=2.5\). We denote this \(\beta\) function, defined in Eq. (35) and serving as our reference, by \(\beta_{\rm Ref}\) in the following. Such \(\beta\) functions with a fixed point of the gauge coupling in the infrared have a long tradition in the literature and have frequently been discussed from a phenomenological viewpoint [24]. Furthermore, some theoretical evidence for the existence of such a fixed point has been collected in certain nonperturbative approximation schemes. However, due to the lack of a unique nonperturbative definition of the gauge coupling and due to an inherent regularization scheme dependence of the \(\beta\) function, a comparison of different theoretical approaches and a connection to phenomenology are difficult to make. Here we take a pragmatic point of view and use the various running couplings as effective ones which are implicitly defined by their use in our approach. Recently, an actual nonperturbative computation of the running coupling has been set up in the framework of truncated Schwinger-Dyson equations in Landau gauge [25], revealing an infrared fixed point; these results also receive some support from lattice calculations [26]. For our purposes, we use the representation given in [27] for the running coupling, \[g_{\rm SDE}^{2}(x) = \frac{4\pi\,\alpha_{*,{\rm SDE}}}{\ln(e+a_{1}x^{a_{2}}+c_{1}x^{c_{2}})},\quad\mbox{where }\alpha_{*,{\rm SDE}}=2.972,\] (B.8) and \(a_{1}=5.292{\rm GeV}^{-2a_{2}}\), \(a_{2}=2.324\), \(c_{1}=0.034{\rm GeV}^{-2c_{2}}\), \(c_{2}=3.169\). This coupling is also normalized to the standard value at the \(Z\) mass, and we identify \(x=k^{2}/({\rm GeV})^{2}\). The \(\beta\) function is given by \(\beta_{\rm SDE}=\partial_{t}g_{\rm SDE}^{2}\). As a second example, we use the running coupling arising from a scheme called "Analytic Perturbation Theory" [28] that has been devised for enforcing analyticity properties of the coupling in the time-like and space-like (Euclidean) region. For our numerical routine, we use the approximate (but two-loop accurate) representation \[g_{\rm APT}^{2}(x) = \frac{(4\pi)^{2}}{b_{0}}\left(\frac{1}{l_{2}(x)}+\frac{1}{1-\exp[l_{2}(x)]}\right),\] (B.9) \[l_{2}(x)=\ln x+\frac{b_{1}}{b_{0}^{2}}\ln\sqrt{\ln^{2}x+4\pi^{2}},\] (B.10) where we identify \(x=k^{2}/(1349{\rm MeV})^{2}\), so that this coupling is also normalized at the \(Z\) mass. In the infrared \(k\to 0\), the coupling tends to the fixed point \(\alpha_{*,{\rm APT}}=\frac{4\pi}{b_{0}}\simeq 1.22\) for \(N_{\rm c}=3\) and \(N_{\rm f}=1\). As a third example, we use a calculation of the running coupling based on a truncated flow equation that also revealed an infrared fixed point \(\alpha_{*,{\rm FE}}\) [29].
The corresponding \\(\\beta_{\\rm FE}\\) function was obtained as an extensive multiple integral which we will not display here. Since this result holds for pure gauge theory, we incorporate one quark flavor in a \"quenched\" approximation by adding the fermionic part of the two-loop \\(\\beta\\) function to the pure gauge result. This leads to an infrared fixed-point value of \\(\\alpha_{*,{\\rm FE}}\\simeq 3.43\\pm 0.01\\), where the theoretical error arises from an incompletely resolved color structure in [29]. We would like to point out that the definition of the running coupling used in [29] agrees with the one of the present work. As a simple example for a running coupling which does not tend to an infrared fixed point, we employ a class of \\(\\beta\\) functions that correspond to anomalous dimensions of the gauge field \\(\\eta_{F}\\) which become constant for \\(k\\to 0\\). This is realized by the choice \\[\\beta_{\\eta_{*}}=-2\\left(b_{0}\\,\\frac{g^{4}}{16\\pi^{2}}+b_{1}\\,\\frac{g^{6}}{( 16\\pi^{2})^{2}}\\right)\\left[1-\\exp\\left(-\\frac{(16\\pi^{2})^{2}}{2b_{1}g^{4}}( -\\eta_{*})\\right)\\right],\\] (B.11) so that in fact \\(\\eta_{F}=\\beta_{\\eta_{*}}/g^{2}\\to\\eta_{*}\\) for \\(k\\to 0\\). For negative \\(\\eta_{*}\\), the running coupling increases \\(\\sim(1/k)^{|\\eta_{*}|}\\) for \\(k\\to 0\\). As explicit examples, we choose \\(\\eta_{*}=-0.1\\) and \\(\\eta_{*}=-0.5\\) for the numerical analysis. The results of the numerical integration of the flow equations are collected in Tab. B. For the various \\(\\beta\\) functions denoted in the first column, we listed the transition scale \\(k_{\\chi{\\rm SB}}\\) into the \\(\\chi{\\rm SB}\\) regime and the generated fermion mass \\(m_{\\rm f}\\) in the next two columns. These results refer to calculations without axial anomaly and instanton-mediated interactions,similarly to Sect. 4. In the last two columns the fermion mass with instanton contribution and the mass of the eta boson are given. Obviously, the quantities \\(k_{\\chi\\mathrm{SB}}\\,\\) and \\(m_{\\mathrm{f}}\\) in the calculation without axial anomaly are roughly correlated. Furthermore, \\(\\beta_{\\mathrm{APT}}\\) and \\(\\beta_{\\eta_{*}=-0.1}\\) lead to small values for \\(k_{\\chi\\mathrm{SB}}\\,\\) and \\(m_{\\mathrm{f}}\\), since both approach larger values of the coupling only very slowly. The fermion and eta boson masses including the axial anomaly are not strictly correlated with the former quantities. The running of the gauge coupling enters these quantities over a wider range of scales, since sizable instanton contributions can already arise while the bound-state fixed point is still present. Nevertheless, our main observation is that the overall qualitative picture of the approach to \\(\\chi\\mathrm{SB}\\), even in the nonperturbative domain, is rather independent of the details of the gauge sector in our truncation. On the one hand, a strong gauge coupling \\(g^{2}>g_{\\mathrm{D}}^{2}\\) is all that is needed to trigger \\(\\chi\\mathrm{SB}\\); on the other hand, fermion decoupling cuts off any strong influence of the running coupling in the deep infrared. Quantitative results, of course, depend strongly on the flow of the coupling between the scale at which \\(g^{2}=g_{\\mathrm{D}}^{2}\\) and the scale of fermion decoupling. This mainly concerns the overall scale, whereas mass ratios like \\(m_{\\eta}/m_{\\mathrm{f}}\\) turn out to be more robust. 
## Appendix C Flow equations with axial anomaly in the broken regime

\begin{table} \begin{tabular}{|l||r|r|r||r|r|} \hline \(\beta\) function & \(m_{\mathrm{f}}\)/MeV & \(m_{\eta}\)/MeV & \(m_{\eta}/m_{\mathrm{f}}\) & \(k_{\chi\mathrm{SB}}\)/MeV & \(\tilde{m}_{\mathrm{f}}\)/MeV \\ \hline \hline \(\beta_{\mathrm{Ref}}\) & 1765 & 4438 & 2.5 & 423 & 371 \\ \hline \(\beta_{\mathrm{SDE}}\) & 1777 & 4226 & 2.4 & 457 & 427 \\ \hline \(\beta_{\mathrm{APT}}\) & 563 & 1990 & 3.5 & 4 & 3 \\ \hline \(\beta_{\mathrm{FE}}\) & 903\(\pm\)2 & 2117\(\pm\)1 & 2.3 & 243\(\pm\)2 & 241\(\pm\)3 \\ \hline \(\beta_{\eta_{*}=-0.1}\) & 513 & 1793 & 3.5 & 11 & 8 \\ \hline \(\beta_{\eta_{*}=-0.5}\) & 1946 & 4534 & 2.3 & 394 & 395 \\ \hline \end{tabular} \end{table} Table 1: Characteristic masses \(m_{\mathrm{f}}\) and \(m_{\eta}\) for various nonperturbative \(\beta\) functions for the strong gauge coupling. The main uncertainty concerns the overall scale, whereas the ratio \(m_{\eta}/m_{\mathrm{f}}\) is relatively robust. We also show the scale of transition to \(\chi\mathrm{SB}\) and the fermion mass \(\tilde{m}_{\mathrm{f}}\) in the absence of instanton effects.

Here we collect the flow equations for the various couplings in the broken regime, including the contributions arising from the axial anomaly. Let us begin with the scalar and fermion anomalous dimensions: \[\eta_{\phi} = 4v_{4}\,\kappa\lambda_{\phi}^{2}\,m_{2,2}^{4}(\tfrac{\nu}{2\sqrt{\kappa}},\tfrac{\nu}{2\sqrt{\kappa}}+2\kappa\lambda_{\phi};\eta_{\phi})\] (C.12) \[+4N_{\rm c}v_{4}\,h^{2}\left[m_{4}^{({\rm F}),4}(\kappa h^{2};\eta_{\psi})+\kappa h^{2}\,m_{2}^{({\rm F}),4}(\kappa h^{2};\eta_{\psi})\right],\] \[\eta_{\psi} = 2C_{2}(N_{\rm c})v_{4}\,g^{2}\Big{[}(3-\xi)\,m_{1,2}^{({\rm FB}),4}(\kappa h^{2},0;\eta_{\psi},\eta_{\rm F})-3(1-\xi)\,\tilde{m}_{1,1}^{({\rm FB}),4}(\kappa h^{2},0;\eta_{\psi},\eta_{\rm F})\Big{]}\] (C.13) \[+v_{4}\,h^{2}\big{[}m_{1,2}^{({\rm FB}),4}(\kappa h^{2},\tfrac{\nu}{2\sqrt{\kappa}}+2\kappa\lambda_{\phi};\eta_{\psi},\eta_{\phi})+m_{1,2}^{({\rm FB}),4}(\kappa h^{2},\tfrac{\nu}{2\sqrt{\kappa}};\eta_{\psi},\eta_{\phi})\big{]}.\] Including the appropriately adjusted fermion-boson translation as outlined in Sect.
5, the flow equations for the minimum of the scalar potential and the scalar self-interaction read: \\[\\partial_{t}\\kappa = -(2+\\eta_{\\phi})\\kappa\\] (C.14) \\[+2v_{4}\\frac{\\lambda_{\\phi}}{\\lambda_{\\phi}+\\frac{\\nu}{4\\kappa^{3/2}}}\\big{[}l_{1}^{4}(\\tfrac{\\nu}{2\\sqrt{\\kappa}};\\eta_{\\phi})+3l_{1}^{4}(\\tfrac{\\nu}{2\\sqrt{\\kappa}}+2\\kappa\\lambda_{\\phi};\\eta_{\\phi})\\big{]}-8N_{\\rm c}v_{4}\\,h^{4}\\,l_{1}^{({\\rm F}),4}(\\kappa h^{2};\\eta_{\\psi})\\] \\[+\\frac{2(\\kappa\\lambda_{\\phi}-\\frac{\\nu}{2\\sqrt{\\kappa}})}{(\\lambda_{\\phi}+\\frac{\\nu}{4\\kappa^{3/2}})h^{2}}\\left(1-\\kappa\\lambda_{\\phi}+\\frac{\\nu}{2\\sqrt{\\kappa}}\\right)\\] \\[\\qquad\\times\\left(1+\\left(1-\\kappa\\lambda_{\\phi}+\\frac{\\nu}{2\\sqrt{\\kappa}}\\right)Q_{\\sigma}\\right)\\big{(}\\beta_{\\lambda_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\lambda_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)},\\] \\[\\partial_{t}\\lambda_{\\phi} = 2\\eta_{\\phi}\\,\\lambda_{\\phi}+2v_{4}\\,\\lambda_{\\phi}^{2}\\big{[}l_{2}^{4}(\\tfrac{\\nu}{2\\sqrt{\\kappa}};\\eta_{\\phi})+9l_{2}^{4}(\\tfrac{\\nu}{2\\sqrt{\\kappa}}+2\\kappa\\lambda_{\\phi};\\eta_{\\phi})\\big{]}-8N_{\\rm c}v_{4}\\,h^{4}\\,l_{2}^{({\\rm F}),4}(\\kappa h^{2};\\eta_{\\psi})\\] (C.15) \\[+\\frac{4\\lambda_{\\phi}}{h^{2}}\\left[1-2\\kappa\\lambda_{\\phi}+\\frac{\\nu}{\\sqrt{\\kappa}}+\\left(1-\\kappa\\lambda_{\\phi}+\\frac{\\nu}{2\\sqrt{\\kappa}}\\right)^{2}Q_{\\sigma}\\right]\\big{(}\\beta_{\\lambda_{\\sigma}}^{g^{4}}\\,g^{4}+\\beta_{\\lambda_{\\sigma}}^{h^{4}}\\,h^{4}\\big{)}\\] \\[+\\frac{16\\pi^{2}\\lambda_{\\phi}}{\\nu h}\\left(\\!\\frac{\\nu}{2\\sqrt{\\kappa}}\\!-\\!\\kappa\\lambda_{\\phi}\\!\\right)d_{0}^{N_{\\rm c}}(f_{\\rm c}/k)\\,C_{\\rm E}(N_{\\rm c})\\left(\\frac{\\alpha(k/f_{\\rm c})}{\\alpha(\\bar{\\mu})}\\right)^{-4/b_{0}}\\frac{1}{f_{\\rm c}}\\left(\\!1\\!+\\!\\frac{(-f_{\\rm c}^{\\prime})}{f_{\\rm c}}\\,\\partial_{t}(\\kappa h^{2})\\!\\right)\\!.\\] The Yukawa coupling flows in the broken regime according to \\[\\partial_{t}h^{2} = (2\\eta_{\\psi}+\\eta_{\\phi})\\,h^{2}-4v_{4}\\,h^{4}\\big{[}l_{1,1}^{({\\rm FB}),4}(\\kappa h^{2},\\tfrac{\\nu}{2\\sqrt{\\kappa}};\\eta_{\\psi},\\eta_{\\phi})-l_{1,1}^{({\\rm FB}),4}(\\kappa h^{2},\\tfrac{\\nu}{2\\sqrt{\\kappa}}+2\\kappa\\lambda_{\\phi};\\eta_{\\psi},\\eta_{\\phi})\\big{]}\\] (C.16) \\[-8(3+\\xi)C_{2}(N_{\\rm c})v_{4}\\,g^{2}h^{2}\\,l_{1,1}^{({\\rm FB}),4}(\\kappa h^{2},0;\\eta_{\\psi},\\eta_{\\rm F}),\\] \\[+2\\left(1-2\\kappa\\lambda_{\\phi}+\\dots\\right)\\cdots\\] whereas \\(\\beta^{g^{4}}_{\\tilde{\\lambda}_{\\sigma}}\\) remains the same.

## References

* [1] C. Wetterich, Phys. Lett. B **301**, 90 (1993); Nucl. Phys. B **352**, 529 (1991); Z. Phys. C **48**, 693 (1990). * [2] H. Gies and C. Wetterich, Phys. Rev. D **65**, 065001 (2002) [arXiv:hep-th/0107221]. * [3] Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122**, 345 (1961); _ibid._**124**, 246 (1961). * [4] J. Berges, N. Tetradis and C. Wetterich, arXiv:hep-ph/0005122. * [5] L. F. Abbott, Nucl. Phys. B **185**, 189 (1981). * [6] M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994); Phys. Rev. D **56**, 7893 (1997) [arXiv:hep-th/9708051]; F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000) [arXiv:hep-th/0009110]. * [7] M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **421**, 429 (1994) [arXiv:hep-th/9312114]; U. Ellwanger, Phys. Lett. B **335**, 364 (1994) [arXiv:hep-th/9402077]. * [8] J. M. Pawlowski, Int. J. Mod. Phys. A **16**, 2105 (2001). * [9] D. F. Litim and J. M.
Pawlowski, JHEP **0209**, 049 (2002) [arXiv:hep-th/0203005]. * [10] J. Jaeckel and C. Wetterich, arXiv:hep-ph/0207094. * [11] D. U. Jungnickel and C. Wetterich, Phys. Rev. D **53**, 5142 (1996) [hep-ph/9505267]. * [12] R. S. Chivukula, M. Golden and E. H. Simmons, Phys. Rev. Lett. **70**, 1587 (1993). * [13] F. Hoefling, C. Nowak and C. Wetterich, arXiv:cond-mat/0203588. * [14] K. I. Aoki, K. i. Morikawa, J. I. Sumi, H. Terao and M. Tomoyose, Prog. Theor. Phys. **97**, 479 (1997) [arXiv:hep-ph/9612459]. * [15] K. I. Aoki, K. Takagi, H. Terao and M. Tomoyose, Prog. Theor. Phys. **103**, 815 (2000) [arXiv:hep-th/0002038]. * [16] D. F. Litim, Phys. Lett. B **486**, 92 (2000) [hep-th/0005245]; Phys. Rev. D **64**, 105007 (2001) [arXiv:hep-th/0103195]. * [17] U. Ellwanger, M. Hirsch and A. Weber, Z. Phys. C **69**, 687 (1996) [arXiv:hep-th/9506019]; Eur. Phys. J. C **1**, 563 (1998) [arXiv:hep-ph/9606468]. * [18] D. F. Litim and J. M. Pawlowski, Phys. Lett. B **435**, 181 (1998) [arXiv:hep-th/9802064]. * [19] G. 't Hooft, Phys. Rev. D **14**, 3432 (1976) [Erratum-ibid. D **18**, 2199 (1978)]; M. A. Shifman, A. I. Vainshtein and V. I. Zakharov, Nucl. Phys. B **163**, 46 (1980); E. V. Shuryak, Nucl. Phys. B **203**, 93 (1982). * [20] For recent reviews, see T. Schafer and E. V. Shuryak, Rev. Mod. Phys. **70**, 323 (1998) [arXiv:hep-ph/9610451]; D. Diakonov, arXiv:hep-ph/0212026. * [21] J. M. Pawlowski, Phys. Rev. D **58**, 045011 (1998) [arXiv:hep-th/9605037]. * [22] K. i. Kondo, M. Tanabashi and K. Yamawaki, Prog. Theor. Phys. **89**, 1249 (1993) [arXiv:hep-ph/9212208]; K. I. Kubota and H. Terao, Prog. Theor. Phys. **102**, 1163 (1999) [arXiv:hep-th/9908062]; M. Reenders, Phys. Rev. D **62**, 025001 (2000) [arXiv:hep-th/9908158]. * [23] C. Wetterich, Phys. Lett. B **209**, 59 (1988); S. Bornholdt and C. Wetterich, Phys. Lett. B **282**, 399 (1992). * [24] E. Eichten _et al._,Phys. Rev. Lett. **34**, 369 (1975) [Erratum-ibid. **36**, 1276 (1975)]; T. Barnes, F. E. Close and S. Monaghan, Nucl. Phys. B **198**, 380 (1982); S. Godfrey and N. Isgur, Phys. Rev. D **32**, 189 (1985); A. C. Mattingly and P. M. Stevenson, Phys. Rev. Lett. **69**, 1320 (1992); [arXiv:hep-ph/9207228]; A. C. Aguilar, A. Mihara and A. A. Natale, arXiv:hep-ph/0208095. * [25] L. von Smekal, R. Alkofer and A. Hauck, Phys. Rev. Lett. **79**, 3591 (1997) [arXiv:hep-ph/9705242]; Annals Phys. **267**, 1 (1998) [Erratum-ibid. **269**, 182 (1998)] [arXiv:hep-ph/9707327]; D. Atkinson and J. C. Bloch, Phys. Rev. D **58**, 094036 (1998) [arXiv:hep-ph/9712459]; D. Zwanziger, arXiv:hep-th/0109224; C. Lerche and L. von Smekal, arXiv:hep-ph/0202194. * [26] F. D. Bonnet, P. O. Bowman, D. B. Leinweber, A. G. Williams and J. M. Zanotti, Phys. Rev. D **64**, 034501 (2001) [arXiv:hep-lat/0101013]; J. R. Bloch, A. Cucchieri, K. Langfeld and T. Mendes, arXiv:hep-lat/0209040. * [27] C. S. Fischer and R. Alkofer, arXiv:hep-ph/0202202. * [28] D. V. Shirkov and I. L. Solovtsov, Phys. Rev. Lett. **79**, 1209 (1997) [arXiv:hep-ph/9704333]; Theor. Math. Phys. **120**, 1220 (1999) [Teor. Mat. Fiz. **120**, 482 (1999)] [arXiv:hep-ph/9909305]. * [29] H. Gies, Phys. Rev. D **66**, 025006 (2002) [arXiv:hep-th/0202207].
CERN-TH/2002-242 HD-THEP-02-33

**Universality of spontaneous chiral symmetry breaking in gauge theories**

Holger Gies\\({}^{a}\\) and Christof Wetterich\\({}^{b}\\)

\\({}^{a}\\) _CERN, Theory Division, CH-1211 Geneva 23, Switzerland_ _E-mail: [email protected]_ \\({}^{b}\\) _Institut für theoretische Physik, Universität Heidelberg,_ _Philosophenweg 16, D-69120 Heidelberg, Germany_ _E-mail: [email protected]_

We investigate one-flavor QCD with an additional chiral scalar field. For a large domain in the space of coupling constants, this model belongs to the same universality class as QCD, and the effects of the scalar become unobservable. This is connected to a "bound-state fixed point" of the renormalization flow for which all memory of the microscopic scalar interactions is lost. The QCD domain includes a microscopic scalar potential with minima at nonzero field. On the other hand, for a scalar mass term \\(m^{2}\\) below a critical value \\(m_{\\rm c}^{2}\\), the universality class is characterized by perturbative spontaneous chiral symmetry breaking which renders the quarks massive. Our renormalization group analysis shows how this universality class is continuously connected with the QCD universality class.
**OBSERVABLE CONSEQUENCES OF CROSSOVER-TYPE DECONFINEMENT PHASE TRANSITION** V.D. Toneev\\({}^{\\dagger}\\) _Bogoliubov Laboratory of Theoretical Physics, Joint Institute for Nuclear Research,_ _Dubna, Russia_ \\(\\dagger\\) _E-mail: [email protected]_

## 1 Introduction

The predicted phase transition from confined hadrons to a deconfined phase of their constituents (_i.e._ the asymptotically free quarks and gluons, the so-called Quark-Gluon Plasma, QGP) is a challenge to the theory of the strong interaction. Over the past two decades, much effort has been devoted both to the theoretical study of the deconfinement phase transition and to the search for its possible manifestation in relativistic heavy-ion collisions, in the properties of neutron stars, and in the evolution of the Universe. The unique opportunity provided by relativistic heavy-ion collisions has made it possible to reach a state with temperature and energy density exceeding the critical values, \\(T_{c}\\sim 170\\ MeV\\) and \\(\\varepsilon_{c}\\sim 1\\ GeV/fm^{3}\\), characteristic of the deconfinement phase transition. A rather long list of various signals of QGP formation in hot and dense nuclear matter is now available and has been probed in experiments with heavy ions. Unfortunately, there is no decisive signal for an unambiguous identification of the deconfined phase and, for a particular reaction at a given bombarding energy, practically every proposed signal can be simulated to some extent by hadronic interactions. In this paper we turn to the study of excitation functions of observables expected to be sensitive to the QCD deconfinement phase transition. Such a manifestation was already considered some time ago [1, 2]: since a phase transition slows down the time evolution of the system due to the _softening_ of the EoS, one can expect a noticeable loss of correlations around some critical incident energy, resulting in definite observable effects.

## 2 Equation of state in mixed phase model

Following the common strategy of the two-phase (2P) bag model [3], one can determine the deconfinement phase transition by means of the Gibbs conditions, matching the EoS of a relativistic gas of hadrons and resonances, whose interactions are simulated by the Van der Waals excluded-volume correction, to that of an ideal gas of quarks and gluons, where the change in vacuum energy in a QGP state is parameterized by the bag constant \\(B\\). Thermodynamics of the hadron gas is described in the grand canonical ensemble. All hadrons with mass \\(m_{j}<1.6\\ GeV\\) are taken into consideration. One should emphasize that, by construction, the phase transition in the 2P model is strictly of first order. To reproduce the variety of phase transitions predicted by lattice QCD calculations, we employ a phenomenological Mixed Phase (MP) model [4, 5]. The underlying assumption of the MP model is that unbound quarks and gluons _may coexist_ with hadrons forming a _homogeneous_ quark/gluon-hadron phase. Since the mean distance between hadrons and quarks/gluons in this mixed phase may be of the same order as that between hadrons, the interaction between all these constituents (unbound quarks/gluons and hadrons) plays an important role and defines the order of the phase transition. Within the MP model [4, 5] the effective Hamiltonian is expressed in the quasiparticle approximation with density-dependent mean-field interactions.
Under quite general requirements of confinement for color charges, the mean-field potential of quarks and gluons is approximated by \\[U_{q}(\\rho)=U_{g}(\\rho)=\\frac{A}{\\rho^{\\gamma}}\\ ;\\quad\\gamma>0 \\tag{1}\\] with _the total density of quarks and gluons_ \\[\\rho=\\rho_{q}+\\rho_{g}+\\sum_{j}\\ \\nu_{j}\\rho_{j}\\,\\] where \\(\\rho_{q}\\) and \\(\\rho_{g}\\) are the densities of unbound quarks and gluons outside of hadrons, while \\(\\rho_{j}\\) is the density of hadrons of type \\(j\\) and \\(\\nu_{j}\\) is the number of valence quarks inside. The presence of the total density \\(\\rho\\) in (1) implies interactions between all components of the mixed phase. The approximation (1) mirrors two important limits of the QCD interaction. For \\(\\rho\\to 0\\), the interaction potential approaches infinity, _i.e._ an infinite energy is necessary to create an isolated quark or gluon, which simulates the confinement of color objects. In the other extreme case of large energy density, corresponding to \\(\\rho\\to\\infty\\), we have \\(U_{q}=U_{g}=0\\), which is consistent with asymptotic freedom. The use of the density-dependent potential (1) for quarks, together with the hadronic potential described by a modified non-linear mean-field model [6], requires certain constraints to be fulfilled, which are related to thermodynamic consistency [4, 5]. For the chosen form of the Hamiltonian these conditions require that \\(U_{g}(\\rho)\\) and \\(U_{q}(\\rho)\\) do not depend on temperature. From these conditions one also obtains a form for the quark-hadron potential [4]. A detailed study of the pure gluonic \\(SU(3)\\) case with a first-order phase transition allows one to fix the values of the parameters as \\(\\gamma=0.92\\) and \\(A^{1/(3\\gamma+1)}=250\\) MeV. These values are then used for the \\(SU(3)\\) system including quarks. As is shown in Fig.1 for the case of quarks of two light flavors at zero baryon density (\\(n_{B}=0\\)), the MP model is consistent with lattice QCD data, yielding a continuous phase transition of the crossover type with a deconfinement temperature \\(T_{dec}=153\\) MeV. For a two-phase approach based on the bag model, a first-order deconfinement phase transition occurs with a sharp jump in the energy density \\(\\varepsilon\\) at a \\(T_{dec}\\) close to the value obtained from lattice QCD. Though at first glance the temperature dependences of the energy density \\(\\varepsilon\\) and pressure \\(p\\) for the different approaches presented in Fig.1 look quite similar, a large difference is revealed when \\(p/\\varepsilon\\) is plotted versus \\(\\varepsilon\\) (cf. Fig.2, left panel). The lattice QCD data differ at low \\(\\varepsilon\\), which is due to difficulties within the Kogut-Susskind scheme [8] in treating the hadronic sector. A particular feature of the MP model is that, for \\(n_{B}=0\\), the _softest point_ of the EoS, defined as a minimum of the function \\(p(\\varepsilon)/\\varepsilon\\)[9], is not very pronounced and is located at comparatively low values of the energy density, \\(\\varepsilon_{SP}\\approx 0.45\\) GeV/fm\\({}^{3}\\), which roughly agrees with the lattice QCD value [7]. This value of \\(\\varepsilon\\) is close to the energy density inside a nucleon, and hence reaching this value indicates that we are dealing with a single _big hadron_ consisting of deconfined matter. In contradistinction, the bag-model EoS exhibits a very pronounced softest point at large energy density, \\(\\varepsilon_{SP}\\approx 1.5\\) GeV/fm\\({}^{3}\\)[9, 10].
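To make the two limits of Eq. (1) concrete, the following minimal numerical sketch evaluates the mean-field potential for the quoted parameter values. The conversion of \\(\\rho\\) from fm\\({}^{-3}\\) to natural units via \\(\\hbar c\\), and the names used, are our own illustrative assumptions, not taken from the text:

```python
import numpy as np

HBARC = 197.327                   # MeV * fm (hbar*c), used for unit conversion
GAMMA = 0.92                      # exponent fixed from the pure gluonic SU(3) case
A = 250.0 ** (3 * GAMMA + 1.0)    # from A^(1/(3*gamma+1)) = 250 MeV; A in MeV^(3*gamma+1)

def u_quark(rho_fm3):
    """Mean-field potential U_q(rho) = A / rho^gamma of Eq. (1);
    rho_fm3 is the total quark+gluon density in fm^-3, U_q is returned in MeV."""
    rho_nat = rho_fm3 * HBARC**3  # density in MeV^3 (natural units)
    return A / rho_nat**GAMMA

for rho in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(f"rho = {rho:7.2f} fm^-3   U_q = {u_quark(rho):12.1f} MeV")
# U_q diverges for rho -> 0 (confinement of isolated color charges)
# and vanishes for rho -> infinity (asymptotic freedom), as stated above.
```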
The MP model can be extended to baryon-rich systems in a parameter-free way [4, 5]. As demonstrated in Fig.2 (right panel), the softest point for baryonic matter is gradually washed out with increasing baryon density and vanishes for \\(n_{B}\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$>$}}0.3\\)\\(n_{0}\\) (\\(n_{0}\\) is the normal nuclear matter density). This behavior differs drastically from that of the two-phase bag-model EoS, where \\(\\varepsilon_{SP}\\) is only weakly dependent on \\(n_{B}\\)[9, 10].

Figure 1: The reduced energy density and pressure (\\(\\varepsilon_{SB}\\) and \\(p_{SB}\\) are the corresponding Stefan-Boltzmann quantities) of the \\(SU(3)\\) system with two light flavors for \\(n_{B}=0\\), calculated within the MP (solid lines) and bag (dashed lines) models. Circles and squares are lattice QCD data obtained within the Wilson [7] and Kogut-Susskind [8] schemes.

Figure 2: The \\((\\varepsilon,p/\\varepsilon)\\)-representation of the EoS for the two-flavor \\(SU(3)\\) system at various baryon densities \\(n_{B}\\). Notation of data points and lines is the same as in Fig.1.

It is of interest to note that the interacting hadron gas model has no softest point at all and, in this respect, its thermodynamic behavior is close to that of the MP model at high energy densities [5]. These differences between the various models of the EoS should manifest themselves in the dynamics discussed below.

## 3 Directed flow of baryons

The EoS described above is applied to a two-fluid (2F) hydrodynamic model [11], which takes into account the finite stopping power of colliding heavy ions. In this dynamical model, the total baryonic current and energy-momentum tensor are written as \\[J^{\\mu} = J_{p}^{\\mu}+J_{t}^{\\mu}\\ \\, \\tag{2}\\] \\[T^{\\mu\\nu} = T_{p}^{\\mu\\nu}+T_{t}^{\\mu\\nu}\\ \\, \\tag{3}\\] where the baryonic current \\(J_{\\alpha}^{\\mu}=n_{\\alpha}u_{\\alpha}^{\\mu}\\) and energy-momentum tensor \\(T_{\\alpha}^{\\mu\\nu}\\) of the fluid \\(\\alpha\\) are initially associated with either target (\\(\\alpha=t\\)) or projectile (\\(\\alpha=p\\)) nucleons. Later on, these fluids contain all hadronic and quark-gluon species, depending on the model used for describing the fluids. The twelve independent quantities (the baryon densities \\(n_{\\alpha}\\), 4-velocities \\(u_{\\alpha}^{\\mu}\\) normalized as \\(u_{\\alpha\\mu}u_{\\alpha}^{\\mu}=1\\), as well as temperatures \\(T\\) and pressures \\(p\\) of the fluids) are obtained by solving the following set of equations of two-fluid hydrodynamics [11]: \\[\\partial_{\\mu}J_{\\alpha}^{\\mu} = 0\\ \\, \\tag{4}\\] \\[\\partial_{\\mu}T_{\\alpha}^{\\mu\\nu} = F_{\\alpha}^{\\nu}\\ \\, \\tag{5}\\] where the coupling term \\[F_{\\alpha}^{\\nu}=n_{p}^{s}n_{t}^{s}\\left\\langle V_{rel}\\int d\\sigma_{NN\\to NX}(s)\\ (p-p_{\\alpha})^{\\nu}\\right\\rangle \\tag{6}\\] characterizes the friction between the counter-streaming fluids. The cross sections \\(d\\sigma_{NN\\to NX}\\) take into account all elastic and inelastic interactions between the constituents of the different fluids at the invariant collision energy \\(s^{1/2}\\), with the local relative velocity \\(V_{rel}=[s(s-4m_{N}^{2})]^{1/2}/2m_{N}^{2}\\). The average in (6) is taken over all particles in the two fluids, which are assumed to be in local equilibrium intrinsically [11]. The set of Eqs. (4) and (5) is closed by the EoS, which is naturally the same for both colliding fluids.
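For orientation, the invariant factor \\(V_{rel}\\) entering the friction term (6) is straightforward to evaluate. The sketch below is our own illustration (the chosen beam energies are merely representative of the SIS, AGS and SPS regimes, not values from the text):

```python
import math

M_N = 0.939  # nucleon mass in GeV

def s_from_elab(e_kin):
    """Mandelstam s for a beam nucleon of kinetic energy e_kin (GeV per nucleon)
    incident on a nucleon at rest: s = 2*m_N^2 + 2*m_N*(e_kin + m_N)."""
    return 2.0 * M_N**2 + 2.0 * M_N * (e_kin + M_N)

def v_rel(s):
    """Invariant relative-velocity (flux) factor of Eq. (6):
    V_rel = sqrt(s*(s - 4*m_N^2)) / (2*m_N^2)."""
    return math.sqrt(s * (s - 4.0 * M_N**2)) / (2.0 * M_N**2)

for e_kin in (1.0, 10.0, 160.0):  # roughly SIS, AGS and SPS beam energies
    s = s_from_elab(e_kin)
    print(f"E_lab = {e_kin:6.1f} GeV   sqrt(s) = {math.sqrt(s):5.2f} GeV   V_rel = {v_rel(s):7.2f}")
```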
Following the original paper [11], it is assumed that a fluid element decouples from the hydrodynamic regime when its baryon density \\(n_{B}\\) and the densities in the eight surrounding cells become smaller than a fixed value \\(n_{f}\\). A value \\(n_{f}=0.8n_{0}\\) is used for this local freeze-out density, which corresponds to an actual density of the freeze-out fluid element of about \\(0.6-0.7\\ n_{0}\\). The directed flow characterizes the deflection of emitted hadrons away from the beam axis within the reaction \\(x-z\\) plane. In particular, one defines the differential directed flow by the mean in-plane component \\(\\left\\langle p_{x}(y)\\right\\rangle\\) of the transverse momentum at a given rapidity \\(y\\). This deflection is believed to be quite sensitive to the _elasticity_ or _softness_ of the EoS and can be quantified in two ways: in terms of the derivative (a slope parameter) at mid-rapidity, \\[F_{y}=\\left.\\frac{d\\ \\left\\langle p_{x}(y)\\right\\rangle}{dy}\\right|_{y=y_{cm}}\\ \\, \\tag{7}\\] which is quite suitable for analyzing the flow excitation function, and by an integral quantity which is less sensitive to possible rapidity fluctuations of the in-plane momentum: \\[\\langle P_{x}\\rangle=\\frac{\\int dp_{x}dp_{y}dy\\ p_{x}\\ \\left(E\\frac{d^{3}N}{dp^{3}}\\right)}{\\int dp_{x}dp_{y}dy\\ \\left(E\\frac{d^{3}N}{dp^{3}}\\right)}\\ \\, \\tag{8}\\] where the integration in the c.m. system runs over the rapidity region \\([0,y_{cm}]\\). Excitation functions in the SIS-AGS-SPS energy range are plotted in Fig.3 for both characteristics [12]. Our 2F hydrodynamic calculations of \\(F_{y}(E_{lab})\\) are in good agreement with experiment over the whole energy range considered. In the left lower panel of Fig.3 our results are compared with transport calculations. ARC and ART are cascade models, while RQMD also takes into account mean-field effects. Though all these models agree with the experimental data at \\(E_{lab}\\approx 10\\) A\\(\\cdot\\)GeV (considered as a reference point), the values of \\(F_{y}\\) at lower energies are clearly underestimated, as is evident from the comparison with the results of the E895 Collaboration [16] (see the open squares in Fig.3). Recently, a good description of the experimental points (including the E895 data) was reported within a relativistic BUU (RBUU) model [14]. The good agreement with experiment was achieved there by a special fine tuning of the mean fields involved in the particle propagation.

Figure 3: Excitation functions of the slope parameter \\(F_{y}\\) (left panel) and the average directed flow (right panel) for baryons from Au + Au collisions within hydrodynamics and different transport simulations. Collected experimental points are taken from [12]. The results of transport calculations for three different codes (left lower panel) are given by the thin solid (RQMD), dashed (ARC) and dot-dashed (ART) lines (cited according to [13]). The solid line (RBUU) is taken from [14]. 2F hydrodynamics with the MP EoS at impact parameter 3 fm is compared with the corresponding results of 1F- [10] (right upper panel) and 3F- (right lower panel) [15] hydrodynamics with the bag-model EoS. 1F calculations both with and without the phase transition (PT) are displayed.

The calculated excitation functions of \\(\\langle P_{x}\\rangle\\) for baryons within different hydrodynamic models are shown in the right panel of Fig.3.
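As a practical aside (our own illustrative sketch, not part of the models compared here), the slope parameter of Eq. (7) is typically extracted from binned \\(\\langle p_{x}(y)\\rangle\\) data by a linear fit around midrapidity:

```python
import numpy as np

def slope_parameter(y, mean_px, window=0.5):
    """Estimate F_y = d<p_x(y)>/dy at midrapidity (Eq. (7)) by a least-squares
    line through the bins with |y - y_cm| < window; y is assumed centered so
    that y_cm = 0.  Returns F_y in units of mean_px per unit rapidity."""
    y, mean_px = np.asarray(y), np.asarray(mean_px)
    mask = np.abs(y) < window
    slope, _ = np.polyfit(y[mask], mean_px[mask], 1)
    return slope

# toy data: an S-shaped in-plane flow curve with statistical noise
y = np.linspace(-1.5, 1.5, 31)
mean_px = 60.0 * np.tanh(y) + np.random.default_rng(2).normal(0, 3, y.size)
print(f"F_y ~ {slope_parameter(y, mean_px):.1f} MeV per unit rapidity")
```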
Conventional 1F hydrodynamics for pure hadronic matter [10] results in a very large directed flow due to the inherent instantaneous stopping of the colliding matter. This instantaneous stopping is unrealistic at high beam energies. If the deconfinement phase transition, based on the bag-model EoS [10], is included in this model, the excitation function of \\(\\langle P_{x}\\rangle\\) exhibits a deep minimum near \\(E_{lab}\\approx 6\\) A\\(\\cdot\\)GeV, which manifests the softest-point effect of the bag-model EoS shown in the right panel of Fig.2. The result of 2F hydrodynamics with the MP EoS differs noticeably from the 1F calculations. After a maximum around 1 A\\(\\cdot\\)GeV, the average directed flow decreases slowly and smoothly. This difference is caused by two factors. First, as follows from Fig.2, the softest point of the MP EoS is washed out for \\(n_{B}\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$>$} }0.4\\)\\(n_{0}\\). The second factor is dynamical: the finite stopping power and direct pion emission change the evolution pattern. The latter point is confirmed by the comparison to three-fluid calculations with the bag EoS [15] plotted in the right lower panel of Fig.3. The third, pionic fluid in this model is assumed to interact only with itself, neglecting the interaction with the baryonic fluids. Therefore, with regard to the baryonic component, this three-fluid hydrodynamics [15, 17] is completely equivalent to our two-fluid model, and the main difference is due to the different EoS. As seen in Fig.3, the minimum of the directed flow excitation function, predicted by one-fluid hydrodynamics with the bag-model EoS, survives in the three-fluid (nonunified) regime, but its value decreases and its position shifts to higher energies. If one applies the unification procedure of [15], which favors the fusion of the two fluids into a single one and thus increases the stopping, three-fluid hydrodynamics practically reproduces the one-fluid result and, in addition, predicts a bump at \\(E_{lab}\\approx 40\\) A\\(\\cdot\\)GeV.

## 4 Strangeness production

Enhanced strangeness production as compared to proton-proton or proton-nucleus collisions is one of the QGP signals proposed long ago. In the hydrodynamic model described above, only baryon charge, not strangeness, is exchanged between the fluids. Thus, to see the experimental consequences of EoS with different orders of the phase transition, we consider an expanding homogeneous blob of compressed and heated QCD matter (a fireball) formed in heavy-ion collisions. The initial state (\\(\\varepsilon_{0}\\) and \\(n_{B}\\)) of this fireball is estimated from results of QGSM transport calculations in the center-of-mass frame inside a cylinder of volume \\(V_{0}\\) with radius \\(R=5\\)\\(fm\\) and Lorentz-contracted length \\(L=2R/\\gamma_{c.m.}\\). Isentropic expansion is treated in an approximate manner, assuming \\(V\\sim V_{0}t\\) until the freeze-out point defined by \\(\\varepsilon_{f}=0.15\\)\\(GeV/fm^{3}\\approx m_{N}n_{0}\\) (for more detail see [18]). One should note that up to this point the grand canonical ensemble has been used, in which complete chemical equilibrium is assumed and strangeness conservation is controlled on average by the strange chemical potential \\(\\mu_{S}\\). In the thermodynamic limit, fluctuations in the number of strange particles are small and coincide with those of the canonical ensemble.
However, this is not the case for finite systems at relatively small \\(T\\), where the strangeness canonical ensemble should be applied, taking into account the associated production of strange particles through exact and local conservation of strangeness. Using the general formalism for canonical strangeness conservation proposed in [19, 20], the partition function of a gas of hadrons with strangeness \\(s_{i}=0,\\pm 1,\\pm 2,\\pm 3\\) and total strangeness \\(S=0\\) can be written as follows: \\[Z_{S}=\\frac{1}{2\\pi}\\int_{-\\pi}^{\\pi}d\\phi\\ \\exp(\\sum_{s=-3}^{3}{\\cal S}_{s}\\ e^{is\\phi}) \\tag{9}\\] where \\({\\cal S}_{s}=V\\sum_{i}Z_{i}\\). Here \\(Z_{i}\\) is the one-particle partition function for species \\(i\\), and the sum is taken over all particles and resonances carrying strangeness \\(s_{i}\\). The number of strange particles can be found by appropriate differentiation of the partition function \\(Z_{S}\\), Eq.(9). It is easy to see that, in the Boltzmann approximation, the canonical result can be obtained from the grand canonical one by replacing the strange fugacity in the following way: \\[\\exp(\\mu_{s}/T)\\rightarrow\\left(\\frac{{\\cal S}_{1}}{\\sqrt{{\\cal S}_{1}{\\cal S}_{-1}}}\\right)^{s}\\ \\frac{I_{s}(x)}{I_{0}(x)}\\, \\tag{10}\\] where the argument of the Bessel functions is \\(x\\equiv 2\\sqrt{{\\cal S}_{1}{\\cal S}_{-1}}\\sim V\\). This recipe was applied to our treatment of particle abundances at the freeze-out point. Generally, the correlation volume in the suppression factor \\(I_{s}(x)/I_{0}(x)\\) of (10) does not coincide with the system volume \\(V\\). In our model, the initial Lorentz-contracted volume \\(V_{0}\\) is taken as the strangeness correlation volume.

Figure 4: Particle ratios for strange hadrons in the full \\(4\\pi\\) angle interval for central \\(Au+Au\\) collisions as a function of bombarding energy. The compilation of available experimental points is taken from [21, 22]. The calculated excitation functions represent four model EoS with the canonical suppression factor \\(I_{s}(x)/I_{0}(x)\\). For the case of the MP model, the grand canonical results (dashed lines) are given as well.

Inspection of Fig.4 shows that the inclusion of the canonical strangeness suppression factor noticeably decreases the strange-particle abundances. However, comparing the canonical and grand canonical results for the MP model, one can see that they do not coincide at high energies, as would be expected. This is explained by the beam-energy dependence of the strangeness correlation volume, in contrast to the usual canonical description [19, 20]. The most striking result following from Fig.4 is that all the models considered in the strangeness canonical ensemble predict practically the same relative abundance of strange hadrons in the whole energy range studied. The measured excitation function for \\(K^{+}/\\pi^{+}\\) is reproduced reasonably well, except perhaps at the SIS energy. In the case of \\(K^{-}/\\pi^{-}\\), the general form of the excitation function also agrees with the experimental one, but the relative abundance is overestimated, which mainly originates from neglecting electric charge (isospin) conservation. A simple estimate shows that taking isospin conservation into account decreases the \\(K^{-}/\\pi^{-}\\) ratio by about 18% and 12% at \\(E_{lab}=10\\) and 150 \\(AGeV\\), respectively, without any essential influence on the \\(K^{+}/\\pi^{+}\\) ratio.
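The canonical suppression in Eq. (10) is easy to explore numerically. The following minimal sketch (our own illustration, using SciPy's modified Bessel functions and toy values for the one-particle sums) shows how the Bessel ratio interpolates between strong suppression at small correlation volume and the grand canonical limit at large volume:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function I_n(x)

def canonical_suppression(S1, Sm1, s=1):
    """Canonical replacement of the strange fugacity, Eq. (10):
    exp(mu_s/T) -> (S1/sqrt(S1*Sm1))^s * I_s(x)/I_0(x),  x = 2*sqrt(S1*Sm1).
    S1, Sm1 are the one-particle partition sums for strangeness +1 / -1."""
    x = 2.0 * np.sqrt(S1 * Sm1)
    return (S1 / np.sqrt(S1 * Sm1))**s * iv(s, x) / iv(0, x)

# the Bessel ratio alone (the suppression proper) for small and large x:
for x in (0.1, 1.0, 10.0, 100.0):
    print(f"x = {x:7.1f}:  I_1/I_0 = {iv(1, x)/iv(0, x):.4f}")
# I_1/I_0 -> x/2 for x -> 0 (strong suppression in small correlation volumes);
# I_1/I_0 -> 1   for x -> infinity (x ~ V, grand canonical limit recovered).
```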
The relative yield of hyperons, in particular \\(\\Lambda/\\pi^{+}\\), seems to be overestimated in the energy range \\(E_{lab}\\mathrel{\\hbox to 0.0pt{\\lower 4.0pt\\hbox{ $\\sim$}}\\raise 1.0pt\\hbox{$<$}} 10\\)\\(AGeV\\), which can result from the simplified dynamical treatment: the Bjorken-like longitudinal expansion can be applied at SPS energies, but at SIS energies the transverse expansion is not negligible. It is worth noting that all calculations have been done with the same shock-like freeze-out condition for every EoS, without any special tuning.

## 5 Conclusions

It has been shown that the directed flow excitation functions \\(F_{y}\\) and \\(\\langle P_{x}\\rangle\\) for baryons are sensitive to the EoS, but this sensitivity is significantly masked by the nonequilibrium dynamics of nuclear collisions. Nevertheless, the results indicate that the widely used two-phase EoS, based on the bag model [9, 10] and giving rise to a first-order phase transition, seems to be inappropriate. The neglect of interactions near the deconfinement temperature results in an unrealistically strong softest-point effect within this two-phase EoS. The smooth experimental excitation function of the directed flow is reasonably reproduced within the MP model. The dynamical trajectories of a fireball state in the \\(T-\\mu_{B}\\) plane are quite different for different EoS [18, 23]. However, with the shock-like freeze-out, the global strangeness production turns out to be completely insensitive to the particular EoS, as illustrated by the calculated excitation functions for \\(K^{+},K^{-}\\) and hyperons. To get agreement with experiment, the canonical suppression factor should be taken into account. The only trace of the dynamics is the beam-energy dependence of the strangeness correlation volume. This effect results in a negative slope of the \\(K^{+}/\\pi^{+}\\) excitation function at high energies, which is not reproduced by the equilibrium statistical model with a canonical treatment of strangeness [20, 21]. Among other signals of the QCD deconfinement phase transition, dilepton and hard photon production is the most promising. Being sensitive to the whole evolution time of the system, these signals may become experimentally observable means to disentangle the crossover phase transition, predicted by the MP model, from the first-order one characteristic of the bag-model EoS. Useful and numerous discussions with B. Friman, Yu.B. Ivanov, E.G. Nikonov, W. Nörenberg and K. Redlich are acknowledged. This work was supported in part by DFG (project 436 RUS 113/558/0) and RFBR (grant 00-02-04012).

## References

* [1] E. Shuryak and O.V. Zhirov, _Phys. Lett._ B **89**, 253 (1979). * [2] L. van Hove, _Z. Phys._ C **21**, 93 (1983). * [3] J. Cleymans, R.V. Gavai, and E. Suhonen, _Phys. Rep._**130**, 217 (1986). * [4] E.G. Nikonov, A.A. Shanenko, and V.D. Toneev, _Heavy Ion Physics_**8** (1998) 89. * [5] V.D. Toneev, E.G. Nikonov, and A.A. Shanenko, in _Nuclear Matter in Different Phases and Transitions_, eds. J.-P. Blaizot, X. Campi, and M. Ploszajczak, Kluwer Academic Publishers (1999), p.309. * [6] J. Zimanyi _et al._, _Nucl. Phys._**A484**, 147 (1988). * [7] K. Redlich and H. Satz, _Phys. Rev._ D **33**, 3747 (1986). * [8] C. Bernard _et al._, _Nucl. Phys. (Proc. Suppl.)_**B47**, 499 (1996); _ibid_ 503. * [9] C.M. Hung and E.V. Shuryak, _Phys. Rev. Lett._**75**, 4063 (1995). * [10] D.H. Rischke _et al._, _Heavy Ion Physics_**9**, 309 (1996); D.H. Rischke, _Nucl. Phys._**A610**, 28c (1996). * [11] I.N.
Mishustin, V.N. Russkikh, and L.M. Satarov, _Nucl. Phys._**A494**, 595 (1989); _Yad. Fiz._**54**, 479 (1991) (translated as _Sov. J. Nucl. Phys._**54**, 260 (1991)). * [12] Yu.B. Ivanov, E.G. Nikonov, W. Nörenberg, A.A. Shanenko and V.D. Toneev, _Heavy Ion Physics_**15** (2002) 117. * [13] N.N. Ajitanand _et al._, _Nucl. Phys._**A638**, 451c (1998). * [14] P.K. Sahu, W. Cassing, U. Mosel, and A. Ohnishi, _Nucl. Phys._**A672**, 376 (2000). * [15] J. Brachmann _et al._, _Phys. Rev._ C **61**, 024909 (2000). * [16] H. Liu for the E895 Collaboration, _Nucl. Phys._**A638**, 451c (1998). * [17] J. Brachmann _et al._, _Nucl. Phys._**A619**, 391 (1997). * [18] B. Friman, E.G. Nikonov, W. Nörenberg, K. Redlich and V.D. Toneev, _Strangeness Production in Nuclear Matter and Expansion Dynamics_ (in preparation). * [19] R. Hagedorn and K. Redlich, _Z. Phys._ A **27**, 541 (1985). * [20] J. Cleymans, K. Redlich, and E. Suhonen, _Z. Phys._ C **51**, 137 (1991). * [21] K. Redlich, _Nucl. Phys._**A698**, 94 (2002). * [22] The NA49 Collaboration, _nucl-ex/0205002_. * [23] V.D. Toneev, J. Cleymans, E.G. Nikonov, K. Redlich, and A.A. Shanenko, _J. Phys._ G **27**, 827 (2001).
An Equation of State (EoS) for hot and dense nuclear matter with a quark-hadron phase transition is constructed within a statistical mixed-phase model assuming the coexistence of unbound quarks with hadrons in a nuclear surrounding. This model predicts a deconfinement phase transition of the crossover type. The so-called "softest point" effect of the EoS is analyzed and confronted with that of other equations of state which exhibit a first-order phase transition (two-phase bag model) or no transition at all (hadron resonance gas). The collective motion of nucleons in high-energy heavy-ion collisions is considered within relativistic two-fluid hydrodynamics for the different EoS. It is demonstrated that the beam-energy dependence of the directed flow is a smooth function over the whole range from SIS to SPS energies and allows one to disentangle different EoS; the statistical mixed-phase model is in good agreement with the experimental data. In contrast, excitation functions for the relative strangeness abundance turn out to be insensitive to the order of the phase transition. **Key-words:** QCD phase transition, heavy-ion collisions, hydrodynamics, directed flow, strange particle production.
A comparison of extremal optimization with flat-histogram dynamics for finding spin-glass ground states Jian-Sheng Wang\\({}^{1}\\) and Yutaka Okabe\\({}^{2}\\) \\({}^{1}\\)Singapore-MIT Alliance and Department of Computational Science, National University of Singapore, Singapore 119260, Republic of Singapore \\({}^{2}\\)Department of Physics, Tokyo Metropolitan University, Hachioji, Tokyo 192-0397, Japan 3 October 2002

## 1 Introduction

Optimization with methods motivated by real physical processes is an active field of research. Simulated annealing [1] and the genetic algorithm [2] are two well-known examples. In particular, a large variety of methods has been proposed to find spin-glass ground states [3, 4, 5, 6, 7, 8, 9, 10]. Recently, Boettcher and Percus [11, 12] introduced 'extremal optimization' (EO), inspired by models of self-organized criticality [13], which gave impressive performance. Most heuristic optimization methods (including simulated annealing and genetic algorithms) are designed to find ground states only; thus they cannot provide correct thermodynamics from a simulation. On the other hand, multi-canonical ensemble simulation [14], \\(1/k\\)-sampling [15], parallel tempering [16], and the recent flat-histogram dynamics [17] are constructed for equilibrium thermodynamics, but can also be used as methods for optimization. A study of optimization by the flat-histogram algorithm on the two-dimensional spin-glass model was carried out in ref. [18]. We note that, unlike simulated annealing and other heuristic methods, these methods do not have any adjustable parameters. It is useful to know the efficiencies of this second class of methods when used as an optimization tool. In this paper, we make a comparative study of extremal optimization and flat-histogram/equal-hit dynamics. We compare four algorithms: EO at \\(\\tau=1\\) with a continuous approximation in the probability of choosing a site, the original EO at the optimal value \\(\\tau=1.15\\), single-spin-flip flat-histogram dynamics, and the equal-hit algorithm with the \\(N\\)-fold way. It is found that EO at the value \\(\\tau=1.15\\) is very good for both two- and three-dimensional Ising spin glasses. The equal-hit algorithm with the \\(N\\)-fold way is also competitive. For large systems, equal-hit appears even slightly better than EO. It would be useful to retain the efficiency of EO while still obtaining equilibrium results. To this end, we introduce a rejection step in EO, thus turning EO into an equilibrium simulation method.

## 2 Single-spin-flip algorithms

In the following, we specialize our discussion to spin models, particularly the spin-glass model [19]. The energy function is defined by \\[E(\\sigma)=-\\sum_{\\langle i,j\\rangle}J_{ij}\\sigma_{i}\\sigma_{j}, \\tag{1}\\] where the spin \\(\\sigma_{i}\\) takes on the value \\(+1\\) or \\(-1\\), with \\(i\\) varying over a hypercubic lattice in \\(d\\) dimensions. The coupling constant \\(J_{ij}\\) for each nearest-neighbor pair \\(\\langle i,j\\rangle\\) takes on a random value of \\(+J\\) or \\(-J\\) with equal probability. We impose the constraint \\(\\sum_{\\langle i,j\\rangle}J_{ij}=0\\). Finding the state \\(\\sigma\\) that minimizes \\(E(\\sigma)\\) for a spin glass is known to be one of the hardest problems [20].
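As a concrete illustration of the model just defined (our own minimal sketch, with an arbitrary lattice size and random seed), the following code builds a two-dimensional \\(\\pm J\\) sample satisfying the zero-sum constraint and evaluates Eq. (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_couplings(L):
    """+-J couplings (J = 1) on the horizontal and vertical bonds of an L x L
    periodic lattice, shuffled so that exactly half are +1, which enforces
    the constraint sum_<ij> J_ij = 0."""
    n_bonds = 2 * L * L
    J = np.ones(n_bonds)
    J[: n_bonds // 2] = -1.0
    rng.shuffle(J)
    return J[: L * L].reshape(L, L), J[L * L :].reshape(L, L)  # Jx, Jy

def energy(sigma, Jx, Jy):
    """E(sigma) = -sum_<ij> J_ij sigma_i sigma_j of Eq. (1), periodic boundaries."""
    e = -np.sum(Jx * sigma * np.roll(sigma, -1, axis=1))  # horizontal bonds
    e -= np.sum(Jy * sigma * np.roll(sigma, -1, axis=0))  # vertical bonds
    return e

L = 8
Jx, Jy = make_couplings(L)
sigma = rng.choice([-1, 1], size=(L, L))   # random initial configuration
print("E per spin:", energy(sigma, Jx, Jy) / L**2)
```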
A single spin flip with rejection is described by a Markov-chain transition matrix of the form \\[W(\\sigma\\rightarrow\\sigma^{\\prime})=\\delta_{N}(\\sigma,\\sigma^{\\prime})\\frac{1}{N}a(\\sigma\\rightarrow\\sigma^{\\prime}),\\qquad\\sigma\\neq\\sigma^{\\prime}, \\tag{2}\\] where \\(\\delta_{N}(\\sigma,\\sigma^{\\prime})=1\\) if \\(\\sigma^{\\prime}\\) is obtained from \\(\\sigma\\) by a single spin flip, and 0 otherwise. The factor \\(1/N\\) represents the random selection of a spin, where \\(N\\) (\\(=L^{d}\\)) is the number of spins. \\(a(\\sigma\\rightarrow\\sigma^{\\prime})\\) is the flip rate. If we choose \\(a(\\sigma\\rightarrow\\sigma^{\\prime})\\) according to the Metropolis rate, \\[a(\\sigma\\rightarrow\\sigma^{\\prime})=\\min\\left(1,\\frac{f\\big{(}E(\\sigma^{\\prime})\\big{)}}{f\\big{(}E(\\sigma)\\big{)}}\\right), \\tag{3}\\] we realize an equilibrium distribution with the probability of a state proportional to \\(f\\big{(}E(\\sigma)\\big{)}\\). Some choices are: \\[f(E)=\\left\\{\\begin{array}{ll}\\exp\\bigl{(}-E/(k_{B}T)\\bigr{)},&\\mbox{canonical ensemble;}\\\\ 1/n(E),&\\mbox{multicanonical ensemble;}\\\\ 1/\\int_{-\\infty}^{E}n(E^{\\prime})dE^{\\prime},&1/k\\mbox{ sampling,}\\end{array}\\right. \\tag{4}\\] where \\(T\\) is the temperature, \\(k_{B}\\) is the Boltzmann constant, and \\(n(E)\\) is the density of states at energy \\(E\\). An arbitrary choice of the flip rate \\(a(\\sigma\\to\\sigma^{\\prime})\\) in general does not yield one important property of equilibrium systems, i.e., the microcanonical property that the probability distribution of the configuration \\(\\sigma\\) is a function of the energy \\(E\\) only. For example, the original broad histogram rate [21], \\[a(\\sigma\\to\\sigma^{\\prime})=\\min\\left(1,\\frac{N_{Z-k}(\\sigma)}{N_{k}(\\sigma)}\\right), \\tag{5}\\] and the random walk rate of Berg [22] do not have the microcanonical property. Here \\(N_{k}(\\sigma)\\) is the number of possible moves of class \\(k\\) in the state \\(\\sigma\\); we associate with each site \\(i\\) a class number from \\(0\\) to \\(Z\\) (\\(=2d\\)) through the scaled energy change \\(k=\\big{(}(E(\\sigma^{\\prime})-E(\\sigma))/J+2Z\\big{)}/4\\), i.e., \\[k=\\frac{1}{2}\\sum_{j}(J_{ij}\\sigma_{i}\\sigma_{j}+1). \\tag{6}\\] \\(N_{k}(\\sigma)\\) is the number of such sites having class \\(k\\). The single-spin-flip version with rejection can be turned into a rejection-free \\(N\\)-fold way [23] simulation, where a class is chosen with probability \\[P_{k}=\\frac{a_{k}N_{k}(\\sigma)}{A(\\sigma)},\\quad A(\\sigma)=\\sum_{k=0}^{Z}a_{k}N_{k}(\\sigma), \\tag{7}\\] where \\(a_{k}\\) is the rate \\(a(\\sigma\\to\\sigma^{\\prime})\\) associated with the energy change indicated by class \\(k\\). A spin in that class is picked at random, and the flip is always accepted. Thermodynamic quantities need to be weighted by a factor \\(1/A(\\sigma)\\). EO [11, 12] is somewhat related to the \\(N\\)-fold way in the sense that a class is chosen with some probability \\(P_{k}\\), and a spin in that class is picked and always flipped. The EO algorithm can be stated as follows: we classify each site by its 'fitness' \\(k\\). There are \\(Z+1\\) possible values of \\(k\\). In general EO, the sites are sorted according to the fitness \\(k\\). Since there are only a small number of possible values in the \\(\\pm J\\) spin-glass model, the sorting is not necessary. We simply make a list of sites in each category.
We pick a class according to the probability \\(P_{k}\\), then choose a spin in that class and flip it with probability one. The corresponding transition matrix is \\[W(\\sigma\\to\\sigma^{\\prime})=\\delta_{N}(\\sigma,\\sigma^{\\prime})P_{k}\\frac{1}{N_{k}},\\qquad\\sigma\\neq\\sigma^{\\prime}. \\tag{8}\\] The original choice of EO is to take \\[P_{k}\\propto\\sum_{N_{0}+N_{1}+\\cdots+N_{k-1}<i\\leq N_{0}+N_{1}+\\cdots+N_{k}}i^{-\\tau}, \\tag{9}\\] with \\(\\tau\\) being a parameter of the algorithm. We define the standard EO to be a continuous approximation to the above sum at \\(\\tau=1\\), with the analytical expression \\[P_{k}=\\frac{1}{\\ln(N+1)}\\ln\\frac{1+\\sum_{j=0}^{k}N_{j}}{1+\\sum_{j=0}^{k-1}N_{j}}. \\tag{10}\\] The number \\(1\\) in the numerator and denominator is introduced somewhat arbitrarily to avoid a divergence when the sum of the \\(N_{k}\\) is zero. An optimized EO is the discrete version, Eq. (9), with the \\(\\tau\\) that gives the best performance; we use \\(\\tau=1.15\\) as recommended in ref. [12]. To realize the power-law distribution, we generate an integer \\(i=\\lfloor\\xi^{1/(1-\\tau)}\\rfloor\\), where \\(\\xi\\) is a uniformly distributed random number between \\(0\\) and \\(1\\), and pick the corresponding site \\(i\\) in the ordering by class. We have used \\(\\lfloor\\cdots\\rfloor\\) for the floor function.

## 3 Comparison of EO with flat-histogram and equal-hit algorithms

The flat-histogram algorithm [17, 24] is a special choice of the flip rate, \\[a^{\\rm FH}(\\sigma\\to\\sigma^{\\prime})=\\min\\left(1,\\frac{\\langle N_{Z-k}(\\sigma^{\\prime})\\rangle_{E^{\\prime}}}{\\langle N_{k}(\\sigma)\\rangle_{E}}\\right), \\tag{11}\\] where the angular brackets with subscript \\(E\\) denote a microcanonical average of the quantity \\(N_{k}(\\sigma)\\) at energy \\(E\\); the starting state \\(\\sigma\\) has energy \\(E\\) and the final state \\(\\sigma^{\\prime}\\) has energy \\(E^{\\prime}\\). This particular choice of the rate gives a flat distribution for the energy histogram, \\(H(E)=n(E)f(E)=\\mbox{const}\\), where \\(n(E)\\) is the density of states. This is one way to realize the multicanonical ensemble. In the \\(N\\)-fold way equal-hit algorithm [24, 25], we perform the usual \\(N\\)-fold-way move (thus rejection-free), which is constructed from the following single-spin-flip rate: \\[a^{\\rm EQ}(\\sigma\\to\\sigma^{\\prime})=\\min\\left(1,\\frac{\\langle A\\rangle_{E}\\langle N_{Z-k}(\\sigma^{\\prime})\\rangle_{E^{\\prime}}}{\\langle A\\rangle_{E^{\\prime}}\\langle N_{k}(\\sigma)\\rangle_{E}}\\right). \\tag{12}\\] We note that \\(\\langle A\\rangle_{E}^{-1}=\\langle 1/A\\rangle_{N}\\), where \\(\\langle\\cdots\\rangle_{N}\\) is the average over the \\(N\\)-fold way samples. In the equal-hit algorithm, it is guaranteed by construction that we change state in every move, and the distribution of visits to different energies is flat. Since the microcanonical averages used in the flip rates are not known before the simulation, we use running averages to replace the exact microcanonical averages. This appears to be a valid approximation and should converge to the correct values for sufficiently long runs. For a truly exact algorithm (in the sense of realizing the microcanonical property), a two-pass simulation is sufficient. The first pass uses a running average; in the second pass, we use a multicanonical rate determined from the first pass.
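Returning to the EO selection rule of Eqs. (6) and (9), the following minimal sketch (our own illustration; the class labels are random toy data) implements the power-law rank trick \\(i=\\lfloor\\xi^{1/(1-\\tau)}\\rfloor\\) and maps the drawn rank back to a site, with sites ordered by class so that small \\(k\\) (few satisfied bonds, i.e. the least 'fit' sites) is favored:

```python
import numpy as np

rng = np.random.default_rng(1)

def eo_pick_site(classes, tau=1.15):
    """tau-EO site selection: draw a rank i with P(i) ~ i^(-tau) via
    i = floor(xi^(1/(1-tau))), then return a random site from the class
    that rank i falls into when sites are ordered by class k = 0..Z."""
    N = len(classes)
    counts = np.bincount(classes, minlength=classes.max() + 1)
    bounds = np.cumsum(counts)             # rank boundaries N_0, N_0+N_1, ...
    while True:                            # redraw the rare overflow i > N
        xi = 1.0 - rng.random()            # uniform in (0, 1]
        i = int(xi ** (1.0 / (1.0 - tau)))
        if 1 <= i <= N:
            break
    k = int(np.searchsorted(bounds, i))    # class containing rank i
    members = np.flatnonzero(classes == k)
    return int(rng.choice(members))

classes = rng.integers(0, 5, size=64)      # toy class labels for Z = 4 (d = 2)
print("site selected for flipping:", eo_pick_site(classes))
```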
Many different criteria are used to measure the effectiveness of an optimization algorithm, such as the fraction of cases in which ground states are found in a set of runs. The first-passage time, i.e., the time in units of Monte Carlo sweeps until a ground state is found, starting from comparable random configurations, should be a good measure of an algorithm's efficiency. We consider the sample average of the first-passage time, although its distribution is also very useful. The computer CPU time is another useful criterion when comparing algorithms of very different types. In the flat-histogram and equal-hit algorithms, we can sample positive as well as negative energies uniformly. In this study, we have restricted ourselves to the negative-energy part, where moves into the \\(E>0\\) region are rejected. We compute the average time (first-passage time) for each lattice size and given algorithm, in units of sweeps (\\(N=L^{d}\\) basic moves), to find a ground state, starting from a random configuration with equal probability of spin up and spin down. For the two-dimensional \\(\\pm J\\) spin glass, we determine the first-passage time \\(t_{g}\\) by comparing the current value of the energy with the exact ground-state energy, obtained from the Spin Glass Server [26]. Thus the results are unbiased. The average first-passage time \\(t_{g}\\) for the two-dimensional Ising spin-glass model is shown in Fig. 1. Over \\(10^{3}\\) realizations of random coupling samples are used for averaging for each algorithm and size.

Figure 1: The average first-passage time \\(t_{g}\\) in units of sweeps to find a ground state for four algorithms: standard EO (triangles), optimal EO (circles), flat-histogram (diamonds), and equal-hit (squares) for the two-dimensional spin-glass model. Over \\(10^{3}\\) realizations of random coupling samples are used for averaging for each algorithm and size.

Since the ground-state energies are usually not known in three dimensions, we consider instead the time to find the lowest energy within a fixed number of sweeps \\(t\\), averaged over the coupling constants with the constraint \\(\\sum J_{ij}=0\\). For any fixed running length \\(t\\), the results obtained are only a lower bound on \\(t_{g}\\). We consider run lengths of \\(10^{4}\\), \\(10^{5}\\), \\(10^{6}\\), etc., until the first-passage time converges for large \\(t\\). This limiting time \\(t_{g}\\) is reported for the three-dimensional Ising spin-glass model in Fig. 2. To compare the efficiency of the four algorithms, the actual CPU times are also an important factor. For our implementation, it turns out that the optimized EO, standard EO, and \\(N\\)-fold way equal-hit all run at about the same speed of 6 microseconds per spin flip on a 700 MHz Pentium, while the single-spin-flip flat-histogram algorithm takes 3 microseconds. There are several important features in this comparison; see Fig. 1. All of the algorithms have a first-passage time that grows rapidly with size. With the exception of the standard EO, they have nearly the same slope of about 6 on a double-logarithmic scale. It is also interesting to compare the first-passage time with the equilibrium tunneling time reported in refs. [18, 24]. EO gives excellent performance for small to moderate system sizes. However, for large sizes, equal-hit is as good as EO, or even better. Flat-histogram is worse by some constant factor. On the other hand, the performance of the standard EO at \\(\\tau=1\\) is rather poor. This shows that the results of EO are rather sensitive to the value of \\(\\tau\\).
Another very interesting aspect of Fig. 1 is that the curves all look linear on the semi-logarithmic scale. This implies that \\(t_{g}\\sim\\exp(cL)\\) for some constant \\(c\\sim 1\\), not a power law in \\(L\\). Thus, all of the algorithms are asymptotically inefficient. It would be very interesting if this numerical observation could be supported by some argument. Similar results for the three-dimensional Ising spin glass are presented in Fig. 2.

Figure 2: The average limiting time \\(t_{g}\\) to find a ground state for the three-dimensional Ising spin-glass model. The meanings of the symbols are the same as those of Figure 1.

In Table 1, we report some typical data for the average first-passage time \\(t_{g}\\), the energy per site, the length of the run, and the number of random samples for the four algorithms for the three-dimensional spin glasses. Since we use the same set of samples with the four algorithms, a lower energy indicates a better performance. The data show that equal-hit is comparable to EO at optimal \\(\\tau\\).

\\begin{table} \\begin{tabular}{|l|l|l|} \\hline \\multicolumn{3}{|l|}{\\(L=6\\), \\(\\text{MCS}=10^{6}\\), sample \\(=1024\\)} \\\\ \\hline & \\(t_{g}\\) & \\(E_{g}\\) \\\\ EO & \\((6.95\\pm 0.65)\\times 10^{4}\\) & \\(-1.7713\\pm 0.0012\\) \\\\ optimal EO & \\((1.36\\pm 0.15)\\times 10^{3}\\) & \\(-1.7715\\pm 0.0006\\) \\\\ Flat-Histogram & \\((1.46\\pm 0.13)\\times 10^{4}\\) & \\(-1.7715\\pm 0.0006\\) \\\\ Equal-Hit & \\((1.92\\pm 0.20)\\times 10^{3}\\) & \\(-1.7716\\pm 0.0006\\) \\\\ \\hline \\multicolumn{3}{|l|}{\\(L=10\\), \\(\\text{MCS}=10^{6}\\), sample \\(=256\\)} \\\\ \\hline & \\(t_{g}\\) & \\(E_{g}\\) \\\\ EO & \\((2.98\\pm 0.17)\\times 10^{5}\\) & \\(-1.7721\\pm 0.0008\\) \\\\ optimal EO & \\((1.04\\pm 0.11)\\times 10^{5}\\) & \\(-1.7809\\pm 0.0008\\) \\\\ Flat-Histogram & \\((1.93\\pm 0.15)\\times 10^{5}\\) & \\(-1.7802\\pm 0.0007\\) \\\\ Equal-Hit & \\((1.27\\pm 0.17)\\times 10^{5}\\) & \\(-1.7815\\pm 0.0010\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: The average time \\(t_{g}\\) and lowest energy \\(E_{g}\\) obtained for the three-dimensional Ising spin glass.

## 4 Turning EO into an equilibrium algorithm

The flat-histogram and equal-hit algorithms can be used for equilibrium simulation. With the help of the counts of potential moves, \\(N_{k}\\), the microcanonical property, a basic requirement for obtaining the equilibrium properties of the simulated model, can be exploited: using the broad histogram equation [27, 28], \\[n(E)\\langle N_{k}(\\sigma)\\rangle_{E}=n(E^{\\prime})\\langle N_{Z-k}(\\sigma^{\\prime})\\rangle_{E^{\\prime}},\\qquad k=\\big{(}(E^{\\prime}-E)/J+2Z\\big{)}/4, \\tag{13}\\] we can obtain the density of states \\(n(E)\\) at energy \\(E\\), and thus the equilibrium thermodynamic quantities, including the free energy. Unfortunately, the microcanonical property, namely that the probability distribution \\(P(\\sigma)\\) of the configurations is a function of the energy \\(E\\) only, is strongly violated in EO. The probabilities of the ground states cluster into groups rather than being uniformly distributed. Numerical tests show that \\(P(\\sigma)\\) is a function of \\(E\\) and \\(N_{k}\\), as well as of additional unknown parameters. To correct this problem, we introduce a rejection step in the EO algorithm, as follows: \\[W(\\sigma\\to\\sigma^{\\prime})=\\delta_{N}(\\sigma,\\sigma^{\\prime})P_{k}\\frac{1}{N_{k}}a(\\sigma\\to\\sigma^{\\prime}). \\tag{14}\\]
The acceptance rate \\(a\\) is determined by imposing detailed balance with an unknown probability distribution \\(f\\big{(}E(\\sigma)\\big{)}\\), \\[f\\big{(}E(\\sigma)\\big{)}W(\\sigma\\to\\sigma^{\\prime})=f\\big{(}E(\\sigma^{\\prime})\\big{)}W(\\sigma^{\\prime}\\to\\sigma). \\tag{15}\\] This gives an equation for the rate \\(a\\): \\[f(E)P_{k}\\frac{1}{N_{k}(\\sigma)}a(\\sigma\\to\\sigma^{\\prime})=f(E^{\\prime})P^{\\prime}_{Z-k}\\frac{1}{N_{Z-k}(\\sigma^{\\prime})}a(\\sigma^{\\prime}\\to\\sigma). \\tag{16}\\] The prime on \\(P^{\\prime}\\) indicates that it is the set of \\(P\\) values calculated from the state \\(\\sigma^{\\prime}\\). A solution to this equation is a Metropolis-type choice: \\[a(\\sigma\\to\\sigma^{\\prime})=\\min\\left(1,\\frac{f(E^{\\prime})P^{\\prime}_{Z-k}/N_{Z-k}(\\sigma^{\\prime})}{f(E)P_{k}/N_{k}(\\sigma)}\\right). \\tag{17}\\] To implement this, we need a two-pass simulation. The first pass determines the function \\(f(E)\\); the procedure is by no means unique. Here, we collect the histogram of the energy \\(H(E)\\) as well as statistics for \\(\\langle N_{k}\\rangle_{E}\\) from a (formally incorrect) simulation with the original EO. Then we determine an approximate density of states with the help of the broad histogram equation, Eq. (13). The function \\(f\\) is computed from \\(f(E)=H(E)/n(E)\\). The EO with rejection is implemented in the second pass. The above procedure should be applicable to any model and any optimization algorithm that has a 'steady state'. If the microcanonical property is only slightly violated, it will give a correct equilibrium algorithm with \\(a\\) nearly equal to 1. Thus, we hope to have a method that is efficient for optimization and yet, at the same time, gives correct equilibrium statistics. Indeed, with the above method the microcanonical property is restored. Due to the rejection step, the dynamics is slightly changed. A consequence is that the histogram in the second pass is shifted towards the high-energy side, and thus the efficiency of the original EO is lost.

## 5 Conclusion

From this study, we show that the equal-hit algorithm is an excellent candidate for ground-state search. At the same time, it also offers the possibility of equilibrium calculations, such as the computation of the ground-state entropy. We also show how optimization algorithms like EO can be turned into equilibrium algorithms by introducing a rejection step. All the algorithms studied here show a rather rapid increase of \\(t_{g}\\) with size; thus it is important and challenging to find algorithms that reduce this growth. Perhaps algorithms based on single spin flips have fundamental limitations.

## Acknowledgements

J.-S. W. thanks the hospitality of Tokyo Metropolitan University during part of his sabbatical leave stay. We also thank N. Kawashima and K. Chen for discussions. We thank M. Iwamatsu for drawing our attention to the EO algorithm.

## References

* [1] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, Science **220**, 671 (1983). * [2] J. Holland, Adaptation in Natural and Artificial Systems (University of Michigan Press, Ann Arbor, 1975). * [3] N. Kawashima and M. Suzuki, J. Phys. A: Math. Gen. **25**, 1055 (1992). * [4] F.-M. Dittes, Phys. Rev. Lett. **76**, 4651 (1996). * [5] K. F. Pal, Physica A **223**, 283 (1996). * [6] A. K. Hartmann, Europhys. Lett. **40**, 429 (1997). * [7] K. Chen, Europhys. Lett. **43**, 635 (1998). * [8] B. A. Berg and W. Janke, Phys. Rev. Lett. **80**, 4771 (1998). * [9] J. Houdayer and O. C. Martin, Phys. Rev. Lett. **83**, 1030 (1999).
* [10] J. Dall and P. Sibani, Comp. Phys. Commun. **141**, 260 (2001). * [11] S. Boettcher and A. G. Percus, Artif. Intellig. **119**, 275 (2000). * [12] S. Boettcher and A. G. Percus, Phys. Rev. Lett. **86**, 5211 (2001). * [13] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. **59**, 381 (1987). * [14] B. A. Berg and T. Neuhaus, Phys. Rev. Lett. **68**, 9 (1992). * [15] B. Hesselbo and R. B. Stinchcombe, Phys. Rev. Lett. **74**, 2151 (1995). * [16] K. Hukushima and Y. Nemoto, J. Phys. Soc. Jpn. **65**, 1604 (1996). * [17] J.-S. Wang, Eur. Phys. J. B **8**, 287 (1999). * [18] Z. F. Zhan, L. W. Lee, and J.-S. Wang, Physica A **285**, 239 (2000). * [19] K. Binder and A. P. Young, Rev. Mod. Phys. **58**, 801 (1986); M. Mezard, G. Parisi, and M. A. Virasoro, Spin Glass Theory and Beyond (World Scientific, Singapore, 1987). * [20] F. Barahona, J. Phys. A **15**, 3241 (1982). * [21] P. M. C. de Oliveira, T. J. P. Penna, H. J. Herrmann, Braz. J. Phys. **26**, 677 (1996). * [22] B. A. Berg, Nature **361**, 708 (1993). * [23] A. B. Bortz, M. H. Kalos, J. L. Lebowitz, J. Comput. Phys. **17**, 10 (1975). * [24] J.-S. Wang and R. H. Swendsen, J. Stat. Phys. **106**, 245 (2002). * [25] R. H. Swendsen, B. Diggs, J.-S. Wang, S.-T. Li, C. Genovese, J. B. Kadane, Int. J. Mod. Phys. C **10**, 1563 (1999). * [26] The http interface of the Spin Glass Server is at [http://www.informatik.uni-koeln.de/ls_juenger/projects/sgs.html](http://www.informatik.uni-koeln.de/ls_juenger/projects/sgs.html). We thank Thomas Lange for generating the samples used in the comparisons. * [27] P. M. C. de Oliveira, Eur. Phys. J. B **6**, 111 (1998); P. M. C. Oliveira, cond-mat/0204332. * [28] B. A. Berg and U. H. E. Hansmann, Euro. Phys. J B **6**, 395 (1998).
We compare the performance of extremal optimization (EO), flat-histogram, and equal-hit algorithms for finding spin-glass ground states. The first-passage times to a ground state are computed. At the optimal parameter value \\(\\tau=1.15\\), EO outperforms the other methods for small system sizes, but the equal-hit algorithm is competitive with EO, particularly for large systems. The flat-histogram and equal-hit algorithms offer the additional advantage that they can be used for equilibrium thermodynamic calculations. We also propose a method to turn EO into a useful algorithm for equilibrium calculations. Keywords: extremal optimization, flat-histogram algorithm, equal-hit algorithm, spin-glass model, ground state.
# Long term persistence in the sea surface temperature fluctuations

Roberto A. Monetti\\({}^{(1)}\\), Shlomo Havlin\\({}^{(1)}\\), and Armin Bunde\\({}^{(2)}\\) \\({}^{(1)}\\) Minerva Center and Department of Physics, Bar-Ilan University, Ramat-Gan 52900, Israel \\({}^{(2)}\\) Institut für Theoretische Physik III, Justus-Liebig-Universität Giessen, Heinrich-Buff-Ring 16, 35392 Giessen, Germany. Present address: Center for Interdisciplinary Plasma Science (CIPS), Max-Planck-Institut für extraterrestrische Physik, Giessenbachstr. 1, 85749 Garching, Germany. 29th October 2018

## 1 Introduction

The oceans cover almost three quarters of the Earth's surface and have the greatest capacity to store heat. Thus, they are able to regulate the temperature on land even at sites located far away from the coastline. This property of the oceans suggests that they may possess a strong temperature persistence, i.e., a strong tendency for the water temperature of a particular day to remain the same the next day. Persistence can be characterized by the auto-correlation function \\(C(s)\\) of temperature variations separated by a time period \\(s\\). Recently, quantitative studies revealed that atmospheric land temperature fluctuations exhibit long-range power law correlations \\(C(s)\\sim s^{-\\gamma}\\) with \\(\\gamma\\approx 0.70\\) [Koscielny-Bunde et al., 1996; Koscielny-Bunde et al., 1998; Pelletier, 1997; Pelletier et al., 1997; Talkner et al., 2000]. In this Letter, we study the persistence in sea surface temperature (SST) records at many sites in the Atlantic and Pacific oceans using the detrended fluctuation analysis (DFA) method [Peng et al., 1994; Kantelhardt et al., 2001]. We find that for all time scales the SST fluctuations exhibit stronger correlations than atmospheric land temperature fluctuations. The long term persistence of the SST is characterized by a correlation exponent \\(\\gamma\\sim 0.4\\) for both oceans. We have analyzed both types of SST data sets that are available, the monthly SST for the period \\(1856-2001\\) and the weekly SST for the period \\(1981-2001\\). For the period \\(1856\\)-\\(1981\\), the monthly SST data sets were obtained by Kaplan et al. [Kaplan et al., 1998], who used optimal estimation in space applying \\(80\\) empirical orthogonal functions to interpolate ship observations of the United Kingdom Meteorological Office database [Parker et al., 1994]. After \\(1981\\), the monthly data correspond to the projection of the National Center for Environmental Prediction (NCEP) optimal interpolation (OI) analysis [Reynolds et al., 1993; Reynolds et al., 1994]. The weekly SST's also correspond to the NCEP OI analysis. The data are freely available from [http://ingrid.ldeo.columbia.edu/SOURCES/](http://ingrid.ldeo.columbia.edu/SOURCES/).

## 2 The Method

We focus on the temperature fluctuations around the periodical seasonal trend. In order to remove the periodical trend, we first determine the mean temperature \\(\\langle T_{a}\\rangle\\) for each month/week by averaging over all years in the time series. Then, we analyze the temperature deviations \\(\\Delta T_{i}=T_{i}-\\langle T_{a}\\rangle\\) from these mean values. The persistence in the temperature fluctuations can be characterized by the auto-correlation function, \\[C(s)\\equiv\\langle\\Delta T_{i}\\Delta T_{i+s}\\rangle=\\frac{1}{N-s}\\sum_{i=1}^{N-s}\\Delta T_{i}\\Delta T_{i+s}, \\tag{1}\\] where \\(N\\) is the length of the record and \\(s\\) is the time lag.
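To make the recipe concrete, the seasonal detrending and the autocorrelation of Eq. (1) can be sketched in a few lines of Python. This is an illustrative sketch only: the synthetic record, the monthly period of 12, and all numerical values below are placeholders standing in for an actual SST series.

```python
import numpy as np

def seasonal_anomalies(T, period=12):
    """Subtract the long-term mean of each calendar month (or week)."""
    T = np.asarray(T, dtype=float)
    anom = np.empty_like(T)
    for m in range(period):
        anom[m::period] = T[m::period] - T[m::period].mean()
    return anom

def autocorr(dT, s):
    """C(s) of Eq. (1): (1/(N-s)) * sum_i dT_i * dT_{i+s}."""
    N = len(dT)
    return np.dot(dT[:N - s], dT[s:]) / (N - s)

# Toy record: 146 years of monthly data, an annual cycle plus white noise.
rng = np.random.default_rng(0)
t = np.arange(146 * 12)
record = 15.0 + 8.0 * np.sin(2 * np.pi * t / 12) + rng.normal(0.0, 1.0, t.size)
dT = seasonal_anomalies(record)
print([round(autocorr(dT, s), 3) for s in (1, 6, 12)])
```

For the white-noise toy record the lagged correlations fluctuate around zero; for a real SST anomaly series they would decay slowly, which is precisely what the fluctuation analysis below is designed to quantify despite noise and trends.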
A direct calculation of \\(C(s)\\) is hindered by the level of noise present in the finite temperature series and by possible nonstationarities in the data. To reduce the noise, we study the temperature profile function, \\[Y_{k}=\\sum_{i=1}^{k}\\Delta T_{i}. \\tag{2}\\] We can consider the profile \\(Y_{k}\\) as the position of a random walker on a linear chain after \\(k\\) steps. According to random walk theory, the fluctuations \\(F(s)\\) of the profile in a given time window of length \\(s\\) are related to the correlation function \\(C(s)\\). For the relevant case of long-range power law correlations, \\[C(s)\\sim s^{-\\gamma},\\hskip 28.452756pt0<\\gamma<1, \\tag{3}\\] the fluctuations increase as a power law (Barabasi et al., 1995; Shlesinger et al., 1987), \\[F(s)\\sim s^{\\alpha},\\hskip 28.452756pt\\alpha=1-\\gamma/2. \\tag{4}\\] For uncorrelated data (\\(\\gamma\\geq 1\\)), we have \\(\\alpha=1/2\\). To find how the fluctuations scale with \\(s\\), we divide the profile into non-overlapping intervals of length \\(s\\). We calculate the square fluctuations \\(F_{\\nu}^{2}(s)\\) in each interval \\(\\nu\\) and obtain \\(F(s)\\) by averaging over all intervals, \\(F(s)\\equiv\\langle F_{\\nu}^{2}(s)\\rangle^{1/2}\\). Here, we use two methods that differ in the way fluctuations are measured. In the fluctuation analysis (FA), the square of the fluctuations is defined as \\(F_{\\nu}^{2}(s)=(Y_{(\\nu+1)s}-Y_{\\nu s})^{2}\\), where \\(Y_{\\nu s}\\) and \\(Y_{(\\nu+1)s}\\) are the values of the profile at the beginning and the end of each segment \\(\\nu\\), respectively. In the detrended fluctuation analysis, we determine in each interval the best polynomial fit of the profile and define \\(F_{\\nu}^{2}(s)\\) as the variance between the profile and the best fit in the interval. Different orders \\(n\\) of DFA (DFA1, DFA2, etc.) differ in the order of the polynomial used in the fitting procedure. By construction, FA is sensitive to any kind of trend and thus equivalent to the Hurst and power spectrum analyses. In contrast, DFA\\(n\\) removes a polynomial trend of order \\(n-1\\) in the temperature record and is thus superior to the conventional methods. To characterize the persistence, we have applied the FA and DFA methods to 36 (46) monthly SST records and 64 (35) weekly SST records in the Atlantic (Pacific) ocean.

## 3 Results and Discussion

Figure 1(a-c) show three typical plots of the monthly temperature profile function \\(Y_{t}\\) for a land station (Prague), a site in the Atlantic ocean, and a site in the Pacific ocean, respectively. Parabolic-like profile functions which are concave (convex) may indicate the presence of a positive (negative) linear trend (see Eq. 2). However, Fig. 1(d) illustrates that purely correlated data may also lead to parabolic-like profile functions. Trends and correlations can be distinguished and characterized by comparing the FA and DFA results [Kantelhardt et al., 2001; Govindan et al., 2001]. Figure 2 shows log-log plots of the FA and DFA curves for the profiles shown in Fig. 1. Figure 2(a) shows that at large times the Prague temperature fluctuations display a power law behavior. The fluctuation exponent obtained from the FA (0.81) is greater than the value given by DFA1-5 (0.65). This difference is probably due to the effect of the well-known urban warming of Prague.
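The FA/DFA machinery used throughout these figures is compact enough to sketch directly. The following Python fragment computes \\(F(s)\\) as defined in the Method section and estimates the fluctuation exponent \\(\\alpha\\) as the slope of \\(\\log F(s)\\) versus \\(\\log s\\); the window sizes and the white-noise test series are arbitrary choices, and such uncorrelated input should give \\(\\alpha\\approx 1/2\\), as stated above.

```python
import numpy as np

def fluctuation_function(dT, window_sizes, order=1):
    """F(s) for DFA of the given order; order=None reproduces plain FA."""
    Y = np.cumsum(np.asarray(dT, dtype=float))    # profile, Eq. (2)
    F = []
    for s in window_sizes:
        s = int(s)
        sq = []
        for v in range(len(Y) // s):              # non-overlapping intervals
            seg = Y[v * s:(v + 1) * s]
            if order is None:                     # FA: endpoint difference
                sq.append((seg[-1] - seg[0]) ** 2)
            else:                                 # DFAn: detrend by polynomial
                x = np.arange(s)
                fit = np.polyval(np.polyfit(x, seg, order), x)
                sq.append(np.mean((seg - fit) ** 2))
        F.append(np.sqrt(np.mean(sq)))
    return np.array(F)

rng = np.random.default_rng(1)
dT = rng.normal(size=4096)                        # uncorrelated test data
sizes = np.array([8, 16, 32, 64, 128, 256])
F = fluctuation_function(dT, sizes, order=1)
alpha = np.polyfit(np.log(sizes), np.log(F), 1)[0]
print(round(alpha, 2))                            # close to 0.5
```

Setting order=None gives the plain FA estimate, so trends can be diagnosed exactly as in the text, by comparing the FA slope with the DFA1-DFA5 slopes.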
The fluctuation exponent \\(\\alpha\\approx 0.65\\) is consistent with the earlier finding, where the whole Prague record (218 years) was analyzed [Koscielny-Bunde et al., 1996; Govindan et al., 2001]. Figure 2(a) shows that the FA (and the similar Hurst and power spectrum methods) may lead to spurious results because of the presence of trends, yielding a large overestimation of long-range correlations. Figure 2(d) shows the FA and DFA results for the artificial data used in Fig. 1(d). Although the profile function suggested the presence of a trend, the FA and the DFA show no evidence of any trend (see references [Hu et al., 2001; Vjushin et al., 2001]). Figures 2(b) and 2(c) show the FA and DFA results for two typical sites in the Atlantic and Pacific oceans, respectively. Here, for long time scales, FA and DFA curves are straight lines with roughly the same fluctuation exponent \\(\\alpha\\sim 0.8\\). This shows that (a) trends do not falsify the FA result and therefore may be regarded as much less important than for Prague temperatures, and (b) long-range correlations also occur in SST's. These correlations are stronger than the correlations in the atmospheric land temperatures, since the fluctuation exponent \\(\\alpha\\sim 0.8\\) corresponds to a correlation exponent \\(\\gamma\\sim 0.4\\). As in the case of atmospheric land temperatures [Koscielny-Bunde et al., 1996], the range of this persistence law seems to exceed one decade and is possibly even longer than the range of the SST series considered. In contrast to Prague, there is a pronounced short-time regime which ends roughly at 10 months. This regime can be better revealed by the analysis of the weekly SST series. Figure 3 shows the FA and DFA results for 4 sites in the Atlantic and Pacific oceans. This figure shows that for short times, the SST exhibits a persistence which is considerably stronger than both the SST long term persistence and the atmospheric land temperature persistence. The typical SST short-time fluctuation exponent is \\(\\alpha\\approx 1.2\\). However, in the northern Atlantic (latitudes from \\(30^{o}\\) to \\(50^{o}\\) north) we have found even higher fluctuation exponents. Figure 3(d) shows the results for a typical site in the northern Atlantic, yielding \\(\\alpha\\approx 1.4\\). The fact that \\(\\alpha\\) is above 1 means that the variance of the original temperature fluctuations in a time window \\(s\\) increases as \\(s^{\\alpha-1}\\), i.e., as \\(s^{0.4}\\) in the Northern Atlantic and \\(s^{0.2}\\) in the rest of the oceans for time scales below 10 months. This non-stationary behavior must be contrasted with the atmospheric land temperature fluctuations, where the variance stays constant and the persistence decays with a nearly universal exponent \\(\\gamma\\sim 0.7\\). Non-stationary behavior has also been found in the analysis of marine stratocumulus cloud base height records (Kitova et al., 2002). We would like to suggest the following interpretation for the difference in the short-term persistence between the Northern Atlantic and the rest of the oceans. In the northern Atlantic, the dominant mode of interannual variability in the atmospheric circulation is the North Atlantic Oscillation (NAO) (Hurrell, 1995; Thompson et al., 1998).
This weather phenomenon strongly influences the climate in the eastern part of North America and northern Europe and is usually characterized by the NAO index, which is based on the normalized difference in sea level pressure between Ponta Delgada, Azores (\\(26^{o}\\) W, \\(38^{o}\\) N) and Akureyri, Iceland (\\(18^{o}\\) W, \\(66^{o}\\) N). The NAO index varies from positive values in winters to negative values in other seasons. During the last twenty years, the NAO index has displayed a persistent and exceptionally strong positive phase (Hurrell, 1995). Since the sea level pressure and the SST are coupled variables, it is likely that the observed persistence in the NAO index is also revealed by the greater fluctuation exponent found in SST's in the same period. In order to find how representative the values of the fluctuation exponents are, we have studied the distribution of the short- and long-term exponents for both the Atlantic and the Pacific ocean. For the long-term exponents, we exclude those sites in the tropical Pacific region where the El Niño Southern Oscillation (ENSO) takes place [Tziperman et al., 1994; Cane et al., 1986]. The reason for this is that ENSO is a cyclic phenomenon which warms the east equatorial Pacific ocean every three to six years. This cycle cannot be detrended and strongly affects the DFA results on scales between 2 and 20 years. At small scales below 2y, higher order DFA is able to remove the trend. At larger scales, well above 20y, the oscillations cancel each other and the fluctuations again become dominant. However, for obtaining reliable results on the scaling above 20y, we would need data covering far more than 200y. Such data are not available, and therefore we cannot specify the long-term exponents in the ENSO region. Figure 4 shows the results from our fluctuation analysis for a typical site in the tropical Pacific region, both for the weekly and the monthly data. Below 2y, the exponent is close to 1.2, and is therefore similar to the short-term exponent for the rest of the sites. Above 2y, the influence of the oscillations shows up. First, the exponent crosses over to a larger value, and then, above 3y for DFA1 and above 8y for DFA5, crosses over to a very low value. This effect of oscillations on the DFA analysis was recently described in [Kantelhardt et al., 2001; Hu et al., 2001]. We expect that at much larger scales, the exponent will gradually increase, approaching the value \\(\\alpha\\sim 0.8\\) as for the sites outside the ENSO regime. However, the data sets are too short to observe this effect. Figure 5 summarizes our results for the short- and long-term exponents for both the Atlantic and the Pacific oceans. As said before, sites in the ENSO region are included in the histogram for the short-term exponents but not for the long-term exponents. The histogram shows that the short-term exponents for the Northern Atlantic (\\(\\alpha=1.38\\pm 0.04\\)), where the NAO takes place, are clearly distinct from the short-term exponents of the remaining sites (\\(\\alpha=1.17\\pm 0.08\\)). For the asymptotic long-term exponents (\\(\\alpha=0.8\\pm 0.08\\)) there is no such clear distinction between the Northern Atlantic area and the rest.

## 4 Conclusions

In summary, we have studied the persistence of the sea surface temperature in the Atlantic and Pacific oceans. We found that, in contrast to land stations, there exist two pronounced scaling regimes.
In the short-time regime, which roughly ends at 10 months, the fluctuations of the temperature profile in a given time window \\(s\\) scale as \\(s^{\\alpha}\\), with an exponent \\(\\alpha\\) in the northern Atlantic (\\(\\alpha\\sim 1.4\\)) that differs from the rest of the oceans (\\(\\alpha\\sim 1.2\\)). This behavior is clearly distinct from the temperature fluctuations on land, where \\(\\alpha\\) is close to 0.65 above typically 10 days. The fact that in the short-time regime \\(\\alpha\\) is well above 1 points to an intrinsic non-stationary behavior, where the variance of the original temperature fluctuations in a time window of size \\(s\\) increases with \\(s\\) as \\(s^{\\alpha-1}\\). This non-stationary behavior crosses over to stationary behavior at time scales above 10 months, where the fluctuation exponent reaches the value \\(\\alpha\\sim 0.8\\) for all sites considered in both oceans. This result reveals that pronounced long term correlations govern the SST, with an exponent \\(\\gamma\\sim 0.4\\). The persistence in the SST is due to the capacity of the oceans to store heat [Levitus et al., 2001]. The oceans also contribute to the temperature persistence on land, but in a less direct way, i.e., by coupling to the atmosphere. This may be the reason why the persistence of atmospheric land temperatures is less pronounced. In view of our results, it is interesting that coastline stations (like Melbourne, Sydney, and New York) show the same persistence exponent as inland stations (like Prague and Luling). Finally, we also like to emphasize that the scaling laws we find here may serve as a further non-trivial test-bed for state-of-the-art global climate models (see [Govindan R., 2002]).

**Acknowledgments**: We acknowledge financial support from CONICET (Argentina), the Israel Science Foundation, and the Deutsche Forschungsgemeinschaft.

## References

* [1] Koscielny-Bunde E., Bunde A., Havlin S., Roman H. E., Goldreich Y., Schellnhuber H.-J., Indication of a universal persistence law governing atmospheric variability, _Phys. Rev. Lett., 81,_ 729-732, 1998.
* [2] Koscielny-Bunde E., Bunde A., Havlin S., Goldreich Y., Analysis of daily temperature fluctuations, _Physica A, 231,_ 393-396, 1996.
* [3] Pelletier J. D., Analysis and modeling of the natural variability of climate, _J. Climate, 10,_ 1331-1342, 1997.
* [4] Pelletier J. D. and Turcotte D. L., _J. Hydrology, 203,_ 198-208, 1997.
* [5] Talkner P. and Weber R. O., Power spectrum and detrended fluctuation analysis: Application to daily temperatures, _Phys. Rev. E, 62,_ 150-160, 2000.
* [6] Peng C.-K., Buldyrev S. V., Havlin S., Simons M., Stanley H. E., Goldberger A. L., Mosaic organization of DNA nucleotides, _Phys. Rev. E, 49,_ 1685-1689, 1994.
* [7] Kantelhardt J. W., Koscielny-Bunde E., Rego H. H. A., Havlin S., Bunde A., Detecting long-range correlations with detrended fluctuation analysis, _Physica A, 295,_ 441-454, 2001.
* [8] Kaplan A., Cane M., Kushnir Y., Clement A., Blumenthal M., and Rajagopalan B., Analyses of global sea surface temperature 1856-1991, _J. of Geophys. Res-Oceans, 103,_ 18567-18589, 1998.
* [9] Parker D. E., Jones P. D., Folland C. K., and Bevan A., Interdecadal changes of surface temperature since the late 19th century, _J. of Geophys. Res-Atmos, 99,_ 14373-14399, 1994.
* [10] Reynolds R. and Marsico D., An improved real-time global sea-surface temperature analysis, _J. Climate, 6,_ 114-119, 1993.
* [11] Reynolds R.
and Smith T., Improved global sea-surface temperature analyses using optimum interpolation, _J. Climate, 7,_ 929-948, 1994.
* [12] Barabasi A.-L. and Stanley H. E., _Fractal Concepts in Surface Growth_ (Cambridge University Press, 1995).
* [13] Shlesinger M. F., West B. J., and Klafter J., Lévy dynamics of enhanced diffusion: Application to turbulence, _Phys. Rev. Lett., 58,_ 1100-1103, 1987.
* [14] Govindan R., Vjushin D., Brenner S., Bunde A., Havlin S., and Schellnhuber H.-J., Long-range correlations and trends in global climate models: Comparison with real data, _Physica A, 294,_ 239-248, 2001.
* [15] Hu K., Ivanov P. Ch., Chen Z., Carpena P., and Stanley H. E., Effect of trends on detrended fluctuation analysis, _Phys. Rev. E, 64,_ 011114, 2001.
* [16] Vjushin D., Govindan R., Monetti R., Havlin S., and Bunde A., Scaling analysis of trends using DFA, _Physica A, 302,_ 234-243, 2001.
* [17] Kitova N., Ivanova K., Ausloos M., Ackerman T., Mikhalev M., Time dependent correlations in marine stratocumulus cloud base height records, _Int. Jour. Mod. Phys. C, 13,_ 217-227, 2002.
* [18] Hurrell J. W., Decadal trends in the North Atlantic Oscillation: Regional temperatures and precipitation, _Science, 269,_ 676-679, 1995.
* [19] Thompson D. and Wallace J., The Arctic Oscillation signature in the wintertime geopotential height and temperature fields, _Geophys. Res. Lett., 25,_ 1297-1300, 1998.
* [20] Tziperman E., Stone L., Cane M., and Jarosh H., _Science, 264,_ 72-74, 1994.
* [21] Cane M., Zebiak S., and Dolan S., Experimental forecast of El Niño, _Nature, 321,_ 827-832, 1986.
* [22] Levitus S., Antonov J. I., Wang J. L., Delworth T. L., Dixon K. W., and Broccoli A. J., Anthropogenic warming of Earth's climate system, _Science, 292,_ 267-270, 2001.
* [23] Govindan R., Vjushin D., Brenner S., Bunde A., Havlin S., Schellnhuber H.-J., Global climate models violate scaling of the observed atmospheric variability, _Phys. Rev. Lett., 89,_ 028501, 2002.

Figure 1: Typical temperature profile functions for the last 146 years (monthly data). (a) Prague, (b) Atlantic ocean (22.5W, 42.5S), (c) Pacific ocean (172.5W, 12.5S), (d) artificial correlated data with \\(\\gamma=0.4\\).
Figure 2: Log-log plots of the FA and DFA curves for the data shown in Fig. 1. From top to bottom, curves correspond to FA and DFA1 to DFA5. Lines of slope 0.8 and 0.65 have been drawn to compare the typical SST long term fluctuation exponent with the atmospheric land temperature fluctuation exponent.
Figure 3: Log-log plots of the FA and DFA curves for the last 20 years (weekly data) for typical sites in the Atlantic and Pacific oceans. From top to bottom, curves correspond to FA and DFA1 to DFA5. Lines of slopes 1.2 and 1.4 have been drawn to compare the short-time SST fluctuation exponent obtained in the northern Atlantic with that for the rest of the oceans.
Figure 4: Log-log plots of the FA and DFA curves at 92.5\\({}^{o}\\)W - 2.5\\({}^{o}\\)S in the tropical Pacific region. The arrows indicate the position of the crossovers. (a) Monthly SST's for the last 146 years. A line of slope 0.8 has been drawn to note the influence of the oscillation on the results. (b) Weekly SST's for the last 20 years. A line of slope 1.2, representative of the short-time regime, has been included.
Figure 5: Histograms of the short-time and long-time fluctuation exponents.
We study the temporal correlations in the sea surface temperature (SST) fluctuations around the seasonal mean values in the Atlantic and Pacific oceans. We apply a method that systematically overcomes possible trends in the data. We find that the SST persistence, characterized by the correlation \\(C(s)\\) of temperature fluctuations separated by a time period \\(s\\), displays two different regimes. In the short-time regime, which extends up to roughly 10 months, the temperature fluctuations display a nonstationary behavior for both oceans, while in the asymptotic regime the behavior becomes stationary. The long term correlations decay as \\(C(s)\\sim s^{-\\gamma}\\) with \\(\\gamma\\sim 0.4\\) for both oceans, which is different from \\(\\gamma\\sim 0.7\\) found for atmospheric land temperature.
# Reactive glass and vegetation patterns

N. M. Shnerb\\({}^{1}\\), P. Sarah\\({}^{2}\\), H. Lavee\\({}^{2}\\), and S. Solomon\\({}^{3}\\) \\({}^{1}\\)Department of Physics, Judea and Samaria College, Ariel, Israel 44837 \\({}^{2}\\) Department of Geography, Bar-Ilan University, Ramat-Gan, Israel 52900 \\({}^{3}\\) Racah Institute of Physics, The Hebrew University, Jerusalem, Israel 91904 November 3, 2021 ###### pacs: 87.23.Cc, 89.75.Kd, 45.70.Qj

Vegetation patterns in the arid and the semi-arid climatic zones [1; 2] are an interesting example of spontaneous symmetry breaking in complex systems. Competition of shrubs for a limited supply of water is the relevant process that dictates the spatial organization. The struggle for water induces an indirect interaction among shrubs, as the flora goes extinct if its water supply is insufficient. Competition for a common resource has been considered for many years as one of the basic processes in population dynamics [3; 4]. It may be shown that, if two species compete for a common resource, the one that is able to survive at a lower resource level prevails and displaces the other species' population. Stable coexistence of N species is possible if there exist N different resources and each of the species is a superior competitor for one of the supplies, that is, each has its own _biological niche_. The situation becomes more complicated if the resource admits spatial dynamics. Recent theoretical and experimental work reveals the dynamics of competing populations in water, where light, the limiting resource, is consumed gradually by the upper layers of aquatic phytoplankton [5]. This model may be extended to include spatial dynamics of the fauna, but it does not support time independent patterning. Vegetation patterns are an example of a one species (shrubs) and one resource (water) system, where field studies revealed a wide variety of stable, or almost stable, spontaneous segregation modes. Understanding the underlying mechanism for the generation of such patterns and their observed resilience is considered an important step toward a comprehension of the desertification process, where environmental effects like climate change and grazing destroy the natural balance toward stable aridity. Technically, the water-biomass system has been considered as a spatially extended nonlinear system that, in some parameter range, may yield stripes, spots, labyrinths and other ordered arrangements attributed to a positive feedback mechanism, i.e., to the inhibition of water runoff and evaporation by the flora [6]. However, the typical perennial vegetation patterns in the semi-arid zone are disordered, as one can easily see in Fig. (1). The generic spatial organization of perennial flora varies along the precipitation gradient: from scattered "green spots" in the arid zone through clusters of shrubs in the semi-arid zone, to an almost full coverage of the soil by biomass in the humid/subhumid climate. Analysis of the transverse correlations in the three panels of Fig. (1) shows that the correlation length in the semi-arid zone is larger (by a factor of 2-3) than in the other regions, and seems to indicate weak long-range oscillations in the Mediterranean site, perhaps a precursor of Turing instability. In this letter we present a general and simple model of the water-shrubs reaction that is able to yield all these features.
Our model takes into account the intrinsic "noise", i.e., the amplification of initial fluctuations due to the minimal size needed for the survival of perennial flora. The resulting pattern is disordered but robust; thus it may be considered the reactive equivalent of a glass. Although the concepts of free energy, deep local minima and thermal equilibrium are absent in reactive systems, it still presents an example of a spontaneous breakdown of symmetry toward a disordered, long-living meta-stable state. To present the model, let us begin with its zero dimensional ("flower pot") deterministic and continuous dynamics. With water supplied to the system at some rate \\(R\\) and continuum vegetation growth, the time evolution of the water-shrub system is described by the following nondimensionalized rate equations: \\[\\frac{\\partial B}{\\partial t} = wB-\\mu B \\tag{1}\\] \\[\\frac{\\partial w}{\\partial t} = R-w-\\lambda wB\\] where \\(w\\) stands for the available water density, \\(B\\) is the density of shrub biomass, the term \\(wB\\) represents shrub growth as they consume water, while \\(-\\lambda wB\\) is the corresponding consumption of water by shrubs. \\(\\mu\\) is the "death rate" of the vegetation in the absence of water and the term \\(-w\\) represents water losses by percolation and evaporation. The set of differential equations (1) admits two non-negative fixed points. The trivial one at \\(B_{0}=0\\), \\(w_{0}=R\\), becomes unstable to small perturbations at \\(R=\\mu\\), while above this value a "coexistence" fixed point at \\[w_{1}=\\mu\\quad B_{1}=\\frac{R-\\mu}{\\lambda\\mu}, \\tag{2}\\] becomes a stable node (if \\(R^{2}<4(R-\\mu)\\mu^{2}\\)) or a stable focus (if \\(R^{2}>4(R-\\mu)\\mu^{2}\\)). Adding lateral water flow to the above model leads to reaction-diffusion equations of the form \\[\\frac{\\partial B}{\\partial t} = wB-\\mu B \\tag{3}\\] \\[\\frac{\\partial w}{\\partial t} = D\\nabla^{2}w+{\\bf v}\\cdot\\nabla w+R-w-\\lambda wB,\\] where \\({\\bf v}\\) points down the hillslope. Simple linear analysis implies that in the absence of cross-diffusion effects (like those considered recently by [7]), no Turing-like instability exists in this system; the steady state is a uniform covering of the whole plane by the same amount of flora, corresponding to the stable fixed point, and fluctuations of wavenumber \\(k\\) decay like \\(e^{-k^{2}t}\\). In the desert area considered here there are two seasons, a dry summer and a humid winter. Eqs. (2,3) represent the winter, with "smeared" rain events. While annual flora wilt in the summer, perennial shrubs have to survive, so they must reach some _threshold size_ before the dry season. If the winter is not long enough to allow for a full development of the plant to its stable fixed point, the survival of a shrub depends on its size at the end of the rainy season, which, in turn, depends on small fluctuations in its initial size and the consumption of water by its neighbors. In the next winter, the existence of a shrub causes a depletion of the available soil moisture in its immediate neighborhood (roughly speaking, in an area of typical linear size \\(\\sqrt{1/\\mu}\\)), and the chance for another shrub that pops out in the depletion region to reach the threshold is lowered. The whole area is then segregated into a mosaic of water accepting and water contributing patches. This is a non-Turing mechanism that has nothing to do with the effect of shrubs on the overland flow.
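Before turning to the spatial problem, a forward-Euler integration of the "flower pot" equations (1) confirms the coexistence fixed point of Eq. (2). This is a minimal sketch; the parameter values are those quoted later for the simulations (\\(\\mu=0.2\\), \\(R=0.5\\), \\(\\lambda=1.2\\), giving \\(B_{1}=1.25\\) and \\(w_{1}=0.2\\)), while the time step and the initial seed are arbitrary assumptions.

```python
def flower_pot(B0=0.01, w0=0.0, R=0.5, mu=0.2, lam=1.2, dt=0.01, steps=50000):
    """Forward-Euler integration of the zero dimensional rate equations (1)."""
    B, w = B0, w0
    for _ in range(steps):
        dB = w * B - mu * B
        dw = R - w - lam * w * B
        B, w = B + dt * dB, w + dt * dw
    return B, w

B, w = flower_pot()
# Converges to B1 = (R - mu)/(lam * mu) = 1.25 and w1 = mu = 0.2:
print(round(B, 3), round(w, 3))
```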
While Turing instability is characterized by some typical wavelength that sets the linear size of vegetation and bare soil patches to be equal, our model allows for clusters of arbitrary size, as indicated in Fig. (1). The optimal segregation of the hillslope, the one that gives maximal biomass per unit area, is an ordered array of shrubs, each located at the lower end (or, on a flat plane, in the middle) of its contributing area. A regular or distorted "lattice" of flora is then formed, similar to the structure of atoms in a two dimensional crystal. However, this optimal scenario is rarely accomplished in nature, due to the stochastic character of the growth process itself. For simplicity, let us assume that the seed bank in the soil ensures the development of a perennial shrub if some water exists at the site. As the first shrub pops out in an empty region, the soil moisture in its surroundings (primarily downhill) is depleted, and the next shrub will not grow in this "shadowed" area. Nevertheless, as the next shrub also occurs at random, its position will be uncorrelated with the first, except that it cannot pop out in the shadowed region of the first one, and so on. The process continues until the whole slope is shaded ("jammed"). This stochastic growth process yields a random covering of the slope by shrubs, with a typical distance between nearest neighbors but no long-range structure. The random arrangement is, however, extremely robust; although the slope is covered inefficiently by the shrubs, there is not enough source area for the next shrub. Essential changes, such as the death of a plant and the formation of another one, are discontinuous, and these are very rare unless some intervention comes from the outside, perhaps in the form of grazing or climate changes. In Figure (2), typical results of a numerical simulation of a threshold-noise model on a "flat" (no slope) land are presented. In the simulation, the system freezes rapidly (in 1-2 rainy seasons) to a robust configuration (slower dynamics would increase the efficiency of biomass growth), and then persists, with negligible fluctuations, for up to 500 winters. The average amount of flora is much smaller than \\(B_{1}\\), and the average amount of water is larger than \\(w_{1}\\), i.e., there is an inefficient use of the water due to the stochastic arrangement of the shrubs.

Figure 1: Results of direct field measurements at three different locations along the precipitation gradient. The distribution of perennial shrubs (annual flora not included) is presented for an area of 100 square meters at each site. Each black spot represents a shrub, and the size of a spot is proportional to the canopy area. Shrub distributions on hillslopes were recorded at three sites representing mildly-arid, semi-arid and subhumid climate conditions in Israel. The mildly-arid site (left panel, mean annual rainfall 260mm, hillslope gradient \\(11^{o}\\)) is located at Mishor Adumim, 10km east of Jerusalem. The semi-arid site (middle, mean annual rainfall 330mm, gradient \\(16^{o}\\)) is located at Ma'ale Adumim, 8km east of Jerusalem. The Mediterranean site (right, mean annual rainfall 620mm, hillslope gradient \\(13^{o}\\)) is located at Giv'at Ye'arim, 11km west of Jerusalem. All three sites have hard calcareous bedrock and southeast exposure (azimuth \\(140^{o}-150^{o}\\)).
To guide the eye, soil moisture contours are also plotted (dotted lines, with the water level inside smaller than 0.46); they reveal the depletion zone around each shrub [8]. No empty site in the region maintains enough water to allow a new shrub to develop, and the whole region is "shaded" by the existing flora. Figure (3) presents the results of a simulation with the same growth and diffusion parameters, but with a nonzero downhill slope, and the effect on the moisture depletion zones is evident. Various aspects of this competition scenario are similar to the adsorption of large particles at an interface [9]. In the model of random sequential adsorption, "hard" particles are added sequentially to a D dimensional volume at random positions with the condition that no trial particle can overlap previously inserted ones. The addition process is then repeated until the system reaches its "jamming limit", at which the density saturates. The adsorbed particle density at the jamming limit is lower than that of the close-packed form, and the configuration of adsorbed particles is "frozen" in some disordered pattern. A shrub above threshold, with its excluded volume of depleted moisture, is similar to an adsorbed "disc". The shrubs-water model, although non-local (excess water is transferred downhill) and reversible (a new shrub may remove an existing plant by depleting its water resources), yields a similar jammed, disordered and inefficient covering of the slope. The above considerations about glassy structure elucidate the existence and the robustness of the vegetation patterns in the arid zone, but fail to explain the aggregation of shrubs (larger correlation length) in the semi-arid zone, or the Turing patterns [6; 7]. To explain patchiness one should consider the positive-feedback mechanism, i.e., the inhibition of water dynamics induced by the shrubs themselves. In the absence of shrubs (and other meso topography factors), water flows downhill, with some typical lateral displacement per unit length. In the presence of shrubs the flow in their vicinity is slower than that in bare soil, as a result of higher infiltration rates. In addition, the microclimate under a shrub is characterized by less direct radiation and a smaller evaporation rate [6]. This means that close to the shrub there are favorable soil water conditions and more flora may grow.

Figure 2: Numerical results of forward Euler integration of the reaction-diffusion equations (3) on a 100x100 site grid with periodic boundary conditions. A grey spot is plotted around the location of each shrub, with the size of the spot proportional to its biomass. The dotted lines present soil moisture contours around the shrubs. The simulation parameters are \\(v=0\\) (no slope) and \\(D=10\\), \\(\\mu=0.2\\), \\(R=0.5\\), \\(\\lambda=1.2\\) (this implies \\(B_{1}=1.25\\) and \\(w_{1}=0.2\\)). Initial conditions are no water and a seed of biomass taken from a square distribution between [0,0.01]. The effect of the summer is modelled at the end of each "winter" (21 time cycles) by setting all water to zero, while the flora at a site is dropped to the seed level if the biomass is smaller than a threshold, \\(B_{th}=1.2\\). The average (per site) values of water (about 0.32) and flora (0.57) reflect the inefficient use of water attributed to the glass-like structure.
Figure 3: Same as Fig. (2) but with a downhill drift of water, with asymmetry parameter \\(v=0.5\\). Note the change in the shape of the water contours. Average biomass and water levels are almost the same as in Figure (2).
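A compact sketch of the simulation behind Fig. 2 is given below. The grid size, parameters, seed distribution, threshold, and summer reset follow the caption of Fig. 2, while the explicit time step (dt = 0.02, so one 21-cycle winter corresponds to 1050 Euler steps) and the five-point periodic Laplacian are our own assumptions, since these details are not fully specified.

```python
import numpy as np

def laplacian(f):
    """Five-point stencil with periodic boundary conditions."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

def run(winters=10, n=100, D=10.0, mu=0.2, R=0.5, lam=1.2, B_th=1.2,
        dt=0.02, steps_per_winter=1050, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.uniform(0.0, 0.01, (n, n))           # initial seed bank
    w = np.zeros((n, n))
    for _ in range(winters):
        for _ in range(steps_per_winter):        # rainy season, Eq. (3), v = 0
            dB = w * B - mu * B
            dw = D * laplacian(w) + R - w - lam * w * B
            B, w = B + dt * dB, w + dt * dw
        # Summer: all water is lost; sub-threshold flora drops to seed level.
        w = np.zeros((n, n))
        B = np.where(B < B_th, rng.uniform(0.0, 0.01, (n, n)), B)
    return B, w

B, w = run()
print("shrub fraction:", round(float((B >= 1.2).mean()), 3))
```

The qualitative jamming behavior (rapid freezing into a sparse, disordered arrangement of above-threshold sites) does not depend sensitively on these numerical choices, provided the diffusion step respects the usual stability bound dt < 1/(4D).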
The "repulsive" interaction among shrubs due to the struggle for water is then balanced by an "attraction". Accordingly, the size of a typical cluster changes along the precipitation gradient, from a single shrub at the arid limit to large clusters in the semi-arid regime. The response of the system to external parameters (climate change) and grazing seems to depend on its phase. In particular, it seems that hysteresis loops (the desertification transition) like those described in [7] are a characteristic of the clustered phase. A detailed discussion of these issues will be presented elsewhere. In Fig. (4), the results of the same simulation program with positive feedback are presented. The only new ingredient added to the simulation is a suppression of the asymmetry in the downhill water flow, with the asymmetry term \\({\\bf v}\\) multiplied by \\(\\exp[-5B({\\bf r})/B_{th}]\\) (this is a cross-convection effect). Diffusion and evaporation remain the same as in (3). As the downhill flow becomes smaller, water tends to accumulate in the shrub's neighborhood. The transition from the arid (left) zone, with no clustering and a glassy structure, to more clustered patterns and even some linear order in the semi-arid (right) is evident. In conclusion, the generic spatial patterns due to the struggle for water are disordered frozen patterns, and the threshold-noise process dictates the vegetation spatial organization in the arid zone. The instability that yields this glassy structure is not Turing-like, so the inhomogeneity is not characterized by a typical length scale. Shrub clusters and Turing patterns emerge as a further instability of this glassy structure if the precipitation is large enough, where the positive feedback dominates. It then leads first to clustering of shrubs and then to global order in the form of "tiger bush" patterns.

###### Acknowledgements.

We wish to acknowledge E. Meron, F. J. Weissing and J. Huisman for most helpful comments and discussions.

## References

* (1) D. J. Tongway and J. A. Ludwig, Australian Journal of Ecology **15**, 23 (1990).
* (2) C. Valentin, J. M. d'Herbes and J. Poesen, Catena **37**, 1 (1999).
* (3) G. F. Gause, _The Struggle for Existence_ (Williams and Wilkins, Baltimore, 1934).
* (4) D. Tilman, _Resource Competition and Community Structure_ (Princeton University Press, Princeton, 1982); J. D. Murray, _Mathematical Biology_ (Springer-Verlag, New York, 1993).
* (5) J. Huisman and F. J. Weissing, Ecology **75**, 507 (1994); F. J. Weissing and J. Huisman, Jour. of Theoretical Biology **168**, 323 (1994); J. Huisman, R. R. Jonker, C. Zonneveld and F. J. Weissing, Ecology **80**, 211 (1999). See also S. V. Petrovskii and H. Malchow, Theo. Pop. Bio. **59**, 157 (2001) for a prey-predator model of zooplankton that supports _oscillating_ spatial patterns.
* (6) J. B. Wilson and A. D. Q. Agnew, Adv. Ecol. Res. **23**, 263 (1992); R. Lefever and O. Lejeune, Bull. Math. Biol. **59**, 263 (1997); H. Lavee, A. C. Imeson and P. Sarah, Land Degradation and Development **9**, 407 (1998).
* (7) J. von Hardenberg, E. Meron, M. Shachak and Y. Zarmi, Phys. Rev. Lett. **87**, 198101 (2001).
* (8) Actual field experiments usually find the soil moisture under a shrub to be larger than the moisture a few meters away from it. The reason for this has to do with the positive feedback mechanisms that reduce water losses in this region.
However, up to some limit, this water resource serves the shrub during the summer and is not available for the use of another plant. One may regard this fraction of soil moisture as part of the biomass, while the contours in Figs. (2,3,4) represent the available water density.
* (9) See, e.g., J. Talbot, G. Tarjus, P. R. Van Tassel and P. Viot, Physica **A 165**, 287 (2000), and references therein.

Figure 4: Effect of the positive feedback mechanism. While at low precipitation (\\(R=0.45\\), left panel) the repulsive interaction wins and no clustering occurs, at higher humidity (\\(R=0.47\\), middle panel) clustering is more pronounced, and at \\(R=0.5\\) (right) vegetation stripes may be recognized. All other parameters are the same as in Figure (3).
The formation of vegetation patterns in the arid and the semi-arid climatic zones is studied. A threshold for the biomass of the perennial flora is shown to be a relevant factor, leading to frozen disordered patterns in the arid zone. In this "glassy" state, vegetation appears as singular plant spots separated by irregular distances, and an indirect repulsive interaction among shrubs is induced by the competition for water. At higher precipitation rates, the reduction of hydrological losses in the presence of flora becomes important and yields spatial attraction and clustering of biomass. Turing-like patterns with a characteristic length scale may emerge from the disordered structure due to this positive feedback instability.
# Test of a theoretical equation of state for elemental solids and liquids

Eric D. Chisolm, Scott D. Crockett, and Duane C. Wallace, Theoretical Division, Los Alamos National Laboratory, Los Alamos, NM 87545

## 1 Introduction

Over approximately the last sixty years, numerous models and techniques have been developed for creating equations of state (EOS) for a variety of materials that are valid up to very extreme pressures (tens of Mbar) and temperatures (several eV). In the EOS community at the national laboratories, for instance, we have often used models based on the Mie-Grüneisen EOS together with the Thomas-Fermi or Thomas-Fermi-Dirac model (or one of its modifications) to include the contributions from the electrons (see [1] for examples). The models usually contain enough independent parameters to adjust the EOS until it correctly reproduces the experimentally measured Hugoniot (and perhaps a few other data points), but it is generally an open question how accurate the EOS is away from the Hugoniot. In this paper we argue that for one class of materials, elemental solids and liquids, our understanding of the underlying condensed matter Hamiltonian for the nuclei and electrons has grown to the point that we can construct highly accurate EOS from essentially first principles, and we also propose a means for doing so. We also argue that, since the underlying physics is well understood, an EOS derived this way should have the right functional form, even if we are unsure of the values of some of its parameters; thus, if the resulting EOS is shown to be accurate in one thermodynamic region (say, along the Hugoniot), then we can be confident that it is roughly equally accurate elsewhere. In this formalism, the EOS in the solid phase depends on a decomposition of the Hamiltonian due to Wallace (see Chapter 1 of [2]), extending the work of Born [3] to metals as well as insulators; the resulting free energy contains terms describing the harmonic motion of the nuclei about their lattice sites (phonons), thermal excitation of the electrons from their ground state, anharmonic corrections to the nuclear motion (represented as phonon-phonon interactions), and interactions between the electron excitations and the nuclear motion, represented as electron-phonon interactions. (Please note that this description is exact; all of the physics contained in the true Hamiltonian of the system is included here. Specific EOS models usually neglect the anharmonic and electron-phonon terms, arguing that anharmonicity is small and making reference to some form of the Born-Oppenheimer approximation; we will take a somewhat different route, commenting on approximations below.) A recently developed theory of the dynamics of monatomic liquids (see [4] for a review) uses the same Hamiltonian to derive a liquid free energy which is quite similar to the expression for a solid, with additional terms accounting for the fact that the liquid, as opposed to the solid, traverses many potential valleys and thus sees the boundaries between them. For both phases, the resulting free energies have been compared with experimental data in the low-pressure regime (\\(P\\leq 100\\) kbar), with the following results (Sections 17-19 and 23 of [2]): (a) Molecular dynamics (MD) calculations of the anharmonic contribution to the entropy of several solids match experimental entropy data to the accuracy of the data themselves.
(b) Low-temperature (\\(T\\leq 20\\) K) calculations of the electron-phonon term for several solids lead to predictions that also match experimental entropy to the accuracy of the data. (c) Theoretical arguments show that the electron-phonon contribution is entirely negligible except when the electronic contribution dominates the free energy, such as in metallic solids at low temperatures. (d) For the 27 elemental solids for which accurate data are available from low \\(T\\) (but not too low; see point (c)) to the melting temperature \\(T_{m}\\), the free energy excluding the anharmonic and electron-phonon terms accounts for the experimental thermal energy and entropy to an accuracy of 5% (in fact, an accuracy of 2% for all but about five materials). (e) For the 6 elements in the liquid phase for which accurate data are available at temperatures up to around \\(3\\,T_{m}\\), the effect of neglecting the anharmonic, boundary, and electron-phonon contributions to the energy and entropy is similarly small. This tells us that at low pressures, we can neglect the anharmonic, boundary, and electron-phonon terms in both the solid and liquid free energy (which happen to be the hardest terms to calculate), and the resulting thermal energy and entropy are both simple in form and accurate at the 5% level. It is for this reason, not an appeal to the Born-Oppenheimer or other approximations, that we know we can simplify our EOS and what the results of the simplification will be, at least at low pressures. In this paper we do two things: (1) We describe in more detail this framework for constructing EOS and discuss the theoretical and experimental inputs needed to implement it, and (2) we construct a sample EOS, neglecting anharmonic, boundary, and electron-phonon terms, both to illustrate the method and to discover whether points (d) and (e) above continue to hold in the high-pressure regime. We use Aluminum as our sample because of the availability of extensive electronic structure calculations, up to a compression of three, and highly accurate shock Hugoniot data, which provide a test of our EOS through both phases to pressures of around 5 Mbar. In Subsection 2.1 we develop the general theory of the solid EOS, and in Subsection 2.2 we do the same for the liquid. In Subsection 3.1, we construct our sample EOS for Al, comparing it with other EOS work, and in Subsection 3.2 we compute the Hugoniot predicted by the EOS and compare it with experimental data. The results are encouraging. Finally, we review our work, discuss the advantages and disadvantages of this formalism (and how to address the disadvantages), and suggest directions for future development. ## 2 General theory ### Solid phase The condensed matter Hamiltonian, decomposed as described above, consists of terms describing the motion of the nuclei in a potential generated by the electrons in their ground state, plus additional terms that lead to the thermal excitation of the electrons and describe their interactions with the nuclear motion. With this Hamiltonian, the Helmholtz free energy per atom for a solid at temperature \\(T\\) with volume \\(V\\) per atom takes the form \\[F^{\\rm s}(V,T)=\\Phi^{\\rm s}_{0}(V)+F^{\\rm s}_{\\rm ph}(V,T)+F^{\\rm s}_{\\rm el}(V,T)+F^{\\rm s}_{\\rm anh}(V,T)+F^{\\rm s}_{\\rm ep}(V,T). \\tag{1}\\] Here \\(\\Phi^{\\rm s}_{0}\\) is the static lattice potential (the electronic ground state energy when the nuclei are fixed at their lattice sites); it depends on the particular crystal structure. 
\\(F^{\\rm s}_{\\rm ph}\\) is the contribution from the harmonic motion of the nuclei about their lattice sites, \\(F^{\\rm s}_{\\rm el}\\) represents the thermal excitation of the electrons when the nuclei are fixed at their lattice sites, \\(F^{\\rm s}_{\\rm anh}\\) accounts for the anharmonicity of the nuclear motion (which may be represented as phonon-phonon interactions), and \\(F^{\\rm s}_{\\rm ep}\\) expresses the interactions between the electron excitations and the nuclear motion, represented as electron-phonon interactions. (We emphasize again that this free energy is exact; it includes all of the physics present in the Hamiltonian.) The discussion in the Introduction justifies our approximating the solid free energy as \\[F^{\\rm s}(V,T)=\\Phi^{\\rm s}_{0}(V)+F^{\\rm s}_{\\rm ph}(V,T)+F^{\\rm s}_{\\rm el}( V,T), \\tag{2}\\] so let us now consider the forms of \\(F^{\\rm s}_{\\rm ph}\\) and \\(F^{\\rm s}_{\\rm el}\\) and the parameters on which they depend. The phonon term in the Hamiltonian describes harmonic motion, which leads uniquely to the free energy of lattice dynamics: \\[F^{\\rm s}_{\\rm ph}(V,T)=\\int_{0}^{\\infty}g^{\\rm s}(\\omega)\\left[\\frac{1}{2} \\hbar\\omega+\\ln(1-e^{-\\beta\\hbar\\omega})\\right]d\\omega, \\tag{3}\\] where \\(\\beta=1/kT\\) and \\(g^{\\rm s}(\\omega)\\) is the distribution of phonon frequencies in the Brillouin zone. (Note that \\(g^{\\rm s}(\\omega)\\) is volume dependent.) Sometimes we require not the full Eq. (3) but only its high- and low-temperature limits, for which we need not the full \\(g^{\\rm s}(\\omega)\\) but only three of its moments, expressed in terms of the characteristic temperatures \\(\\Theta_{0}^{\\rm s}\\), \\(\\Theta_{1}^{\\rm s}\\), and \\(\\Theta_{2}^{\\rm s}\\) defined by \\[\\ln k\\Theta_{0}^{\\rm s} = \\langle\\ln\\hbar\\omega\\rangle_{\\rm BZ}\\] \\[k\\Theta_{1}^{\\rm s} = \\frac{4}{3}\\langle\\hbar\\omega\\rangle_{\\rm BZ}\\] \\[k\\Theta_{2}^{\\rm s} = \\left[\\frac{5}{3}\\langle(\\hbar\\omega)^{2}\\rangle_{\\rm BZ}\\right] ^{1/2}, \\tag{4}\\] where \\(\\langle\\cdots\\rangle_{\\rm BZ}\\) indicates an average over all the frequencies in the Brillouin zone. Then the following limits hold: \\[F_{\\rm ph}^{\\rm s}(V,T)\\rightarrow\\frac{9}{8}k\\Theta_{1}^{\\rm s}\\quad{\\rm as} \\quad T\\to 0 \\tag{5}\\] and \\[F_{\\rm ph}^{\\rm s}(V,T)=-3kT\\left[\\ln\\left(\\frac{T}{\\Theta_{0}^{\\rm s}}\\right) -\\frac{1}{40}\\left(\\frac{\\Theta_{2}^{\\rm s}}{T}\\right)^{2}+\\cdots\\right]\\ \\ {\\rm at\\ high}\\ T. \\tag{6}\\] The leading term in Eq. (6) describes purely classical nuclear motion, while the series of terms in powers of \\(T^{-2}\\) are quantum corrections. Keeping only the first quantum correction, the thermodynamic functions derived from Eq. (6) are accurate to 1% at temperatures above \\(\\frac{1}{2}\\Theta_{2}^{\\rm s}\\). The electronic excitation free energy \\(F_{\\rm el}^{\\rm s}\\) can be expressed generally as an integral function of the electronic density of states per atom, \\(n^{\\rm s}(\\epsilon)\\), and the Fermi distribution \\[f(\\epsilon)=\\frac{1}{e^{\\beta(\\epsilon-\\mu)}+1}, \\tag{7}\\] where \\(\\beta\\) is still \\(1/kT\\) and \\(\\mu\\) is the chemical potential. If each atom contributes \\(Z\\) electrons to the valence bands (notice that \\(Z\\) is not necessarily the atomic number), with the lowest valence energy set to zero, then \\(\\mu\\) is a function of \\(T\\) determined by the normalization condition \\[\\int_{0}^{\\infty}n^{\\rm s}(\\epsilon)f(\\epsilon)\\,d\\epsilon=Z. 
\\tag{8}\\] The electronic free energy is then \\[F_{\\rm el}^{\\rm s}(V,T)=\\] \\[\\mu Z-\\int_{0}^{\\epsilon_{F}}\\epsilon\\,n^{\\rm s}(\\epsilon)\\,d \\epsilon-kT\\int_{0}^{\\infty}n^{\\rm s}(\\epsilon)\\ln[1+e^{-\\beta(\\epsilon-\\mu) }]\\,d\\epsilon, \\tag{9}\\]where \\(\\epsilon_{F}\\), the Fermi energy, is the value of \\(\\mu\\) when \\(T=0\\). The second term on the right hand side of Eq. (9) is the subtraction of the electronic ground state energy, which ensures that \\(F_{\\rm el}^{\\rm s}\\to 0\\) as \\(T\\to 0\\). This property makes sense if \\(F_{\\rm el}^{\\rm s}\\) represents purely thermal excitation of the electrons. (It also avoids double counting of the energy, as the electronic ground state energy is already represented as \\(\\Phi_{0}^{\\rm s}\\).) We see from this discussion that to evaluate the terms in Eq. (2) for the solid free energy we require three unknown functions: \\(\\Phi_{0}^{\\rm s}\\), \\(g^{\\rm s}(\\omega)\\) (or \\(\\Theta_{0}^{\\rm s}\\), \\(\\Theta_{1}^{\\rm s}\\), and \\(\\Theta_{2}^{\\rm s}\\) if we are concerned only with the high- and low-\\(T\\) limits), and \\(n^{\\rm s}(\\epsilon)\\) (and the associated quantities \\(Z\\) and \\(\\epsilon_{F}\\)). These can be determined in various ways: compressibility data and diamond anvil cell data can be used to construct \\(\\Phi_{0}^{\\rm s}(V)\\); neutron scattering experiments can determine \\(g^{\\rm s}(\\omega)\\) or its various moments at \\(P=1\\) bar; and for many elements all three of these functions can be computed reliably using electronic structure theory. (Or one could use results from multiple sources in combination, which is often an option with \\(\\Phi_{0}^{\\rm s}\\) and is basically a necessity with \\(g^{\\rm s}(\\omega)\\).) One must keep in mind, however, that the accuracy of one's answers will be limited by the accuracy and range of applicability of these functions, regardless of how they are determined. ### Liquid phase and two-phase region According to the theory of liquid dynamics reviewed in [4], the same Hamiltonian that gave us the solid free energy leads to a similar form for the free energy of a monatomic liquid. In this theory, the region of the many-body potential surface in which the system moves in the liquid phase is dominated by a large number of intersecting nearly-harmonic valleys, called \"random\" valleys because they correspond to particle configurations which retain no remnant crystal symmetry, and which are all macroscopically identical. In particular, the valleys all have the same distribution of normal mode frequencies, and they all have the same depth (which, as in the solid case, is the electronic ground state energy when the nuclei are fixed at the valley minimum). The resulting liquid free energy per atom is \\[F^{l}(V,T) = \\Phi_{0}^{l}(V)+F_{\\rm ph}^{l}(V,T)+F_{\\rm el}^{l}(V,T)+F_{\\rm ab }^{l}(V,T)+ \\tag{10}\\] \\[F_{\\rm ep}^{l}(V,T)-kT\\ln w.\\] All of the terms correspond to their solid counterparts with the following exceptions:(1) \\(\\Phi_{0}^{l}\\), now called the static _structure_ potential, is the depth of a typical valley in which the liquid system moves. (2) The normal mode spectrum appearing in \\(F_{\\rm ph}^{l}\\) is that of a typical liquid potential valley, not the unique solid potential valley. (3) The term \\(F_{\\rm ab}^{l}\\) includes corrections due both to anharmonicity and to the fact that the potential valleys have boundaries, which the liquid (as opposed to the solid) encounters as it transits from valley to valley. 
(4) The extra term \\(-kT\\ln w\\) corresponds to an increase in entropy of \\(k\\ln w\\) per atom; the value \\(\\ln w\\approx 0.8\\) is estimated from entropy of melting data of the elements (again, see [4] for details). In liquid dynamics theory, this term is due to the hypothesis that the number of potential valleys among which the liquid moves is of order \\(w^{N}\\), where \\(N\\) is the number of atoms in the system. We emphasize that the same Hamiltonian gives rise to both Eq. (1) and (10); the differences are that the potential is expanded about different equilibrium configurations in the two cases, and that the region of configuration space over which the liquid moves is obviously far larger than the space available to the solid (hence the \\(-kT\\ln w\\) term). Again making the approximations discussed in the Introduction, our form for the liquid free energy becomes \\[F^{l}(V,T)=\\Phi_{0}^{l}(V)+F_{\\rm ph}^{l}(V,T)+F_{\\rm el}^{l}(V,T)-kT\\ln w, \\tag{11}\\] and the additional term \\(-kT\\ln w\\) is fully determined by setting \\(\\ln w=0.8\\), as mentioned above. The form of the phonon term is dictated by a central hypothesis of liquid dynamics theory: The motion of the liquid consists of oscillations in the macroscopically similar valleys described above together with occasional _transits_ between valleys; the transits are of such short duration that they do not contribute to the thermodynamics to lowest order. Thus we will take \\(F_{\\rm ph}^{l}\\) to have the same form as the solid phonon term, Eq. (3), with a possibly different phonon frequency distribution \\(g^{l}(\\omega)\\). The electronic excitation term for the solid was derived using only the assumption that the electrons are thermally distributed over the available states using Fermi statistics; all of the information about the configuration of the nuclei is contained in the density of states. Hence \\(F_{\\rm el}^{l}\\) also takes the same form as the corresponding solid term, Eq. (9), with a density of states \\(n^{l}(\\epsilon)\\) appropriate for the liquid phase. (What this means is discussed briefly below.) The liquid and solid EOS together determine the melting temperature as a function of pressure \\(T_{m}(P)\\) by the requirement that the solid and liquid Gibbs free energies be equal at the phase boundary. Evaluating the liquid free energy, Eq. (11), requires the liquid counterparts of the solid inputs, namely \\(\\Phi_{0}^{l}\\), \\(g^{l}(\\omega)\\), and \\(n^{l}(\\epsilon)\\); these functions are generally not available experimentally. (It is possible that one might be able to compute \\(\\Phi_{0}^{l}\\) using liquid compressibility data, but we suspect that this will be very difficult.) However, for many materials these functions should be computable using electronic structure theory, proceeding much as one would in the solid case except that the nuclei would be arranged not in a crystal configuration but in a disordered structure characteristic of a "random" valley in the liquid potential surface [4]. To our knowledge very few such calculations have been attempted; the only ones we are aware of are \\(\\Phi_{0}^{l}\\) and \\(g^{l}(\\omega)\\) at a single volume for liquid sodium in [5] (the results are referred to in [4], and a graph of \\(g^{l}(\\omega)\\) using their results appears as Fig. 1 in [6]). Another function that is sometimes available is the melt curve \\(T_{m}(P)\\), but this curve cannot be chosen independently of the others, since the solid and liquid EOS determine it jointly. This can be an advantage, though: if \\(T_{m}(P)\\) is known from experiment, it can be used to compute one of the other needed functions when that function is not otherwise available. In fact, this is how we will determine \\(\\Phi_{0}^{l}\\) in our example EOS, to which we now turn.
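Numerically, locating \\(T_{m}\\) is a one-dimensional root-finding problem at each pressure. The sketch below illustrates the idea at \\(P=0\\) (where \\(G\\approx F\\)) using only the classical limit of Eq. (6) plus a static potential for each phase and the \\(-kT\\ln w\\) term of Eq. (11). The static potentials and characteristic temperatures here are made-up placeholder numbers, not fitted EOS inputs; they are chosen only so that the toy melting point comes out near that of Al.

```python
from math import log

KB = 8.617333e-5               # Boltzmann constant, eV/K
LN_W = 0.8                     # ln w ~ 0.8, as estimated in the text

def F_solid(T, phi0=0.0, theta0=296.0):
    """Classical limit of Eq. (6) plus the static lattice potential (eV)."""
    return phi0 - 3.0 * KB * T * log(T / theta0)

def F_liquid(T, phi0=0.0873, theta0=269.1):
    """Same form, as in Eq. (11), with the extra -kT ln w liquid term."""
    return phi0 - 3.0 * KB * T * log(T / theta0) - KB * T * LN_W

def melting_temperature(T_lo=300.0, T_hi=3000.0, tol=1e-3):
    """Bisection on F_solid(T) = F_liquid(T); at P = 0 this approximates
    the Gibbs free energy condition G_s(P, T_m) = G_l(P, T_m)."""
    def dF(T):
        return F_solid(T) - F_liquid(T)
    lo, hi = T_lo, T_hi
    if dF(lo) * dF(hi) > 0.0:
        raise ValueError("bracket does not contain a melting point")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dF(lo) * dF(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

print(round(melting_temperature(), 1))   # ~933 K with these toy inputs
```

The same bisection applies unchanged when the toy free energies are replaced by full Gibbs free energies built from the volume-dependent terms of Eqs. (2) and (11).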
In fact, this is how we will determine \\(\\Phi_{0}^{l}\\) in our example EOS, to which we now turn.

## 3 An example: Aluminum

To illustrate the application of the theory we've described, we will now construct an EOS for Aluminum, which has been the subject of extensive electronic structure calculations and for which a great deal of high-quality experimental data are available. We will then compare the Hugoniot predicted by our EOS with data up to pressures of approximately 5 Mbar; this will tell us whether the approximations we discussed in the Introduction (neglecting anharmonic, boundary, and electron-phonon effects), known to be accurate at low pressures, continue to be reasonable in the high-pressure domain.

### Constructing the EOS

We recall from Subsection 2.1 that the solid EOS requires three functions: \\(\\Phi_{0}^{\\rm s}\\), \\(g^{\\rm s}(\\omega)\\), and \\(n^{\\rm s}(\\epsilon)\\). Since we will be testing the EOS by comparison with Hugoniot data, we will always be in the high-\\(T\\) region (except for one brief low-\\(T\\) excursion; see below), so we use Eq. (6) for \\(F_{\\rm ph}^{\\rm s}\\) instead of Eq. (3); this means that we require only \\(\\Theta_{0}^{\\rm s}\\), \\(\\Theta_{1}^{\\rm s}\\), and \\(\\Theta_{2}^{\\rm s}\\) in place of \\(g^{\\rm s}(\\omega)\\). To determine these functions, we began by consulting the results of density functional theory (DFT) calculations carried out in the local density approximation by Straub et al. [7]. They worked with fcc and bcc Al at atomic volumes from 37 \\(a_{0}^{3}\\) to 160 \\(a_{0}^{3}\\), where \\(a_{0}\\) is the Bohr radius, corresponding to densities from 8.17 g/cm\\({}^{3}\\) to 1.89 g/cm\\({}^{3}\\) (the density of Al at 293 K and 1 bar is 2.700 g/cm\\({}^{3}\\)). Their calculations indicate a \\(T=0\\) transition from fcc to bcc at 51 \\(a_{0}^{3}\\), corresponding to \\(\\rho=5.93\\) g/cm\\({}^{3}\\), but we will neglect this phase change and treat solid Al as an fcc crystal for two reasons: The DFT calculations themselves suggest that the effect of the phase change on the thermodynamic functions will be quite small; and we know from experiment that the solid-liquid transition on the Hugoniot takes place well before reaching the density of concern (see Subsection 3.2), so we are confident of our assumption of fcc along the Hugoniot until melting. However, this assumption may have an effect on the liquid EOS at high densities, which we will comment on below. (Other electronic structure work, discussed on pp. 89-90 of [8], suggests the possibility of an hcp phase between the fcc and bcc phases, but as [8] also mentions, no experimental signature of this phase has been found, so we will proceed under the assumption of a single solid phase.)

Straub et al. computed \\(\\Phi_{0}^{\\rm s}\\) for fcc by fitting their results to a Birch-Murnaghan form, \\[\\Phi_{0}^{\\rm s}(V)=c_{0}+V_{b}\\sum_{n=2}^{5}\\frac{c_{n}}{n!}\\left\\{\\frac{1}{2}\\left[\\left(\\frac{V}{V_{b}}\\right)^{-2/3}-1\\right]\\right\\}^{n}, \\tag{16}\\] with coefficients \\[V_{b}=106.302\\,a_{0}^{3},\\qquad\\quad c_{0}=-287.7832\\ {\\rm mRy},\\] \\[c_{2}=761.2029\\ {\\rm GPa},\\quad\\quad c_{3}=1319.036\\ {\\rm GPa},\\] \\[c_{4}=-13\\,661.06\\ {\\rm GPa},\\,\\,\\,c_{5}=50\\,315.53\\ {\\rm GPa}. \\tag{17}\\] The \\(\\Theta_{n}^{\\rm s}\\) were determined by computing the bulk modulus and four zone-boundary phonons at several volumes, and these results were used to calibrate a pseudopotential model at each volume.
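Before moving on to the phonons, note that the static lattice potential in Eqs. (16)-(17) is simple to evaluate numerically. The sketch below implements the fit and the \\(T=0\\) pressure it implies, \\(P_{0}=-d\\Phi_{0}^{\\rm s}/dV\\); the unit-conversion constant is computed from standard values of the Bohr radius and the Rydberg, and the function names are ours, for illustration only.

```python
# Sketch: the Straub et al. Birch-Murnaghan fit, Eqs. (16)-(17),
# with V in a0^3 and Phi0^s in mRy; P0 = -dPhi0^s/dV is returned in GPa.
import math

A0_M = 5.29177210903e-11                 # Bohr radius (m)
RY_J = 2.1798723611035e-18               # Rydberg energy (J)
GPA_A03_TO_MRY = 1e9 * A0_M**3 / (1e-3 * RY_J)   # 1 GPa*a0^3 in mRy

VB, C0 = 106.302, -287.7832              # a0^3, mRy
C = {2: 761.2029, 3: 1319.036, 4: -13661.06, 5: 50315.53}   # GPa

def phi0_solid(V):
    """Phi0^s(V) in mRy, Eq. (16)."""
    x = 0.5 * ((V / VB) ** (-2.0 / 3.0) - 1.0)
    s = sum(c / math.factorial(n) * x**n for n, c in C.items())
    return C0 + VB * s * GPA_A03_TO_MRY

def p0_solid(V, h=1e-4):
    """T=0 pressure in GPa via a central difference."""
    dphi = (phi0_solid(V + h) - phi0_solid(V - h)) / (2.0 * h)  # mRy/a0^3
    return -dphi / GPA_A03_TO_MRY

print(p0_solid(VB))   # ~0 GPa: V_b is the minimum of the fit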
The pseudopotential was then used to calculate phonon frequencies throughout the Brillouin zone, allowing the determination of \\(\\Theta_{0}^{\\rm s}\\), \\(\\Theta_{1}^{\\rm s}\\), and \\(\\Theta_{2}^{\\rm s}\\). Their results are shown in Table 1 and Figure 1. (The full set of results was not reported in [7].)

\\begin{table} \\begin{tabular}{c|c c c} \\hline \\(V\\) (\\(a_{0}^{3}\\)) & \\(\\Theta_{0}^{\\rm s}\\) (K) & \\(\\Theta_{1}^{\\rm s}\\) (K) & \\(\\Theta_{2}^{\\rm s}\\) (K) \\\\ \\hline 111.97 & 278.09 & 386.55 & 387.20 \\\\ 106.65 & 304.63 & 423.81 & 424.86 \\\\ 93.318 & 381.43 & 532.00 & 534.48 \\\\ 74.655 & 525.01 & 735.49 & 741.68 \\\\ 55.991 & 741.62 & 1044.7 & 1058.3 \\\\ 37.327 & 1109.5 & 1575.0 & 1605.4 \\\\ \\hline \\end{tabular} \\end{table} Table 1: DFT calculations of \\(\\Theta_{0}^{\\rm s}\\), \\(\\Theta_{1}^{\\rm s}\\), and \\(\\Theta_{2}^{\\rm s}\\) from [7].

To check these results, Straub et al. compared experimental phonon moments for Al at \\(T=80\\,\\)K and \\(P=1\\) bar based on Born-von Karman fits to neutron scattering data [9] with their predictions interpolated to the appropriate atomic volume of 110.7 \\(a_{0}^{3}\\). The experimental points, also shown in Figure 1, are in very good agreement with their calculations; hence these results for the \\(\\Theta_{n}^{\\rm s}\\) are acceptable for use in our EOS without modification.

To determine the \\(\\Theta_{n}^{\\rm s}\\) at any volume, we first constructed a functional fit to the \\(\\Theta_{0}^{\\rm s}\\) points, with the result \\[\\Theta_{0}^{\\rm s}(V)=2852.69+\\frac{17\\,319.9}{V}+2.33667\\,V-633.858\\ln(V), \\tag{18}\\] where \\(\\Theta_{0}^{\\rm s}\\) is in K and \\(V\\) is in \\(a_{0}^{3}\\). Then we noted that according to the DFT results both \\(\\Theta_{1}^{\\rm s}\\) and \\(\\Theta_{2}^{\\rm s}\\) approximately equal \\(e^{1/3}\\,\\Theta_{0}^{\\rm s}\\), so we made the same approximation using Eq. (18) for \\(\\Theta_{0}^{\\rm s}\\) to determine \\(\\Theta_{1}^{\\rm s}\\) and \\(\\Theta_{2}^{\\rm s}\\) at any volume. These functions are also shown in Figure 1.

The DFT calculations also provided data on the electronic density of states \\(n^{\\rm s}(\\epsilon)\\). Graphs of \\(n^{\\rm s}(\\epsilon)\\) for fcc and bcc Al at atomic volume 112.0 \\(a_{0}^{3}\\) (corresponding to \\(P=0\\) and \\(T=295\\) K) are shown in Figure 2, along with the free electron \\(n^{\\rm s}(\\epsilon)\\), for which \\[n^{\\rm s}(\\epsilon)=\\sqrt{\\frac{\\epsilon}{\\epsilon_{F}}}\\left(\\frac{3Z}{2\\epsilon_{F}}\\right)\\ \\ \\ \\ \\ {\\rm and}\\ \\ \\ \\ \\ \\epsilon_{F}=\\frac{\\hbar^{2}}{2m_{e}}\\left(\\frac{3\\pi^{2}Z}{V}\\right)^{2/3}, \\tag{19}\\] at \\(V=112.0\\)\\(a_{0}^{3}\\) and \\(Z=3\\). The Figure shows that the free electron model is a good approximation for either crystal structure, for electronic excitations to around \\(\\frac{1}{2}\\) Ry. The same match, at a slightly poorer level of approximation and for excitations to around 1 Ry, is found at our smallest atomic volume of 37 \\(a_{0}^{3}\\). For all volumes of our study and temperatures up to \\(T_{m}\\), the total electronic excitation contribution to the energy, entropy, and pressure is at most 5%, so the error introduced by using the free electron \\(n^{\\rm s}(\\epsilon)\\) in our calculations is negligible. Making this approximation, the normalization condition from Eq.
(8) becomes \\[F_{1/2}(\\beta\\mu)=\\frac{2}{3}(\\beta\\epsilon_{F})^{3/2}\\;, \\tag{20}\\] where \\(F_{1/2}\\) is the Fermi-Dirac integral of order 1/2 [11, 12]; solving this relation gives the chemical potential \\(\\mu\\) at each \\(V\\) and \\(T\\), and with it the free energy of Eq. (9).

The solid EOS that results from assembling all of these functions is reliable over a large range of volumes and temperatures; however, it is not in perfect agreement with the highly accurate experimental data that are available at low pressures. Specifically, experiments on Al at \\(T=0\\) and \\(P=0\\) show that [7] \\[V_{0}=110.6\\ a_{0}^{3},\\qquad E_{0}=-249\\ {\\rm mRy},\\] \\[B_{0}=79.4\\ {\\rm GPa},\\quad\\frac{dB_{0}}{dP}=4.7, \\tag{22}\\] but the EOS yields \\(V_{0}=107.3\\ a_{0}^{3}\\), which is outside the experimental error. Therefore, we chose to make a small correction to our purely theoretical free energy to agree with experiment. These quantities are obviously determined by the \\(T=0\\) form of the free energy, \\(F_{0}^{\\rm s}=\\Phi_{0}^{\\rm s}+\\frac{9}{8}k\\Theta_{1}^{\\rm s}\\) (see Eq. (5)); since \\(\\Theta_{1}^{\\rm s}\\) is already in excellent agreement with available experiment, we chose to modify only \\(\\Phi_{0}^{\\rm s}\\). To proceed, we noted that the data determine \\(P_{0}\\), the \\(T=0\\) pressure, in the vicinity of \\(V=V_{0}\\) by the relation \\[P_{0}(V) = P_{0}(V_{0})+\\left.\\frac{dP_{0}}{dV}\\right|_{V_{0}}(V-V_{0})+\\frac{1}{2}\\left.\\frac{d^{2}P_{0}}{dV^{2}}\\right|_{V_{0}}(V-V_{0})^{2}+\\cdots = -\\frac{B_{0}}{V_{0}}\\left(V-V_{0}\\right)+\\frac{B_{0}}{2V_{0}^{2}}\\left(1+\\frac{dB_{0}}{dP}\\right)\\,(V-V_{0})^{2}+\\cdots \\tag{23}\\] while at higher compressions we have no information to supplement the electronic structure result; so we decided to construct a \\(\\Phi_{0}^{\\rm s}\\) that correctly reproduces Eq. (23) near \\(V_{0}\\) but smoothly interpolates to Eq. (16) at lower volumes.

Figure 2: \\(n^{\\rm s}(\\epsilon)\\) for bcc and fcc Al at atomic volume 112.0 \\(a_{0}^{3}\\) from the calculations in [7]. The free electron \\(n^{\\rm s}(\\epsilon)\\) at this volume and \\(Z=3\\) from Eq. (19) is shown for comparison. (From [2].)

To do this, we computed \\(P_{0}\\) at 10 volumes between 110 \\(a_{0}^{3}\\) and 111.25 \\(a_{0}^{3}\\) using Eq. (23), and we also computed \\(P_{0}=-\\partial F_{0}^{\\rm s}/\\partial V\\) using the above form for \\(F_{0}^{\\rm s}\\), with \\(\\Phi_{0}^{\\rm s}\\) from Eq. (16), at 23 volumes between 30 \\(a_{0}^{3}\\) and 41 \\(a_{0}^{3}\\). We then performed a least-squares fit to these points using an expression similar to the Birch-Murnaghan form, but carried to a slightly higher order; after integrating, adjusting the constant of integration to correctly match \\(E_{0}\\) from Eq. (22), and subtracting off \\(\\frac{9}{8}k\\Theta_{1}^{\\rm s}\\), we had a new \\(\\Phi_{0}^{\\rm s}\\) given by \\[\\Phi_{0}^{\\rm s}(V) = -1.64615\\times 10^{6}+\\frac{2.07608\\times 10^{7}}{V^{2/3}}-\\frac{4.61515\\times 10^{8}}{V^{4/3}}+\\frac{5.71249\\times 10^{9}}{V^{2}}-\\frac{5.49998\\times 10^{10}}{V^{8/3}}+\\frac{3.71978\\times 10^{11}}{V^{10/3}}-\\frac{1.66284\\times 10^{12}}{V^{4}}+\\frac{4.41118\\times 10^{12}}{V^{14/3}}-\\frac{5.25064\\times 10^{12}}{V^{16/3}}+\\frac{2.26789\\times 10^{7}}{V}-220.716\\,V+236\\,788\\ln(V) \\tag{24}\\] where \\(\\Phi_{0}^{\\rm s}\\) is in mRy and \\(V\\) is in \\(a_{0}^{3}\\). This \\(\\Phi_{0}^{\\rm s}\\), which reproduces the data in Eq. (22) and interpolates smoothly to the DFT curve at higher compressions, is what we use in our EOS instead of Eq. (16).
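Stepping back to the electronic term for a moment, Eq. (20) is easy to solve numerically. The sketch below is a minimal illustration in generic units (\\(\\epsilon_{F}\\) and \\(kT\\) in the same energy unit); the function names are ours, not from [7] or [11, 12].

```python
# Sketch: solve Eq. (20), F_{1/2}(beta*mu) = (2/3)(beta*eps_F)^{3/2},
# for the free-electron chemical potential mu at temperature T.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit        # expit(z) = 1/(1 + e^{-z})

def fermi_dirac_half(y):
    """Fermi-Dirac integral of order 1/2 (tabulated in [11, 12])."""
    integrand = lambda x: np.sqrt(x) * expit(y - x)
    return quad(integrand, 0.0, np.inf)[0]

def chemical_potential(eps_F, kT):
    """mu(V, T) for the free-electron gas; eps_F and kT in the same units."""
    target = (2.0 / 3.0) * (eps_F / kT) ** 1.5
    f = lambda mu: fermi_dirac_half(mu / kT) - target
    # mu -> eps_F as T -> 0 and turns negative in the classical limit,
    # so this bracket contains the root.
    return brentq(f, -50.0 * kT, eps_F)
```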
The \\(T=0\\) pressure-volume curves constructed using both the original and new \\(\\Phi_{0}^{\\rm s}\\) are shown in Figure 3.

Figure 3: The \\(T=0\\) pressure-volume relations calculated using the original \\(\\Phi_{0}^{\\rm s}\\) and the new \\(\\Phi_{0}^{\\rm s}\\) we constructed. Notice how they differ in the vicinity of \\(V=110.6\\ a_{0}^{3}\\) but then agree more closely at lower volumes.

Our choice of a Birch-Murnaghan-like form was dictated by the fact that the Straub et al. result provides most of our information about the shape of \\(\\Phi_{0}^{\\rm s}\\); so our goal was to preserve that form insofar as was possible, interpolating back to their result as quickly as we could without introducing enough curvature to compromise agreement with \\(dB_{0}/dP\\). This correction to \\(\\Phi_{0}^{\\rm s}\\) constitutes a small change to the overall EOS; the effect of this change on the Hugoniot will be considered in the next Section. This modification completes the full solid free energy, so we can now consider the liquid.

For the liquid we need the same three functions that we needed for the solid, and we must also consider the melting curve \\(T_{m}(P)\\). Having chosen to use Eq. (6) for \\(F_{\\rm ph}^{\\rm s}\\), we certainly did the same for \\(F_{\\rm ph}^{l}\\), since the Hugoniot will obviously enter the liquid region only at rather high temperatures; thus we needed only \\(\\Theta_{0}^{l}\\) and \\(\\Theta_{2}^{l}\\). From experiment we know that Al is what is called in liquid dynamics a "normal melting element" (the entropy of melting at constant density is approximately \\(0.8\\,k\\) per atom), and we argue in [4] that the values of \\(\\Theta_{0}\\) in the solid and liquid phases of such an element are approximately equal. (Experimental and computational work supporting this conjecture is also discussed in [4].) Thus we took the liquid to have the same \\(\\Theta_{0}\\) as the solid. It is also true that in the liquid, \\(T\\) is typically much larger than \\(\\Theta_{2}^{\\rm s}\\) (for example, in liquid Al at normal density \\(T\\geq 2\\Theta_{2}^{\\rm s}\\)), rendering the first quantum correction to \\(F_{\\rm ph}^{l}\\) negligible (roughly 1% at normal density), so even if \\(\\Theta_{2}^{l}\\) differs from \\(\\Theta_{2}^{\\rm s}\\) by 25% or more, the impact on the phonon term will be very small; therefore we also used the same \\(\\Theta_{2}\\) in the liquid as in the solid.

Since the free electron model approximates the DFT result for \\(n^{\\rm s}(\\epsilon)\\) so well for both fcc and bcc Al (Figure 2), which correspond to two valleys in the many-body potential surface with rather different structures, we also expect this model to be a good approximation for \\(n^{l}(\\epsilon)\\), the density of states for the structure characteristic of a liquid. Since at all volumes and temperatures up to \\(5\\,T_{m}\\) (the relevance of this number will appear below) the electronic contribution to the thermodynamic functions does not exceed 25%, the error introduced by the free electron model is still acceptable.

We fixed the only remaining term in Eq. (11), \\(\\Phi_{0}^{l}\\), by the requirement that the Gibbs free energies of the solid and liquid match along the Al melting curve, Eq. (12). Boehler and Ross [13] suggested that \\[T_{m}(P)=933.45\\ {\\rm K}\\left(\\frac{P}{6.049\\ {\\rm GPa}}+1\\right)^{0.531} \\tag{25}\\] on the basis of their experimental work up to 80 GPa (0.8 Mbar), and experiments by McQueen et al.
[14] and Hanstrom and Lazor [15] and theoretical work by Pelissier [16] suggest that this curve continues to be accurate up to 200 GPa. In the absence of evidence to the contrary, we took Eq. (25) to be valid to higher pressures as needed. (As we will see later, our EOS will assume Eq. (25) no higher than 400 GPa.)

We computed \\(\\Phi_{0}^{l}\\) as follows: We made a guess for \\(\\Phi_{0}^{l}\\) not very different from \\(\\Phi_{0}^{\\rm s}\\), and then we used it and Eq. (25) to calculate the difference between the two Gibbs free energies, \\[\\Delta G(P)=G^{\\rm s}(P,T_{m}(P))-G^{l}(P,T_{m}(P)), \\tag{26}\\] at several hundred values of \\(P\\) over the entire pressure range considered in this study. We also calculated the liquid melt volume \\(V_{m}^{l}(P)\\) at each \\(P\\). If the rms average of Eq. (26) over all calculated points was not sufficiently small, we used the following easily verified fact: To first order, a small change \\(\\delta\\Phi_{0}^{l}\\) produces a small change \\(\\delta G^{l}(P,T_{m}(P))\\) given by \\(\\delta G^{l}(P,T_{m}(P))=\\delta\\Phi_{0}^{l}(V_{m}^{l}(P))\\). Thus we performed the substitution \\[\\Phi_{0}^{l}(V)\\rightarrow\\Phi_{0}^{l}(V)+\\Delta G(P_{m}^{l}(V)), \\tag{27}\\] where \\(\\Delta G\\) was computed by Eq. (26) and \\(P_{m}^{l}(V)\\) is the inverse of \\(V_{m}^{l}(P)\\), and calculated Eq. (26) again. We iterated until the rms average was sufficiently small (less than 0.001 mRy in our case), giving us the needed \\(\\Phi_{0}^{l}\\), which is shown in Figure 4 along with \\(\\Phi_{0}^{\\rm s}\\). We recorded \\(\\Phi_{0}^{l}\\) as a list of points, and we did not create a functional fit for it; instead we used an interpolator to calculate it and its derivative when needed.

It is at this point that the existence of other solid phases in Al, discussed earlier, affects the EOS of the liquid. It is likely that the liquid region borders the fcc crystal only over part of its boundary, beyond which the liquid borders the bcc region or other solid phases. In such a case, at sufficiently high pressures we should use the free energy appropriate for that solid phase, not the fcc free energy, in Eq. (26). This suggests that \\(\\Phi_{0}^{l}\\) may become inaccurate beyond densities in the neighborhood of 6 g/cm\\({}^{3}\\), where the \\(T=0\\) fcc-bcc phase transition occurs. We will take this fact into consideration when we discuss the limits of applicability of the EOS below.

Once we had the full solid and liquid EOS, we then solved Eq. (12) directly to compute \\(T_{m}(P)\\), verifying that we had reproduced the Boehler-Ross curve; our result is shown in Figure 5, together with the data from [14, 15] and some points from Pelissier's theoretical curve. (According to [14], their data point at 125 GPa marks the onset of melting along the Hugoniot. We will comment on this in the next Subsection.)

Now that we have the full two-phase EOS, it is profitable to compare our work with another EOS for Al, due to Moriarty et al. [17]. These authors also use a full lattice dynamics treatment of the crystal phonons, although they calculate their \\(g^{\\rm s}(\\omega)\\) in two separate ways, using both Moriarty's Generalized Pseudopotential Theory (GPT) and a local pseudopotential model with parameters chosen to match solid-phase EOS data.
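Before continuing the comparison with Moriarty et al., we pause to make the iteration of Eqs. (26)-(27) concrete. The following is a sketch only: `delta_G` and `V_melt_liquid` are hypothetical callables that would rebuild the liquid Gibbs free energy and melt volume from the current \\(\\Phi_{0}^{l}\\) table on each pass, with all quantities in consistent units (mRy, \\(a_{0}^{3}\\), GPa).

```python
# Sketch of the fixed-point iteration, Eqs. (26)-(27), for Phi0^l.
# phi0_l holds values on V_grid; delta_G(P, phi0_l, V_grid) evaluates
# Eq. (26) with the current table, and V_melt_liquid gives V_m^l(P).
import numpy as np

def refine_phi0_l(phi0_l, V_grid, pressures, delta_G, V_melt_liquid,
                  tol=1e-3, max_iter=100):
    """Iterate Eq. (27) until the rms of Eq. (26) drops below tol (mRy)."""
    for _ in range(max_iter):
        dG = np.array([delta_G(P, phi0_l, V_grid) for P in pressures])
        if np.sqrt(np.mean(dG**2)) < tol:
            break
        # Tabulate V_m^l(P) and read Delta G as a function of volume;
        # this realizes Phi0^l(V) -> Phi0^l(V) + Delta G(P_m^l(V)).
        Vm = np.array([V_melt_liquid(P, phi0_l, V_grid) for P in pressures])
        order = np.argsort(Vm)
        phi0_l = phi0_l + np.interp(V_grid, Vm[order], dG[order])
    return phi0_l
```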
Figure 4: \\(\\Phi_{0}^{l}\\) determined by matching the liquid and solid Gibbs free energies along the melt curve. \\(\\Phi_{0}^{\\rm s}\\) is also shown for comparison.

Figure 5: The melt curve \\(T_{m}(P)\\) computed from our full solid and liquid EOS (which reproduces the Boehler-Ross curve, Eq. (25)), the experimental data from [14, 15], and points from the theoretical curve in [16].

First, we strongly prefer to rely on DFT results, as we believe DFT has reached such a stage of maturity that it more accurately captures the physics contained in the real Hamiltonian of the system, which as we have emphasized we believe to be understood in sufficient depth that it should underlie all of our work. Second, in their treatment of the liquid phase Moriarty et al. rely on fluid variational theory, described in detail in [18], to compute the least upper bound to the "real" liquid free energy (from a liquid Hamiltonian based on pseudopotentials) that can be obtained from the free energy of a reference system; Moriarty et al. investigate hard-sphere, soft-sphere, and one-component plasma reference systems before settling on the soft-sphere system as providing the best bound. Based on the investigations summarized in [4], we claim that we have the actual Hamiltonian of the liquid itself, not a Hamiltonian based on pseudopotential theory; furthermore, this Hamiltonian decomposes naturally into a dominant term that produces a free energy that can be used directly (instead of requiring approximation by the free energy of a reference system) and remaining terms whose contributions to the free energy are known to be small (see the Introduction). The same point we made above for the solid phase applies; we argue that it is a better strategy in developing EOS to try to understand the true Hamiltonian of the system, and then to use it when doing statistical mechanics. Almost inevitably, one must make approximations (which we certainly have done here), but we believe we are in a better position to understand and improve upon them if the physical foundation of the EOS is as solid as we can make it.

Finally, let us make some conservative estimates of the range of applicability of this EOS. Any limits will arise from two sources: the validity of the approximation that \\(F_{\\rm ab}^{l}\\) is negligible in the liquid (see the Introduction), and the limited ranges over which the functions \\(\\Phi_{0}(V)\\), \\(g(\\omega)\\), \\(n(\\epsilon)\\), and \\(T_{m}(P)\\) are known. Let's consider each in turn. (1) We know from experiment that \\(F_{\\rm ab}^{l}\\) is negligible when \\(T\\leq 3\\,T_{m}\\) (again, see the Introduction), and judging from trends in the data we suspect \\(F_{\\rm ab}^{l}\\) will still be small up to \\(T\\approx 5\\,T_{m}\\), but clearly this term must become relevant as the nuclear motion becomes more gaslike. Thus we shall take care with any data at \\(\\rho\\) and \\(T\\) such that \\(T\\) approaches or exceeds \\(5\\,T_{m}(\\rho)\\).
(25) for the melt curve has received independent support only up to 200 GPa, so we must be cautious with the liquid EOS in regions beyond this point. We decided to be brave and accept the melt curve as valid up to 400 GPa; this corresponds to a liquid density of 6.15 g/cm\\({}^{3}\\), and since this is not far from the probable location of the solid fcc-bcc transition, we take it as the density limit of our EOS. (Even if we did not have this concern, we would be restricted to densities below 8.17 g/cm\\({}^{3}\\), where electronic structure results are available.) Also, the free electron approximation to \\(n^{\\rm s}(\\epsilon)\\) has been validated only for \\(\\epsilon-\\epsilon_{F}\\) up to 1/2 Ry, or 6.8 eV, at low compression and 1 Ry, or 13.6 eV, at high compression, but at higher temperatures the electronic energy and entropy are sensitive to the details of \\(n^{\\rm s}(\\epsilon)\\) to energies above these limits. We estimated the values of \\(T\\) that begin to probe the unvalidated region of \\(n^{\\rm s}(\\epsilon)\\) (roughly \\(3kT=\\epsilon-\\epsilon_{F}\\)), and we found that over our valid density range the \\(T=5\\,T_{m}\\) limit always took precedence. Hence this limit is not relevant for us, but we mention it for completeness, as it may become a concern if the EOS is extended to higher densities.

Figure 6 shows the limits \\(\\rho\\leq 6.15\\) g/cm\\({}^{3}\\) and \\(T\\leq 5\\,T_{m}\\) of the EOS in \\(T-P\\) space, together with the melt curve and the Hugoniot (see the next Subsection), while Figure 7 shows the same three features in \\(T-\\rho\\) space. In this Figure, the melt curve becomes a two-phase region, which we will consider in more detail in the next Subsection. We are confident that this EOS is valid within these limits, but we don't know how far beyond them the inaccuracies begin to appear; thus we will not be shy about considering data not too far outside this range.

### Comparison with Hugoniot data

If a shock wave travels at speed \\(u_{s}\\) through a sample of material, accelerating its particles from rest to speed \\(u_{p}\\) and changing its density, atomic volume, pressure, and internal energy per atom from \\(\\rho_{0}\\), \\(V_{0}\\), \\(P_{0}\\), and \\(E_{0}\\) to \\(\\rho\\), \\(V\\), \\(P\\), and \\(E\\), then (assuming thermal equilibrium before and after the shock) these quantities must satisfy the Rankine-Hugoniot relations, \\[\\rho(u_{s}-u_{p}) = \\rho_{0}u_{s}\\] \\[P-P_{0} = \\rho_{0}u_{s}u_{p}\\] \\[E-E_{0} = \\frac{1}{2}(P_{0}+P)(V_{0}-V), \\tag{28}\\] derived from considerations of mass, momentum, and energy conservation. (It is assumed that the wave is steady and strength effects are negligible.)

Figure 6: The limits of our EOS, the melt curve, and the Hugoniot.

Figure 7: The limits of our EOS, the two-phase region (solid below the region, liquid above), and the Hugoniot.

Figure 8: The \\(u_{s}\\)-\\(u_{p}\\) Hugoniot for Al predicted by our EOS together with experimental data from [19, 20, 21, 22, 23, 24]. The intersection of the Hugoniot with the limit of validity of the EOS (dot-dash line) is also indicated.

By solving these equations together with the EOS, which relates \\(P\\), \\(V\\), and \\(E\\), we can determine the Hugoniot, the curve of all possible end states of the shocked material. We used our EOS and Eqs.
(28) to compute \\(u_{s}\\) as a function of \\(u_{p}\\) and \\(P\\) as a function of \\(\\rho\\) along the Al Hugoniot; the results are shown in Figures 8 and 9 along with the intersection of the Hugoniot with the limit of validity of the EOS. Hugoniot data from several sources [19, 20, 21, 22, 23, 24] are also included. The low-pressure region of the Hugoniot is highlighted in Figures 10 and 11, and the intermediate-pressure region, including the intersections with the phase boundaries, is shown in Figures 12 and 13.

Figure 9: The \\(P\\)-\\(\\rho\\) Hugoniot for Al predicted by our EOS together with experimental data from [19, 20, 21, 22, 23, 24]. The intersection of the Hugoniot with the limit of validity of the EOS (dot-dash line) is also indicated.

Figure 10: The \\(u_{s}\\)-\\(u_{p}\\) Hugoniot in the low-\\(P\\) region, with data from [19, 21, 22, 23]. The \\(u_{p}\\) error bars on the circles [23] appear as slightly broadened vertical lines.

Figure 11: The \\(P\\)-\\(\\rho\\) Hugoniot in the low-\\(P\\) region, with data from [19, 21, 22, 23].

Figure 12: The \\(u_{s}\\)-\\(u_{p}\\) Hugoniot in the intermediate-\\(P\\) region, including intersections with the phase boundaries, with data from [23, 24].

Figure 13: The \\(P\\)-\\(\\rho\\) Hugoniot in the intermediate-\\(P\\) region, including intersections with the phase boundaries, with data from [23, 24].

Three important considerations in selecting which data to include are (1) the initial densities of the samples, (2) the quality of the experimental technique, and (3) whether the measurements were absolute or relative. All of the available data were taken using Al alloys with densities that differ from the known pure metal value of 2.70 g/cm\\({}^{3}\\) (predicted correctly by our EOS); some alloys are as close as 2.71 g/cm\\({}^{3}\\) while others differ much more. Since Hugoniots in general are quite sensitive to the initial density, we chose to compare only with the data for which \\(\\rho_{0}\\) clustered around 2.71 g/cm\\({}^{3}\\). (Thus we used only one data point from [20], which mainly concerns porous materials. All of the data from the other references were used.) We also avoided sources which gathered data using unusual shock wave geometries (such as [25]), and we also chose not to use the results of indirect or relative measurements, such as [26, 27, 28], preferring to rely on the absolute measurements that are available. Finally, we did not use the few data points available (primarily nuclear-driven) that lie very far beyond the limits of applicability of our EOS (but see below).

The theoretical Hugoniot compares well with both the \\(u_{s}\\)-\\(u_{p}\\) and the \\(P\\)-\\(\\rho\\) data all the way up to the predicted limit of its validity, at approximately 500 GPa (5 Mbar). More specifically, theory agrees with experiment at \\(P\\lesssim 40\\) GPa (Figures 10 and 11); at 40-125 GPa, theory falls below the experimental error bars by around 1% at most (Figures 10 through 13); and theory again lies within the experimental error bars through the liquid phase (Figures 8, 9, 12, and 13). (We recall that given percentage errors in \\(u_{s}\\) and \\(u_{p}\\) correspond to roughly the same percentage errors in the \\(P\\)-\\(\\rho\\) plane.) The presence of theoretical error only in the solid phase is likely due to strength effects, which are present in the solid but not in the liquid, and which are neglected in our Hugoniot calculations.
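Solving the jump conditions against an EOS reduces to one-dimensional root finding. The sketch below is a minimal illustration rather than the code used in this work: it assumes hypothetical callables `pressure(V, T)` and `energy(V, T)` in consistent units and locates the Hugoniot state at a given compressed volume via Eq. (28).

```python
# Sketch: one point on the Hugoniot from Eq. (28).  pressure(V, T) and
# energy(V, T) are assumed EOS callables; consistent units throughout
# (e.g. SI per unit mass, so that u_s and u_p come out in m/s).
import numpy as np
from scipy.optimize import brentq

def hugoniot_point(V, V0, P0, E0, pressure, energy, T_lo=300.0, T_hi=5.0e4):
    """Return (T, P, u_s, u_p) on the Hugoniot at compressed volume V < V0."""
    def energy_balance(T):
        # Third line of Eq. (28): E - E0 = (1/2)(P0 + P)(V0 - V).
        return energy(V, T) - E0 - 0.5 * (P0 + pressure(V, T)) * (V0 - V)
    T = brentq(energy_balance, T_lo, T_hi)
    P = pressure(V, T)
    # Mass and momentum conservation (first two lines of Eq. (28)) give:
    u_s = V0 * np.sqrt((P - P0) / (V0 - V))
    u_p = np.sqrt((P - P0) * (V0 - V))
    return T, P, u_s, u_p
```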
Furthermore, as Figure 13 shows, we predict that the Hugoniot crosses the two-phase region between \\(\\rho=4.43\\) g/cm\\({}^{3}\\) and \\(\\rho=4.58\\) g/cm\\({}^{3}\\), corresponding to a range in \\(P\\) from 126 to 156 GPa; this agrees very well with [14], in which melting was found to occur between 125 and 150 GPa. (We note that [14] used Al 2024, an alloy whose density is sufficiently different from pure Al that we did not use Hugoniot data taken with that alloy in our Figures. We consider their melting results because, as we saw in the previous Subsection, their data are consistent with other experiments that did use pure Al.) The correction to \\(\\Phi_{0}^{\\rm s}\\) from the previous Subsection, shown in Figure 3, shifts the Hugoniot at pressures below 30 GPa, bringing it into excellent agreement with experiment, while at pressures above 60 GPa or so, the effect on the Hugoniot is insignificant.

We have also compared our results with data just beyond the EOS limits of validity; the points in [29] that match our initial density (one of which is a reanalysis of the single point in [30]), lying at about 10 Mbar, fall noticeably below our Hugoniot, and their consistency with the very-high-pressure points of Ragan [31, 32] strongly suggests that they represent the true Hugoniot, which thus falls beneath our prediction at higher pressures. Possible errors in our EOS at such densities include, in what we estimate to be decreasing order of importance, (1) the shift from the fcc to the bcc crystal, with a corresponding change in \\(\\Phi_{0}^{l}\\) as discussed in the last Subsection, (2) deviations in the melt curve \\(T_{m}(P)\\) from the Boehler-Ross form at higher pressures (the densities of the points in [29] correspond to melt pressures around 620 GPa according to our EOS), (3) the fact that at such high \\(T\\) the EOS is probing the high-energy region of \\(n^{\\rm s}(\\epsilon)\\), and (4) the neglect of the anharmonic and boundary contributions to the liquid EOS (\\(T\\) is only slightly below \\(5\\,T_{m}\\) at these points according to our EOS).

## 4 Conclusions

Drawing upon theory developed in [2], we have described a framework for constructing EOS for elemental solids and liquids, and we have discussed experimental and theoretical results indicating that the framework remains highly accurate at low pressures when certain small effects (anharmonicity, boundaries, electron-phonon interactions) are neglected. After displaying the resulting formulas for the Helmholtz free energy, we considered the information one needs to evaluate them, and we discussed the combination of experiment and theoretical work that could be used to get this information. Finally, as an illustration we constructed an EOS for Al, established its range of validity based on the inputs to the EOS, and compared it with Hugoniot data to 5 Mbar; our EOS matched the data to the accuracy we expected based on the low-pressure results. We consider the primary advantage of this method to lie in the fact that it incorporates into the decomposition of the Hamiltonian a great deal of accumulated knowledge of condensed matter physics both for the solid and liquid phases (for example, the fact that the electronic ground state energy is the most appropriate potential for the nuclear motion).
If we have indeed captured the correct physics (and we expect no new physics to enter until the relativistic domain), then the EOS should have the right functional form, which means that if it is shown to agree with available data, then we have reason to believe that it will be equally accurate in regions where no data are available; and making predictions where we have no data is the point of having an EOS to begin with. Furthermore, the better the foundation we can build, the better our position for intelligently investigating and controlling our approximations.

This discussion bears on the second goal of this paper, which was to learn whether the approximation of neglecting anharmonic, boundary, and electron-phonon effects remains useful at higher pressures. We already knew, as discussed in the Introduction, that at low \\(P\\) the anharmonic and electron-phonon terms are small, and we found this from direct calculations; we also knew that for several elements, over a range of \\(T\\), at low \\(P\\) the approximations in question yielded thermal energies and entropies that disagreed with experiments by 5% at most. Our work here has shown that for one material at much higher \\(P\\) the approximations yield results that match data along a single curve, the Hugoniot, to comparable accuracy. Based on our arguments above, that the EOS incorporates the correct physics and is thus of the correct functional form, we claim to have shown that this Al EOS is trustworthy throughout its range of validity, for all \\(T\\) and \\(P\\).

The main disadvantage of this method is that it relies on many inputs (\\(\\Phi_{0}\\), \\(g(\\omega)\\), and \\(n(\\epsilon)\\) for each phase) which may be available only over limited ranges, and each of these limits also restricts the range of validity of the EOS. Our Al example amply illustrates this problem; with a compression range from a little under one to just over two, and a temperature range that reaches only to slightly under 4 eV, this EOS is inadequate for many applications at the national laboratories. We argue, however, that this problem does not indicate a deficiency in the approach; it only underscores the need for many more DFT calculations of these quantities for more materials with ever greater accuracy over ever larger ranges.

In the meantime, though, we would like to be able to say something about elemental solid and liquid EOS at higher compressions. We do know that as density increases, the electrons come to dominate the free energy, and it is also known that Thomas-Fermi-Dirac (TFD) theory correctly describes the electrons in the limit of infinite density. This suggests the following possibility: Construct an EOS using the present techniques to compressions as high as the available experimental or DFT results allow, and then interpolate between these results and the predictions of TFD for higher compressions. This raises an important question: At what pressures does TFD begin to become accurate? Conventional wisdom, usually traced back to Feynman et al. [33], has held that TFD becomes reliable starting at \\(P\\approx 10\\) Mbar, but other work [34] suggests that TFD (or TF in their case, but TF and TFD converge at high pressures) deviates noticeably from electronic structure results until 100 Mbar at least. This suggests that the pressure threshold at which TFD is trustworthy has not yet been adequately established; it would be of great interest to settle this question more definitively.

**Acknowledgment** This work was supported by the U. S.
DOE through contract W-7405-ENG-36.

## References

* [1] S. P. Lyon and J. D. Johnson, "T-1 Handbook of the Sesame Equation of State Library," Los Alamos Report LA-CP-98-100 (unpublished).
* [2] D. C. Wallace, _Statistical Physics of Crystals and Liquids_ (World Scientific, Singapore, 2003).
* [3] M. Born and K. Huang, _Dynamical Theory of Crystal Lattices_ (Oxford Univ Press, New York, 1988).
* [4] E. D. Chisolm and D. C. Wallace, J. Phys.: Condens. Matter **13**, R739 (2001).
* [5] D. C. Wallace and B. E. Clements, Phys. Rev. E **59**, 2942 (1999).
* [6] E. D. Chisolm, B. E. Clements, and D. C. Wallace, Phys. Rev. E **63**, 031204 (2001); **64**, 019902 (2001).
* [7] G. K. Straub, J. B. Aidun, J. M. Wills, C. R. Sanchez-Castro, and D. C. Wallace, Phys. Rev. B **50**, 5055 (1994).
* [8] D. A. Young, _Phase Diagrams of the Elements_ (Univ of California Press, Berkeley, 1991).
* [9] H. Schober and P. H. Dederichs, in _Metals: Phonon States, Electron States and Fermi Surfaces_, edited by K.-H. Hellwege and J. L. Olsen, Landolt-Bornstein, New Series, Group III, Vol. 13, Pt. a (Springer-Verlag, Berlin, 1981).
* [10] A. H. Wilson, _The Theory of Metals_, 2nd ed. (Cambridge Univ Press, Cambridge, 1954).
* [11] J. McDougall and E. C. Stoner, Philos. Trans. R. Soc. London, Ser. A **237**, 67 (1938).
* [12] P. Rhodes, Proc. Roy. Soc. London, Ser. A **204**, 396 (1950).
* [13] R. Boehler and M. Ross, Earth Planet. Sci. Lett. **153**, 23 (1997).
* [14] R. G. McQueen, J. N. Fritz, and C. E. Morris, in _Shock Waves in Condensed Matter 1983_, edited by J. R. Asay, R. A. Graham, and G. K. Straub (Elsevier Science Publishers, New York, 1984).
* [15] A. Hanstrom and P. Lazor, J. Alloys Compd. **305**, 209 (2000).
* [16] J. L. Pelissier, Physica **128A**, 363 (1984).
* [17] J. A. Moriarty, D. A. Young, and M. Ross, Phys. Rev. B **30**, 578 (1984).
* [18] N. W. Ashcroft and D. Stroud, in _Solid State Physics_, edited by F. Seitz, D. Turnbull, and H. Ehrenreich, Vol. 33, p. 1 (Academic, New York, 1978).
* [19] L. V. Al'tshuler, S. B. Kormer, A. A. Bakanova, and R. F. Trunin, Sov. Phys. JETP **11**, 573 (1960).
* [20] S. B. Kormer, A. I. Funtikov, V. D. Urlin, and A. N. Kolesnikova, Sov. Phys. JETP **15**, 477 (1962).
* [21] _LASL Shock Hugoniot Data_, edited by S. P. Marsh (Univ of California Press, Berkeley, 1980).
* [22] L. V. Al'tshuler, A. A. Bakanova, I. P. Dudoladov, E. A. Dynin, R. F. Trunin, and B. S. Chekin, J. Appl. Mech. Techn. Phys. **22**, 145 (1981).
* [23] A. C. Mitchell and W. J. Nellis, J. Appl. Phys. **52**, 3363 (1981).
* [24] M. D. Knudson, R. W. Lemke, C. A. Hall, C. Deeney, and J. R. Asay, J. Appl. Phys. (in press).
* [25] I. C. Skidmore and E. Morris, in _Thermodynamics of Nuclear Materials_, p. 173 (International Atomic Energy Association, Vienna, 1962).
* [26] L. V. Al'tshuler, N. N. Kalitkin, L. V. Kuz'mina, and B. S. Chekin, Sov. Phys. JETP **45**, 167 (1977).
* [27] R. F. Trunin, Bull. Acad. Sci. USSR, Phys. Ser. (Engl. Transl.) **22**, 103 (1986).
* [28] B. L. Glushak, A. P. Zharkov, M. V. Zhernokletov, V. Ya. Ternovoi, A. S. Filimonov, and V. E. Fortov, Sov. Phys. JETP **69**, 739 (1989).
* [29] V. A. Simonenko, N. P. Voloshin, A. S. Vladimirov, A. P. Nagibin, V. N. Nogin, V. A. Popov, V. A. Vasilenko, and Yu. A. Shoidin, Sov. Phys. JETP **61**, 869 (1985).
* [30] A. P. Volkov, N. P. Voloshin, A. S. Vladimirov, V. N. Nogin, and V. A. Simonenko, JETP Lett. **31**, 588 (1980).
* [31] C. E. Ragan, Phys. Rev. A **25**, 3360 (1982).
* [32] C. E. Ragan, Phys. Rev. A **29**, 1391 (1984).
* [33] R. P. Feynman, N. Metropolis, and E. Teller, Phys. Rev. **75**, 1561 (1949).
* [34] W. G. Zittel, J. Meyer-ter-Vehn, J. C. Boettger, and S. B. Trickey, J. Phys. F: Met. Phys. **15**, L247 (1985).
We propose a means for constructing highly accurate equations of state (EOS) for elemental solids and liquids essentially from first principles, based upon a particular decomposition of the underlying condensed matter Hamiltonian for the nuclei and electrons. We also point out that at low pressures the neglect of anharmonic and electron-phonon terms, both contained in this formalism, results in errors of less than 5% in the thermal parts of the thermodynamic functions. Then we explicitly display the forms of the remaining terms in the EOS, commenting on the use of experiment and electronic structure theory to evaluate them. We also construct an EOS for Aluminum and compare the resulting Hugoniot with data up to 5 Mbar, both to illustrate our method and to see whether the approximation of neglecting anharmonicity et al. remains viable to such high pressures. We find a level of agreement with experiment that is consistent with the low-pressure results. LA-UR-03-3264
# Low Mass Neutron Stars and the Equation of State of Dense Matter

J. Carriere\\({}^{*}\\) and C. J. Horowitz\\({}^{\\dagger}\\), Nuclear Theory Center and Dept. of Physics, Indiana University, Bloomington, IN 47405

J. Piekarewicz\\({}^{\\ddagger}\\), Department of Physics, Florida State University, Tallahassee, FL 32306

November 4, 2021

## I Introduction

The structure of neutron stars, particularly their masses and radii, depends critically on the equation of state (EOS) of dense matter [1]. New measurements of masses and radii by state-of-the-art observatories should place important constraints on the EOS. Observing a rapid change of the EOS with density could signal a transition to an exotic phase of matter. Possibilities for new high density phases include pion or kaon condensates [2], strange quark matter [3], and/or a color superconductor [4, 5]. Measuring neutron-star radii \\(R(M)\\) for a large range of neutron star masses \\(M\\) is attractive as it would allow one to directly deduce the EOS [6], that is, the pressure as a function of the energy density \\(P(\\epsilon)\\). While the masses of various neutron stars are accurately known [7], precise measurements of their radii do not yet exist. Therefore, several groups are devoting considerable effort to measuring neutron-star radii. Often one deduces the surface temperature \\(T_{\\infty}\\) and the luminosity \\(L\\) of the star from spectral and distance measurements, respectively. Assuming a black-body spectrum, these measurements determine the surface area, and thus the effective radius \\(R_{\\infty}\\), of the star from the Stefan-Boltzmann law: \\[L=4\\pi\\sigma R_{\\infty}^{2}T_{\\infty}^{4}. \\tag{1}\\] Opportunities for precision measurements on neutron-star radii include the isolated neutron star RX J185635-3754 [8, 9, 10] and quiescent neutron stars in globular clusters, such as CXOU 132619.7-472910.8 [11], where distances are accurately known. Moreover, Sanwal and collaborators have recently detected absorption features in the radio-quiet neutron star 1E 1207.4-5209 [12] that may provide the mass-to-radius ratio of the star through the determination of the gravitational redshift of the spectral lines. These observations are being complemented by studies that aim at constraining the composition of the neutron-star atmosphere [13]. Finally, models of rotational glitches place a lower limit on the radius of the Vela pulsar at \\(R_{\\infty}\\gtrsim 12\\) km [14].

While a determination of the mass-radius relation \\(R(M)\\) for a variety of neutron stars would uniquely determine the equation of state, unfortunately all accurately determined masses to date fall within a very small range. Indeed, a recent compilation by Thorsett and Chakrabarty of several radio binary pulsars places their masses in the narrow range of \\(1.25-1.44\\ M_{\\odot}\\) [7]. Note that several X-ray binaries appear to have larger masses, perhaps because of accretion. These include Cyg X-2 with a mass of \\(1.8\\pm 0.2\\,M_{\\odot}\\) [15], Vela X-1 with \\(1.9\\ M_{\\odot}\\) [16], and 4U 1700-37 [17]. If confirmed, they could provide additional information on the high density EOS. However, these mass determinations are not without controversy [18, 19]. On the other hand, it may be difficult to form low mass neutron stars from the collapse of heavier Chandrasekhar mass objects. If so, information on the low density EOS may not be directly available from neutron stars.
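As a small aside on Eq. (1): given a measured luminosity and surface temperature, the effective radius follows immediately. The snippet below is a minimal illustration with made-up input values, not data from any of the observations cited above.

```python
# Sketch: effective radius from the Stefan-Boltzmann law, Eq. (1).
import math

SIGMA_SB = 5.670374419e-8        # W m^-2 K^-4

def effective_radius_km(L, T):
    """R_infinity in km from L = 4*pi*sigma*R^2*T^4 (L in W, T in K)."""
    return math.sqrt(L / (4.0 * math.pi * SIGMA_SB * T**4)) / 1.0e3

# Hypothetical example: L ~ 10^26 W at T_infinity ~ 6 x 10^5 K.
print(effective_radius_km(1.0e26, 6.0e5))   # ~30 km
```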
Thus, it is important to make maximum use of any \\(R(M)\\) measurements even if these are available for only a limited range of masses. Additional information on the low density EOS may be obtained from precision measurements on atomic nuclei. For example, the neutron radius of a heavy nucleus, such as \\({}^{208}\\)Pb, is closely related to the pressure of neutron rich matter [20]. Indeed, heavy nuclei develop a neutron-rich skin in response to this pressure. The higher the pressure the further the neutrons are pushed out against surface tension, thereby generating a larger neutron radius. However, nuclear properties depend only on the EOS at normal (in the interior) and below (in the surface) nuclear-matter saturation density (\\(\\rho_{0}\\approx 0.15\\) nucleons/fm\\({}^{3}\\)). This is in contrast to conventional 1.4 \\(M_{\\odot}\\) neutron stars that, with central densities of several times \\(\\rho_{0}\\), also depend on the high-density component of the EOS. This is not the case for low mass neutron stars (of about \\(1/2\\)\\(M_{\\odot}\\)). Reaching central densities near \\(\\rho_{0}\\), _low mass neutron stars probe the EOS at similar densities as atomic nuclei_. Therefore, one could infer the radius of low mass neutron stars from detailed measurements on atomic nuclei. The radii of low mass neutron stars inferred in this way, combined with the measured radius of a \\(1.4M_{\\odot}\\) star, may enable one to deduce the density dependence of the EOS. (For a recent discussion on the minimum stable mass of a neutron star see Ref. [21].)

The parity radius experiment at Jefferson Laboratory [22] aims to measure accurately and model-independently the root-mean-square neutron radius (\\(R_{n}\\)) of \\({}^{208}\\)Pb via parity violating elastic electron scattering [23]. Such an experiment probes neutron densities because the weak vector charge of a neutron is much larger than that of a proton. The goal of the experiment is to measure \\(R_{n}\\) to a 1% accuracy (within \\(\\approx\\pm 0.05\\) fm).

The outline of the paper is as follows. In Sec. II we present a relativistic effective field theory formalism to study relationships between the neutron radius of \\({}^{208}\\)Pb and the radii of neutron stars. Uncertainties in these relationships are estimated by considering a wide range of effective field theory parameters, all of them constrained by known nuclear properties. This formalism has been used previously to study correlations between the neutron radius of \\({}^{208}\\)Pb and the properties of the neutron-star crust [24], the radii of 1.4 \\(M_{\\odot}\\) neutron stars [25], and the direct URCA cooling of neutron stars [26]. As the radius of low mass neutron stars depends on the solid crust of nonuniform matter, a treatment of the crust is discussed in Sec. III. Our results, presented in Sec. IV, show a strong correlation between the radius of a low mass neutron star and the neutron radius \\(R_{n}\\) of \\({}^{208}\\)Pb that is essentially model independent. This is because the structure of both objects depends on the EOS at similar densities. In contrast, the radius of a 1.4 \\(M_{\\odot}\\) neutron star shows a considerable model dependence. This is because a 1.4 \\(M_{\\odot}\\) neutron star is also sensitive to the EOS at higher densities, and the high density EOS is not constrained by nuclear observables. In Sec. V we conclude that properties of low mass neutron stars can be inferred from measuring properties of atomic nuclei.
In particular, the radius of a 1/2 \\(M_{\\odot}\\) neutron star can be deduced from a measurement of \\(R_{n}\\) in \\({}^{208}\\)Pb. One will then be able to directly compare this inferred radius to the measured radius of an \\(\\approx 1.4\\)\\(M_{\\odot}\\) neutron star to gain information on the density dependence of the EOS. Thus, even if a mass-radius measurement for a single (\\(\\approx 1.4\\)\\(M_{\\odot}\\)) neutron star is available, one can use the atomic nucleus to gain information on the density dependence of the EOS. This should provide the most precise determination of the density dependence of the EOS to date and should indicate whether a transition to a high density exotic phase of matter is possible or not.

## II Formalism

Our starting point is the relativistic effective-field theory of Ref. [27] supplemented with new couplings between the isoscalar and the isovector mesons. This allows us to correlate nuclear observables, such as the neutron radius of \\({}^{208}\\)Pb, with neutron star properties. We will explore uncertainties in these correlations by considering a range of model parameters. The model has been introduced and discussed in detail in several earlier references [24, 25, 26], yet a brief summary is included here for completeness. The interacting Lagrangian density is given by [24, 27] \\[{\\cal L}_{\\rm int} = \\bar{\\psi}\\left[g_{\\rm s}\\phi\\!-\\!\\left(g_{\\rm v}V_{\\mu}\\!+\\!\\frac{g_{\\rho}}{2}\\mathbf{\\tau}\\cdot{\\bf b}_{\\mu}\\!+\\!\\frac{e}{2}(1\\!+\\!\\tau_{3})A_{\\mu}\\right)\\gamma^{\\mu}\\right]\\psi-\\frac{\\kappa}{3!}(g_{\\rm s}\\phi)^{3}\\!-\\!\\frac{\\lambda}{4!}(g_{\\rm s}\\phi)^{4}\\!+\\!\\frac{\\zeta}{4!}g_{\\rm v}^{4}(V_{\\mu}V^{\\mu})^{2}+g_{\\rho}^{2}\\,{\\bf b}_{\\mu}\\cdot{\\bf b}^{\\mu}\\left[\\Lambda_{\\rm s}g_{\\rm s}^{2}\\phi^{2}+\\Lambda_{\\rm v}g_{\\rm v}^{2}V_{\\mu}V^{\\mu}\\right]\\;. \\tag{2}\\] The model contains an isodoublet nucleon field (\\(\\psi\\)) interacting via the exchange of two isoscalar mesons, the scalar sigma (\\(\\phi\\)) and the vector omega (\\(V^{\\mu}\\)), one isovector meson, the rho (\\({\\bf b}^{\\mu}\\)), and the photon (\\(A^{\\mu}\\)). In addition to meson-nucleon interactions the Lagrangian density includes scalar and vector self-interactions. Omega-meson self-interactions \\(\\zeta\\) soften the equation of state at high density. Finally, the nonlinear couplings \\(\\Lambda_{\\rm s}\\) and \\(\\Lambda_{\\rm v}\\) are included to modify the density-dependence of the symmetry energy \\(a_{\\rm sym}(\\rho)\\) [24, 25, 26]. We employ Eq. (2) in a mean field approximation where the meson fields are replaced by their ground state expectation values.

The coupling constants in Eq. (2) are fit to nuclear matter and finite nuclei properties. All of the parameter sets considered here, namely, NL3 [28], S271 [24], and Z271 [24], reproduce the following properties of symmetric nuclear matter: saturation at a Fermi momentum of \\(k_{F}=1.30\\) fm\\({}^{-1}\\) with a binding energy per nucleon of \\(-16.24\\) MeV and an incompressibility of \\(K=271\\) MeV. The various parameter sets differ in their effective masses at saturation density, in their \\(\\omega\\)-meson self interactions (which are included for Z271 and neglected for NL3 and S271) and in the nonlinear couplings \\(\\Lambda_{\\rm s}\\) and \\(\\Lambda_{\\rm v}\\) (see Table 1). Note that the NL3 parametrization has been used extensively to reproduce a variety of nuclear properties [28].
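To make the quoted saturation point concrete, the sketch below evaluates the mean-field energy per nucleon of symmetric matter for the NL3 couplings of Table 1 (for which \\(\\zeta=\\Lambda_{\\rm s}=\\Lambda_{\\rm v}=0\\), so those terms drop out). The formulas are the standard QHD mean-field expressions, written here as an illustration rather than taken from the paper.

```python
# Sketch: saturation check for symmetric nuclear matter with the NL3
# couplings of Table 1 (zeta = Lambda_s = Lambda_v = 0).  Units: fm, MeV.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

HBARC = 197.32698                                   # MeV fm
M, MS, MV = 939/HBARC, 508.194/HBARC, 782.5/HBARC   # masses in fm^-1
GS2, GV2 = 104.3871, 165.5854
KAPPA, LAM = 3.8599/HBARC, -0.0159049               # fm^-1, dimensionless

def scalar_density(mstar, kf):
    f = lambda k: k**2 * mstar / np.sqrt(k**2 + mstar**2)
    return (2/np.pi**2) * quad(f, 0, kf)[0]

def kinetic_energy_density(mstar, kf):
    f = lambda k: k**2 * np.sqrt(k**2 + mstar**2)
    return (2/np.pi**2) * quad(f, 0, kf)[0]

def energy_per_nucleon(kf):
    rho = 2*kf**3/(3*np.pi**2)
    # Self-consistent sigma field equation for Phi = g_s*phi_0:
    def field_eq(phi):
        return ((MS**2/GS2)*phi + 0.5*KAPPA*phi**2 + (LAM/6)*phi**3
                - scalar_density(M - phi, kf))
    phi = brentq(field_eq, 1e-6, M - 1e-6)
    eps = ((MS**2/(2*GS2))*phi**2 + (KAPPA/6)*phi**3 + (LAM/24)*phi**4
           + GV2*rho**2/(2*MV**2) + kinetic_energy_density(M - phi, kf))
    return HBARC*eps/rho - 939.0      # binding energy per nucleon (MeV)

print(energy_per_nucleon(1.30))       # expect roughly -16 MeV
```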
The symmetry energy at saturation density is not well constrained experimentally. However, an average of the symmetry energy at saturation density and the surface symmetry energy is constrained by the binding energy of nuclei. Thus, the following prescription has been adopted: the value of the \\(NN\\rho\\) coupling constant is adjusted so that all parameter sets have a symmetry energy of 25.67 MeV at \\(k_{F}\\!=\\!1.15\\) fm\\({}^{-1}\\). This ensures accurate binding energies for heavy nuclei, such as \\({}^{208}\\)Pb. Following this prescription the symmetry energy at saturation density is predicted to be 37.3, 36.6, and 36.3 MeV for parameter sets NL3, S271, and Z271, respectively (for \\(\\Lambda_{\\rm s}\\!=\\!\\Lambda_{\\rm v}\\!=\\!0\\)). Changing \\(\\Lambda_{\\rm s}\\) or \\(\\Lambda_{\\rm v}\\) changes the density dependence of the symmetry energy by changing the effective rho-meson mass. In general, increasing either \\(\\Lambda_{\\rm s}\\) or \\(\\Lambda_{\\rm v}\\) causes the symmetry energy to grow more slowly with density.

The neutron radius of \\({}^{208}\\)Pb depends on the density dependence of the symmetry energy. A large pressure for neutron matter pushes neutrons out against surface tension and leads to a large neutron radius. The pressure depends on the derivative of the energy of symmetric matter with respect to density (which is approximately known) and the derivative of the symmetry energy, \\(da_{\\rm sym}/d\\rho\\). Thus parameter sets with a large \\(da_{\\rm sym}/d\\rho\\) yield a large neutron radius in \\({}^{208}\\)Pb. Note that all parameter sets approximately reproduce the observed proton radius and binding energy of \\({}^{208}\\)Pb. Therefore changing \\(\\Lambda_{\\rm s}\\) or \\(\\Lambda_{\\rm v}\\) allows one to change the density dependence of the symmetry energy \\(da_{\\rm sym}/d\\rho\\), while keeping many other properties fixed.

Once the model parameters have been fixed, it is a simple matter to calculate the EOS for uniform matter in beta equilibrium, where the chemical potentials of the neutrons \\(\\mu_{n}\\), protons \\(\\mu_{p}\\), electrons \\(\\mu_{\\rm e}\\), and muons \\(\\mu_{\\mu}\\) satisfy \\[\\mu_{n}-\\mu_{p}=\\mu_{e}=\\mu_{\\mu}\\;. \\tag{3}\\] Note that the high density interior of a neutron star is assumed to be a uniform liquid; possible transitions to a quark- or meson-condensate phase are neglected.

## III Boundary between crust and interior

Neutron stars are expected to have a solid inner crust of nonuniform neutron-rich matter above a liquid mantle. The phase transition from solid to liquid is thought to be weakly first order and can be found by comparing a detailed model of the nonuniform crust to the liquid (see for example [29]). Yet in practice, model calculations yield very small density discontinuities at the transition. Therefore a good approximation is to search for the density where the uniform liquid first becomes unstable to small amplitude density oscillations (see for example [30]). This method would yield the exact transition density for a second order phase transition. The stability analysis of the uniform ground state is based on the relativistic random-phase-approximation (RPA) of Ref. [31] for a system of electrons, protons, and neutrons. The approach is generalized here to accommodate the various nonlinear couplings among the meson fields. We start by considering a plane wave density fluctuation of momentum \\(q=|{\\bf q}|\\) and zero energy \\(q_{0}\\!=\\!0\\).
To describe small amplitude particle-hole (or particle-antiparticle) excitations of the fermions we compute the longitudinal polarization matrix that is defined as follows: \\[\\Pi_{L}=\\left(\\begin{array}{cccc}\\Pi_{00}^{e}&0&0&0\\\\ 0&\\Pi_{s}^{n}+\\Pi_{s}^{p}&\\Pi_{m}^{p}&\\Pi_{m}^{n}\\\\ 0&\\Pi_{m}^{p}&\\Pi_{00}^{p}&0\\\\ 0&\\Pi_{m}^{n}&0&\\Pi_{00}^{n}\\end{array}\\right)\\;. \\tag{4}\\] Here the one-one entry describes electrons, the two-two entry protons plus neutrons interacting via scalar mesons, the three-three entry protons interacting with vector mesons, and the four-four entry neutrons interacting with vector mesons. The individual polarization insertions are given by \\[i\\Pi_{s}(q,q_{0}) = \\int\\frac{d^{4}p}{(2\\pi)^{4}}{\\rm Tr}\\Big{[}G(p)G(p+q)\\Big{]}\\;, \\tag{5a}\\] \\[i\\Pi_{m}(q,q_{0}) = \\int\\frac{d^{4}p}{(2\\pi)^{4}}{\\rm Tr}\\Big{[}G(p)\\gamma_{0}G(p+q)\\Big{]}\\;,\\] (5b) \\[i\\Pi_{00}(q,q_{0}) = \\int\\frac{d^{4}p}{(2\\pi)^{4}}{\\rm Tr}\\Big{[}G(p)\\gamma_{0}G(p+q)\\gamma_{0}\\Big{]}\\;, \\tag{5c}\\] where \\({\\rm Tr}\\) indicates a trace over Dirac indices. Note that the fermion Green's function has been defined as \\[G(p)=(p\\!\\!\\!/+M^{*})\\left(\\frac{1}{{p^{*}}^{2}-{M^{*}}^{2}}+\\frac{i\\pi}{E_{p}^{*}}\\delta(p_{0}^{*}-E_{p}^{*})\\theta(k_{F}-|{\\bf p}|)\\right)\\;. \\tag{6}\\] Here \\(k_{F}\\) is the Fermi momentum, \\(M^{*}=M\\!-\\!g_{s}\\phi_{0}\\) is the nucleon effective mass, \\(E_{p}^{*}=(p^{2}+{M^{*}}^{2})^{1/2}\\), and \\(p_{\\mu}^{*}=p_{\\mu}-(g_{v}V_{\\mu}\\pm g_{\\rho}b_{\\mu}/2)\\) (with the plus sign for protons and the minus sign for neutrons). Note that in the case of the electrons \\(M^{*}=m_{e}\\) and \\(p_{\\mu}^{*}=p_{\\mu}\\). Explicit analytic formulas for \\(\\Pi_{00}\\), \\(\\Pi_{s}\\), and \\(\\Pi_{m}\\) in the static limit (\\(q_{0}\\!=\\!0\\)) are given in the appendix.

\\begin{table} \\begin{tabular}{l c c c c c c} Model & \\(m_{\\rm s}\\) & \\(g_{\\rm s}^{2}\\) & \\(g_{\\rm v}^{2}\\) & \\(\\kappa\\) & \\(\\lambda\\) & \\(\\zeta\\) \\\\ NL3 & 508.194 & 104.3871 & 165.5854 & 3.8599 & \\(-0.0159049\\) & 0 \\\\ S271 & 505 & 81.1071 & 116.7655 & 6.68344 & \\(-0.01580\\) & 0 \\\\ Z271 & 465 & 49.4401 & 70.6689 & 6.16960 & \\(+0.156341\\) & 0.06 \\\\ \\end{tabular} \\end{table} Table 1: Model parameters used in the calculations. The parameter \\(\\kappa\\) and the scalar mass \\(m_{\\rm s}\\) are given in MeV. The nucleon, rho, and omega masses are kept fixed at \\(M\\!=\\!939\\), \\(m_{\\rho}\\!=\\!763\\), and \\(m_{\\omega}\\!=\\!783\\) MeV, respectively (except in the NL3 model, where \\(m_{\\omega}\\!=\\!782.5\\) MeV).

The lowest order meson propagator \\(D_{L}^{0}\\) is computed in Ref. [31] in the absence of nonlinear meson couplings. It is given by \\[D_{L}^{0}=\\left(\\begin{array}{cccc}d_{g}&0&-d_{g}&0\\\\ 0&-d_{\\rm s}^{0}&0&0\\\\ -d_{g}&0&d_{g}+d_{\\rm v}^{0}+d_{\\rho}^{0}&d_{\\rm v}^{0}-d_{\\rho}^{0}\\\\ 0&0&d_{\\rm v}^{0}-d_{\\rho}^{0}&d_{\\rm v}^{0}+d_{\\rho}^{0}\\end{array}\\right)\\;. \\tag{7}\\] Expressions for the photon and for the various meson propagators in the limit of no nonlinear meson couplings are given as follows: \\[d_{g} = \\frac{e^{2}}{q^{2}}=\\frac{4\\pi\\alpha}{q^{2}}\\;, \\tag{8a}\\] \\[d_{\\rm s}^{0} = \\frac{g_{\\rm s}^{2}}{q^{2}+m_{\\rm s}^{2}}\\;,\\] (8b) \\[d_{\\rm v}^{0} = \\frac{g_{\\rm v}^{2}}{q^{2}+m_{\\rm v}^{2}}\\;,\\] (8c) \\[d_{\\rho}^{0} = \\frac{g_{\\rho}^{2}/4}{q^{2}+m_{\\rho}^{2}}\\;.
\\tag{8d}\\] The appearance of a minus sign in the one-three element of \\(D_{L}^{0}\\) relative to the one-one element is because electrons and protons have opposite electric charges.

The addition of nonlinear couplings in the Lagrangian leads to a modification of the meson masses. Effective meson masses are defined in terms of the quadratic fluctuations of the meson fields around their static, mean-field values (the linear fluctuations vanish by virtue of the mean-field equations). That is, \\[m_{\\rm s}^{*2}=-\\frac{\\partial^{2}{\\cal L}}{\\partial\\phi_{0}^{2}}\\;,\\quad m_{\\rm v}^{*2}=+\\frac{\\partial^{2}{\\cal L}}{\\partial V_{0}^{2}}\\;,\\quad m_{\\rho}^{*2}=+\\frac{\\partial^{2}{\\cal L}}{\\partial b_{0}^{2}}\\;. \\tag{9}\\] This yields the following expressions for the effective meson masses in terms of the static meson fields and the coupling constants defined in the interacting Lagrangian of Eq. (2): \\[m_{\\rm s}^{*2} = m_{\\rm s}^{2}+g_{\\rm s}^{2}\\left(\\kappa\\Phi_{0}+\\frac{\\lambda}{2}\\Phi_{0}^{2}-2\\Lambda_{\\rm s}B_{0}^{2}\\right)\\;, \\tag{10a}\\] \\[m_{\\rm v}^{*2} = m_{\\rm v}^{2}+g_{\\rm v}^{2}\\left(\\frac{\\zeta}{2}W_{0}^{2}+2\\Lambda_{\\rm v}B_{0}^{2}\\right)\\;,\\] (10b) \\[m_{\\rho}^{*2} = m_{\\rho}^{2}+g_{\\rho}^{2}\\left(2\\Lambda_{\\rm s}\\Phi_{0}^{2}+2\\Lambda_{\\rm v}W_{0}^{2}\\right)\\;. \\tag{10c}\\] Note that the following definitions have been introduced: \\(\\Phi_{0}\\!\\equiv\\!g_{\\rm s}\\phi_{0}\\), \\(W_{0}\\!\\equiv\\!g_{\\rm v}V_{0}\\), and \\(B_{0}\\!\\equiv\\!g_{\\rho}b_{0}\\). Further, the new couplings between isoscalar and isovector mesons (\\(\\Lambda_{\\rm s}\\) and \\(\\Lambda_{\\rm v}\\)) lead to additional off diagonal terms in the meson propagator. These arise because the quadratic fluctuations around the static solutions generate terms of the form \\[\\frac{\\partial^{2}{\\cal L}}{\\partial\\phi_{0}\\partial b_{0}}\\neq 0\\quad{\\rm and}\\quad\\frac{\\partial^{2}{\\cal L}}{\\partial V_{0}\\partial b_{0}}\\neq 0\\;. \\tag{11}\\] For simplicity we only consider here the following two cases: _i)_ (\\(\\Lambda_{\\rm s}\\neq 0\\) and \\(\\Lambda_{\\rm v}\\!=\\!0\\)) or _ii)_ (\\(\\Lambda_{\\rm s}\\!=\\!0\\) and \\(\\Lambda_{\\rm v}\\neq 0\\)), and neglect the (slightly) more complicated case in which both coupling constants are different from zero. For the first case of (\\(\\Lambda_{\\rm s}\\neq 0\\) and \\(\\Lambda_{\\rm v}\\!=\\!0\\)) the new components of the longitudinal meson propagator become \\[d_{\\rm v} = \\frac{g_{\\rm v}^{2}}{q^{2}+m_{\\rm v}^{*2}}\\;,\\] (12a) \\[d_{\\rm s} = \\frac{g_{\\rm s}^{2}(q^{2}+m_{\\rho}^{*2})}{(q^{2}+m_{\\rm s}^{*2})(q^{2}+m_{\\rho}^{*2})+(4g_{\\rm s}g_{\\rho}\\Lambda_{\\rm s}\\Phi_{0}B_{0})^{2}}\\;,\\] (12b) with analogous expressions for the rho-meson propagator and for the mixing between the scalar and rho channels, and similarly for the second case. The uniform system first becomes unstable against small-amplitude density fluctuations of momentum \\(q\\) when the longitudinal dielectric function develops a zero, \\[\\varepsilon_{L}(q)\\equiv\\det\\left[1-D_{L}(q)\\,\\Pi_{L}(q)\\right]=0\\;. \\tag{16}\\]

We estimate the transition density (\\(\\rho_{c}\\)) between the inner crust and the liquid interior as the largest density for which Eq. (16) has a solution. Our results for \\(\\rho_{c}\\) are listed in Tables II-V and also shown in Fig. 1. We find a strong correlation between the neutron skin of \\({}^{208}\\)Pb and the transition density \\(\\rho_{c}\\) (see Fig. 1).

At the lower densities of the inner crust the system is nonuniform and may have a very complex structure that may include spherical, cylindrical, and plate-like nuclei, bubbles, rods, plates, _etc._ [32, 33]. At present we do not have microscopic calculations of these structures in our models. Therefore we adopt a simple interpolation formula to estimate the equation of state in the inner crust.
That is, we assume a polytropic form for the EOS in which the pressure is approximately given by [14], \\[P(\\epsilon)=A+B\\epsilon^{4/3}, \\tag{17}\\] where \\(\\epsilon\\) is the mass-energy density. The two constants \\(A\\) and \\(B\\) in Eq. (17) are chosen so that the pressure is continuous at the boundary between the inner crust and the liquid interior (determined from the RPA analysis) and at the boundary between the inner and the outer crusts. For the low density outer crust we assume the EOS of Baym, Pethick, and Sutherland(BPS) [34] up to a baryon density of \\(\\rho_{\\rm outer}\\!=\\!2.57\\times 10^{-4}\\) fm\\({}^{-3}\\) which corresponds to an energy density of \\(\\epsilon_{\\rm outer}\\!=4.30\\times 10^{11}\\) g/cm\\({}^{3}\\) (or 0.24 MeV/fm\\({}^{3}\\)) and a pressure of \\(P_{\\rm outer}\\!=\\!4.87\\times 10^{-4}\\) MeV/fm\\({}^{3}\\). Thus the two constants \\(A\\) and \\(B\\) of Eq. (17) are adjusted to reproduce \\(P_{\\rm outer}\\) at \\(\\epsilon_{\\rm outer}\\) and the pressure of the uniform liquid, calculated within the relativistic mean-field (RMF) approach, at \\(\\epsilon_{c}\\) which is the energy density corresponding to \\(\\rho_{c}\\). That is, \\[P(\\epsilon)=\\left\\{\\begin{array}{ll}P_{\\rm BPS}(\\epsilon)\\;,&\\quad{\\rm for} \\ \\epsilon_{\\rm min}\\leq\\epsilon\\leq\\epsilon_{\\rm outer}\\;;\\\\ A+B\\epsilon^{4/3}\\;,&\\quad{\\rm for}\\ \\epsilon_{\\rm outer}<\\epsilon\\leq \\epsilon_{c}\\;;\\\\ P_{\\rm RMF}(\\epsilon)\\;,&\\quad{\\rm for}\\ \\epsilon_{c}<\\epsilon\\;.\\end{array}\\right. \\tag{18}\\] Note that \\(\\epsilon_{\\rm min}\\!=\\!5.86\\times 10^{-9}\\) MeV/fm\\({}^{3}\\) is the minimum value of the energy density included in the equation of state. This value corresponds to a minimum pressure of \\(P_{\\rm min}\\!=\\!6.08\\times 10^{-15}\\) MeV/fm\\({}^{3}\\), which is the value at which we stop integrating the Tolman-Oppenheimer-Volkoff equations. That is, the radius \\(R\\) of a neutron star (see Sec. IV) is defined by the expression \\(P(R)\\!=\\!P_{\\rm min}\\). For the relativistic mean field interaction (TM1) of Ref. [35], the relatively simple procedure presented here is a good approximation to the more complicated explicit calculation of the EOS in the inner crust [36]. ## IV Results Figures 2-5 and Tables II-V show the radii of neutron stars of mass \\(1/3,1/2,3/4\\) and \\(1.4\\ M_{\\odot}\\) as a function of the neutron skin (\\(R_{n}\\)-\\(R_{p}\\)) of \\({}^{208}\\)Pb. One might expect a strong correlation between the radius of a neutron star and the neutron radius of \\({}^{208}\\)Pb, as the same pressure of neutron rich matter that pushes neutrons out against surface tension in \\({}^{208}\\)Pb pushes neutrons out against gravity in a neutron star [25]. However, the central density of a \\(1.4M_{\\odot}\\) neutron star is a few times larger than normal nuclear-matter saturation density \\(\\rho_{0}\\). Thus, \\(R(1.4M_{\\odot})\\) depends on the EOS at low and high densities while \\(R_{n}\\) only depends on the EOS for \\(\\rho\\!\\leq\\!\\rho_{0}\\). Softening the EOS (_i.e.,_ decreasing the pressure) at high densities will decrease \\(R(1.4M_{\\odot})\\) without changing \\(R_{n}\\). Hence, while Fig. 5 shows a definite correlation--\\(R(1.4M_{\\odot})\\) grows with increasing \\(R_{n}\\)-\\(R_{p}\\)--there is a strong model dependence. 
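Returning briefly to the crust construction of Eqs. (17) and (18): the two continuity conditions determine \\(A\\) and \\(B\\) uniquely. The short sketch below makes this explicit. The BPS boundary values are the ones quoted in the text, while the transition-point values \\(\\epsilon_{c}\\) and \\(P_{\\rm RMF}(\\epsilon_{c})\\) are illustrative placeholders that would come from a specific RMF parameter set.

```python
import numpy as np

# Outer/inner crust boundary (BPS values quoted in the text), MeV/fm^3
eps_outer = 0.24
P_outer = 4.87e-4

# Crust/liquid-interior boundary; eps_c follows from rho_c and
# P_c = P_RMF(eps_c) from the mean-field EOS.  Both numbers below
# are illustrative placeholders, not values from the paper.
eps_c = 60.0
P_c = 0.35

# Continuity of P(eps) = A + B*eps^(4/3) at both boundaries (Eq. 17):
B = (P_c - P_outer) / (eps_c**(4.0 / 3.0) - eps_outer**(4.0 / 3.0))
A = P_outer - B * eps_outer**(4.0 / 3.0)

def P_crust(eps):
    """Polytropic inner-crust pressure of Eq. (17), MeV/fm^3."""
    return A + B * eps**(4.0 / 3.0)

# Sanity check: the polytrope reproduces both matching pressures.
print(P_crust(eps_outer), P_crust(eps_c))
```

With \\(A\\) and \\(B\\) fixed this way, the piecewise pressure of Eq. (18) is continuous across the whole star.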
In contrast, the central density of a \\(\\frac{1}{2}M_{\\odot}\\) star is of the order of \\(\\rho_{0}\\), so \\(R(\\frac{1}{2}M_{\\odot})\\) and \\(R_{n}\\) depend on the EOS over a comparable density range. As a result, we find a strong correlation and weak model dependence in Fig. 3. For example, if \\(R_{n}\\)-\\(R_{p}\\) in \\({}^{208}\\)Pb is relatively large, _e.g,_\\(R_{n}\\!-\\!R_{p}\\!\\approx\\!0.25\\) fm, then \\(R(\\frac{1}{2}M_{\\odot})\\!\\approx\\!16\\) km. Alternatively, if \\(R_{n}\\!-\\!R_{p}\\!\\approx\\!0.15\\) fm, then \\(R(\\frac{1}{2}M_{\\odot})\\lesssim 13\\) km. This is an important result. It suggest that even if observations of low mass neutron stars are not feasible, one could still infer their radii from a single nuclear measurement. Note that the results for a \\(\\frac{3}{4}M_{\\odot}\\) neutron star (Fig. 4) follow a similar trend. We conclude this section with a comment on \\(\\frac{1}{3}M_{\\odot}\\) neutron stars. Parameter sets that generate very large values for \\(R_{n}\\!-\\!R_{p}\\) have large pressures near \\(\\rho_{0}\\). This implies that the energy of neutron rich matter rises rapidly with density. In turn, this leads (because all parameter sets are constrained to have the same symmetry energy at \\(\\rho\\!=\\!0.1\\) fm\\({}^{-3}\\)) to lower energies and _lower pressures_ at very low density as compared with parameter sets with smaller values for \\(R_{n}\\!-\\!R_{p}\\). This low-density region is important for low mass neutron stars and could explain why \\(R(\\frac{1}{3}M_{\\odot})\\) in Fig. 2 actually decreases with increasing neutron skin for \\(R_{n}\\!-\\!R_{p}\\gtrsim 0.23\\) fm. Figure 1: The transition density \\(\\rho_{c}\\) at which uniform matter becomes unstable to density oscillations as a function of the neutron skin in \\({}^{208}\\)Pb. The solid line is for the Z271 parameter set with \\(\\Lambda_{\\rm v}\ eq 0\\) while the dashed curve uses Z271 with \\(\\Lambda_{\\rm s}\ eq 0\\). The dotted curve is for the S271 set and the dot-dashed curve for NL3, both of these with \\(\\Lambda_{\\rm v}\ eq 0\\). Figure 3: Radius of a neutron star of mass 1/2 \\(M_{\\odot}\\) as a function of the neutron skin in \\({}^{208}\\)Pb. The solid line is for the Z271 parameter set with \\(\\Lambda_{\\rm v}\ eq 0\\) while the dashed curve uses Z271 with \\(\\Lambda_{\\rm s}\ eq 0\\). The dotted curve is for the S271 set and the dot-dashed curve for NL3, both of these with \\(\\Lambda_{\\rm v}\ eq 0\\). Figure 4: Radius of a neutron star of mass 3/4 \\(M_{\\odot}\\) as a function of the neutron skin in \\({}^{208}\\)Pb. The solid line is for the Z271 parameter set with \\(\\Lambda_{\\rm v}\ eq 0\\) while the dashed curve uses Z271 with \\(\\Lambda_{\\rm s}\ eq 0\\). The dotted curve is for the S271 set and the dot-dashed curve for NL3 both of these with \\(\\Lambda_{\\rm v}\ eq 0\\). Figure 5: Radius of a neutron star of mass 1.4 \\(M_{\\odot}\\) as a function of the neutron skin in \\({}^{208}\\)Pb. The solid line is for the Z271 parameter set with \\(\\Lambda_{\\rm v}\ eq 0\\) while the dashed curve uses Z271 with \\(\\Lambda_{\\rm s}\ eq 0\\). The dotted curve is for the S271 set and the dot-dashed curve for NL3, both of these with \\(\\Lambda_{\\rm v}\ eq 0\\). Figure 2: Radius of a neutron star of mass 1/3 \\(M_{\\odot}\\) as a function of the neutron skin in \\({}^{208}\\)Pb. The solid line is for the Z271 parameter set with \\(\\Lambda_{\\rm v}\ eq 0\\) while the dashed curve uses Z271 with \\(\\Lambda_{\\rm s}\ eq 0\\). 
The dotted curve is for the S271 set and the dot-dashed curve for NL3, both of these with \\(\\Lambda_{\\rm v}\ eq 0\\). \\begin{table} \\begin{tabular}{l c c c c c c c} \\(\\Lambda_{\\rm v}\\) & \\(g_{\\rho}^{2}\\) & \\(R_{n}-R_{p}(^{208}\\)Pb) & R(\\(\\frac{1}{3}M_{\\odot}\\)) & R(\\(\\frac{1}{2}M_{\\odot}\\)) & R(\\(\\frac{3}{4}M_{\\odot}\\)) & R(\\(1.4M_{\\odot}\\)) & \\(\\rho_{c}\\) \\\\ 0.030 & 127.0 & 0.1952 & 16.766 & 14.789 & 14.142 & 14.175 & 0.0854 \\\\ 0.025 & 115.6 & 0.209 & 18.37 & 15.59 & 14.60 & 14.38 & 0.0808 \\\\ 0.020 & 106.0 & 0.223 & 19.49 & 16.15 & 14.93 & 14.52 & 0.0746 \\\\ 0.015 & 97.9 & 0.237 & 19.73 & 16.39 & 15.10 & 14.61 & 0.0675 \\\\ 0.010 & 90.9 & 0.251 & 19.31 & 16.40 & 15.20 & 14.68 & 0.0610 \\\\ 0.005 & 84.9 & 0.265 & 18.70 & 16.35 & 15.31 & 14.81 & 0.0558 \\\\ 0.000 & 79.6 & 0.280 & 18.10 & 16.27 & 15.47 & 15.05 & 0.0519 \\\\ \\end{tabular} \\end{table} Table 2: Results for the NL3 parameter set with \\(\\Lambda_{\\rm s}\\!=\\!0\\). The \\(NN\\rho\\) coupling constant \\(g_{\\rho}^{2}\\) and the neutron minus proton root mean square radius for \\({}^{208}\\)Pb (in fm) are given along with the radii of 1/3, 1/2, 3/4 and 1.4 \\(M_{\\odot}\\) neutron stars in km. Finally, the transition density \\(\\rho_{c}\\) between the inner crust and liquid interior is given in fm\\({}^{-3}\\). \\begin{table} \\begin{tabular}{l c c c c c c c} \\(\\Lambda_{\\rm v}\\) & \\(g_{\\rho}^{2}\\) & \\(R_{n}-R_{p}(^{208}\\)Pb) & R(\\(\\frac{1}{3}M_{\\odot}\\)) & R(\\(\\frac{1}{2}M_{\\odot}\\)) & R(\\(\\frac{3}{4}M_{\\odot}\\)) & R(\\(1.4M_{\\odot}\\)) & \\(\\rho_{c}\\) \\\\ 0.14 & 139.3368 & 0.1525 & 13.799 & 12.709 & 12.293 & 11.616 & 0.0974 \\\\ 0.12 & 129.2795 & 0.1650 & 15.012 & 13.379 & 12.688 & 11.748 & 0.0936 \\\\ 0.10 & 119.5245 & 0.1771 & 16.219 & 14.039 & 13.080 & 11.880 & 0.0890 \\\\ 0.08 & 112.9710 & 0.1900 & 17.405 & 14.696 & 13.481 & 12.016 & 0.0845 \\\\ 0.06 & 106.2682 & 0.2026 & 18.31 & 15.28 & 13.89 & 12.18 & 0.0796 \\\\ 0.05 & 103.2065 & 0.2090 & 18.61 & 15.53 & 14.09 & 12.29 & 0.0771 \\\\ 0.04 & 100.3162 & 0.2154 & 18.82 & 15.76 & 14.30 & 12.43 & 0.0747 \\\\ 0.03 & 97.5834 & 0.2218 & 18.95 & 15.96 & 14.53 & 12.62 & 0.0725 \\\\ 0.02 & 94.9956 & 0.2282 & 18.98 & 16.12 & 14.75 & 12.89 & 0.0703 \\\\ 0.01 & 92.5415 & 0.2347 & 18.94 & 16.25 & 14.98 & 13.27 & 0.0683 \\\\ 0.00 & 90.2110 & 0.2413 & 18.88 & 16.36 & 15.20 & 13.77 & 0.0665 \\\\ \\end{tabular} \\end{table} Table 4: Results for the Z271 parameter set with \\(\\Lambda_{\\rm s}\\!=\\!0\\). The \\(NN\\rho\\) coupling constant \\(g_{\\rho}^{2}\\) and the neutron minus proton root mean square radius for \\({}^{208}\\)Pb (in fm) are given along with the radii of 1/3, 1/2, 3/4 and 1.4 \\(M_{\\odot}\\) neutron stars in km. Finally, the transition density \\(\\rho_{c}\\) between the inner crust and liquid interior is given in fm\\({}^{-3}\\). 
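The strength of the correlation contained in these tables can be made concrete with a few lines of code. The sketch below performs a straight-line fit of \\(R(\\frac{1}{2}M_{\\odot})\\) against the \\({}^{208}\\)Pb neutron skin using the Z271 (\\(\\Lambda_{\\rm s}\\!=\\!0\\)) rows of Table 4; the fitting script and the example skin value of 0.20 fm are ours, purely for illustration.

```python
import numpy as np

# Z271 (Lambda_s = 0) rows of Table 4: neutron skin of 208Pb (fm)
# and the corresponding radius of a 0.5 M_sun star (km).
skin = np.array([0.1525, 0.1650, 0.1771, 0.1900, 0.2026, 0.2090,
                 0.2154, 0.2218, 0.2282, 0.2347, 0.2413])
R_half = np.array([12.709, 13.379, 14.039, 14.696, 15.28, 15.53,
                   15.76, 15.96, 16.12, 16.25, 16.36])

# Straight-line fit R = a*skin + b, and the radius implied by a
# hypothetical measured skin of 0.20 fm.
a, b = np.polyfit(skin, R_half, 1)
print(a, b, a * 0.20 + b)
```

Within this single parameter set the relation is nearly linear; the model dependence enters when the same exercise is repeated across the NL3, S271, and Z271 tables.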
\\begin{table} \\begin{tabular}{l c c c c c c} \\(\\Lambda_{\\rm v}\\) & \\(g_{\\rho}^{2}\\) & \\(R_{n}-R_{p}(^{208}\\)Pb) & R(\\(\\frac{1}{3}M_{\\odot}\\)) & R(\\(\\frac{1}{2}M_{\\odot}\\)) & R(\\(\\frac{3}{4}M_{\\odot}\\)) & R(\\(1.4M_{\\odot}\\)) & \\(\\rho_{c}\\) \\\\ 0.05 & 127.8389 & 0.1736 & 15.88 & 14.06 & 13.43 & 13.25 & 0.0923 \\\\ 0.04 & 116.2950 & 0.1895 & 17.75 & 15.00 & 13.96 & 13.47 & 0.0866 \\\\ 0.03 & 106.6635 & 0.2054 & 19.25 & 15.77 & 14.42 & 13.65 & 0.0794 \\\\ 0.02 & 98.5051 & 0.2215 & 19.80 & 16.24 & 14.76 & 13.82 & 0.0717 \\\\ 0.01 & 91.5061 & 0.2378 & 19.54 & 16.46 & 15.08 & 14.07 & 0.0648 \\\\ 0.00 & 85.4357 & 0.2543 & 18.95 & 16.53 & 15.43 & 14.56 & 0.0594 \\\\ \\end{tabular} \\end{table} Table 3: Results for the S271 parameter set with \\(\\Lambda_{\\rm s}\\!=\\!0\\). The \\(NN\\rho\\) coupling constant \\(g_{\\rho}^{2}\\) and the neutron minus proton root mean square radius for \\({}^{208}\\)Pb (in fm) are given along with the radii of 1/3, 1/2, 3/4 and 1.4 \\(M_{\\odot}\\) neutron stars in km. Finally, the transition density \\(\\rho_{c}\\) between the inner crust and liquid interior is given in fm\\({}^{-3}\\). ## V Discussion and Conclusions A number of relativistic effective field theory parameter sets have been used to study correlations between the radii of neutron stars and the neutron radius \\(R_{n}\\) of \\({}^{208}\\)Pb. An RPA stability analysis was employed to find the transition density between the nonuniform inner crust and the uniform liquid interior. For the nonuniform outer crust we invoked the EOS of Baym, Pethick, and Sutherland [34]. Then, a simple polytropic formula for the EOS, approximately valid for most of the crust [14], was used to interpolate between the outer crust and the liquid interior. This simple, yet fairly accurate, procedure allows us to study the EOS for a variety of parameter sets that predict a wide range of values for the neutron radius of \\({}^{208}\\)Pb. For a \"normal\" \\(1.4M_{\\odot}\\) neutron star we find central densities of several times normal nuclear matter saturation density (\\(\\rho_{0}\\!\\approx\\!0.15\\) fm\\({}^{3}\\)). Because the neutron radius of \\({}^{208}\\)Pb does not constrain the high-density component of the EOS, we find a strong model dependence between the radius of a \\(1.4M_{\\odot}\\) neutron star and \\(R_{n}\\). In contrast, the central density of a low mass neutron star is close to \\(\\rho_{0}\\). Therefore properties of the low mass star are sensitive to the EOS over the same density range as \\(R_{n}\\). As a result, we find a strong correlation and a weak model dependence between the radius of a \\(0.5M_{\\odot}\\) neutron star and \\(R_{n}\\) (see Fig. 3). Thus, it should be possible to infer some properties of low mass neutron stars from detailed measurements in atomic nuclei. Understanding the density dependence of the equation of state is particularly interesting. A softening of the EOS at high density (where the pressure rises slower than expected) could signal the transition to an exotic phase, such as pion/kaon condensates, strange quark matter, and/or a color superconductor. Yet obtaining definitive results on the density dependence of the EOS may require measuring the radius of neutron stars for a broad range of masses. This may be difficult as most compilations to date find neutron star masses in the very narrow range of \\(1.25-1.44\\)\\(M_{\\odot}\\)[7]. 
Further, important ambiguities will remain if the radius of a \\(1.4M_{\\odot}\\) neutron star proves to be moderately small. Would a small radius indicate that the EOS is relatively soft at all densities with no phase transition, or that the EOS is stiff at low density and softens abruptly at high density because of a phase transition? Therefore, it is important to also make measurements that are exclusively sensitive to the low density EOS. One obvious possibility is the radius of a low mass (\\(\\approx 0.5M_{\\odot}\\)) neutron star, as its central density, close to \\(\\rho_{0}\\), is much smaller than that of a \\(1.4M_{\\odot}\\) star. However, such low mass stars may be very rare because they are hard to form. Notably, the neutron radius of a heavy nucleus, such as \\({}^{208}\\)Pb, contains similar information. Indeed, we find a strong correlation and a weak model dependence between the neutron radius of \\({}^{208}\\)Pb and the radius of a \\(0.5M_{\\odot}\\) neutron star. This allows one to use nuclear information to infer the radius of a low mass neutron star. Hence, comparing this inferred radius to the measured radius of a \\(\\simeq\\!1.4M_{\\odot}\\) neutron star should provide the most complete information to date on the density dependence of the equation of state. This work was supported in part by DOE grants DE-FG02-87ER40365 and DE-FG05-92ER40750. \\begin{table} \\begin{tabular}{l c c c c c c c} \\hline \\hline \\(\\Lambda_{\\rm s}\\) & \\(g_{\\rho}^{2}\\) & \\(R_{n}-R_{p}(^{208}\\)Pb) & R(\\(\\frac{1}{3}M_{\\odot}\\)) & R(\\(\\frac{1}{2}M_{\\odot}\\)) & R(\\(\\frac{3}{4}M_{\\odot}\\)) & R(\\(1.4M_{\\odot}\\)) & \\(\\rho_{c}\\) \\\\ 0.06 & 146.6988 & 0.1640 & 13.34 & 12.71 & 12.52 & 11.98 & 0.0844 \\\\ 0.05 & 132.8358 & 0.1775 & 14.63 & 13.48 & 13.01 & 12.19 & 0.0830 \\\\ 0.04 & 121.3666 & 0.1907 & 15.99 & 14.27 & 13.51 & 12.42 & 0.0807 \\\\ 0.03 & 111.7205 & 0.2036 & 17.28 & 15.02 & 14.00 & 12.68 & 0.0777 \\\\ 0.02 & 103.4949 & 0.2163 & 18.27 & 15.66 & 14.46 & 12.97 & 0.0741 \\\\ 0.01 & 96.3974 & 0.2288 & 18.79 & 16.10 & 14.86 & 13.32 & 0.0702 \\\\ 0.00 & 90.2110 & 0.2413 & 18.88 & 16.36 & 15.20 & 13.77 & 0.0665 \\\\ \\hline \\hline \\end{tabular} \\end{table} Table 5: Results for the Z271 parameter set with \\(\\Lambda_{\\rm v}\\!=\\!0\\). The \\(NN\\rho\\) coupling constant \\(g_{\\rho}^{2}\\) and the neutron minus proton root mean square radius for \\({}^{208}\\)Pb (in fm) are given along with the radii of 1/3, 1/2, 3/4 and 1.4 \\(M_{\\odot}\\) neutron stars in km. Finally, the transition density \\(\\rho_{c}\\) between the inner crust and liquid interior is given in fm\\({}^{-3}\\). ## VI Appendix The polarizations are defined in Eqs. (5a,5b,5c) and describe particle-hole or particle-antiparticle excitations. In the static limit (energy transfer \\(q_{0}\\!=\\!0\\)) the scalar polarization for momentum transfer \\(q\\) is given by \\[\\Pi_{s}(q,0)=\\frac{1}{2\\pi^{2}}\\left\\{k_{F}E_{F}-\\left(3{M^{*}}^{2}+\\frac{q^{2}}{2}\\right)\\ln\\frac{k_{F}+E_{F}}{M^{*}}+\\frac{2E_{F}E^{2}}{q}\\ln\\left|\\frac{2k_{F}-q}{2k_{F}+q}\\right|-\\frac{2E^{3}}{q}\\ln\\left|\\frac{qE_{F}-2k_{F}E}{qE_{F}+2k_{F}E}\\right|\\right\\}, \\tag{19}\\] with Fermi momentum \\(k_{F}\\), nucleon effective mass \\(M^{*}\\), \\(E_{F}=(k_{F}^{2}+{M^{*}}^{2})^{1/2}\\), and \\(E=(q^{2}/4+{M^{*}}^{2})^{1/2}\\).
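As a quick numerical cross-check, Eq. (19) can be transcribed directly into code. The sketch below is a literal transcription; the function name and the sample inputs (a nuclear-matter-like \\(k_{F}\\) and \\(M^{*}\\), in MeV) are our own illustrative choices.

```python
import numpy as np

def Pi_s(q, kF, Mstar):
    """Static scalar polarization, literal transcription of Eq. (19).
    All inputs must share the same energy units (here MeV)."""
    EF = np.sqrt(kF**2 + Mstar**2)
    E = np.sqrt(q**2 / 4.0 + Mstar**2)
    t1 = kF * EF
    t2 = -(3.0 * Mstar**2 + q**2 / 2.0) * np.log((kF + EF) / Mstar)
    t3 = (2.0 * EF * E**2 / q) * np.log(abs((2*kF - q) / (2*kF + q)))
    t4 = -(2.0 * E**3 / q) * np.log(abs((q*EF - 2*kF*E) / (q*EF + 2*kF*E)))
    return (t1 + t2 + t3 + t4) / (2.0 * np.pi**2)

# Illustrative values only: kF ~ 260 MeV and a reduced effective mass.
print(Pi_s(q=100.0, kF=260.0, Mstar=660.0))
```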
Likewise, the longitudinal polarization is given by \\[\\Pi_{00}(q,0)=-\\frac{1}{\\pi^{2}}\\left\\{\\frac{2}{3}k_{F}E_{F}-\\frac{q^{2}}{6} \\ln\\frac{k_{F}+E_{F}}{M^{*}}-\\frac{E_{F}}{3q}({M^{*}}^{2}+k_{F}^{2}-\\frac{3}{ 4}q^{2})\\ln\\left|\\frac{2k_{F}-q}{2k_{F}+q}\\right|+\\frac{E}{3q}({M^{*}}^{2}- \\frac{q^{2}}{2})\\ln\\left|\\frac{qE_{F}-2k_{F}E}{qE_{F}+2k_{F}E}\\right|\\right\\}, \\tag{20}\\] while the mixed scalar-vector polarization becomes \\[\\Pi_{m}(q,0)=\\frac{M^{*}}{2\\pi^{2}}\\left\\{k_{F}-(\\frac{k_{F}^{2}}{q}-\\frac{q} {4})\\ln\\left|\\frac{2k_{F}-q}{2k_{F}+q}\\right|\\right\\}. \\tag{21}\\] ## References * [1] J. M. Lattimer and M. Prakash, ApJ. **550** (2001) 426. * [2] J.A. Pons, J.A. Miralles, M. Prakash, J.M. Lattimer, ApJ. **553**, (2001) 382. * [3] P. Jaikumar and M. Prakash, astro-ph/0105225. M. Prakash, Nucl. Phys. **A698**, (2002) 440. * [4] H. Heiselberg, _Proceedings of the Conference on Compact Stars in the QCD Phase Diagram_, (2001) 3; astro-ph/0201465. * [5] M. Alford, _Proceedings of the Conference on Compact Stars in the QCD Phase Diagram_, (2001) 137; hep-ph/0110150. * [6] L. Lindblom, ApJ. **398**, (1992) 569. * [7] S. E. Thorsett and D. Chakrabarty, ApJ. **512** (1999) 288. * [8] F. M. Walter and J. Lattimer, astro-ph/0204199. * [9] J.J. Drake et al., ApJ in press, astro-ph/0204159. * [10] J. A. Pons, F. M. Walter, J. M. Lattimer, M. Prakash, R. Neuhauser, and P. An, ApJ. **564** (2002) 981. * [11] E. F. Brown, L. Bildsten, and R. E. Rutledge, ApJ. **504** (1998) L95; R.E. Rutledge, L. Bildsten, E.F. Brown, G.G. Pavlov, and V.E. Zavlin, astro-ph/0105405. * [12] D. Sanwal, G. G. Pavlov, V. E. Zavlin, and M. A. Teter, ApJ. **574** (2002) L61. * [13] C. J. Hailey and K. Mori, astro-ph/0207590. * [14] B. Link, R. I. Epstein, and J. M. Lattimer, Phys Rev. Lett. **83** (1999) 3362. * [15] J. A. Orosz and E. Kuulkers, MNRAS **305** (1999) 132. * [16] J.H. van Kerkwijk, J van Paradijs, and E.J. Zuiderwijk, A&A **303** 497. * [17] S.R. Heap and M.F. Corcoran, ApJ. **387** (1992) 340. * [18] D. Stickland, C. Lloyd, and A. Radzuin-Woodham, MNRAS **286** (1997) L21. * [19] G.E. Brown, J.C. Weingartner, and R.A.M.J. Wijers, ApJ. **463** (1996) 297. * [20] B. Alex Brown, Phys. Rev. Lett. **85** (2000) 5296. * [21] P. Haensel, J. L. Zdunik, and F. Douchin, astro-ph/0201434. * [22] Jefferson Laboratory Experiment E-00-003, Spokespersons R. Michaels, P. A. Souder and G. M. Urciuoli. * [23] C. J. Horowitz, S. J. Pollock, P. A. Souder and R. Michaels, Phys. Rev. C **63**, 025501 (2001). * [24] C.J. Horowitz and J. Piekarewicz, Phys. Rev. Lett. **86**, (2001) 5647. * [25] C.J. Horowitz and J. Piekarewicz, Phys. Rev. C **64** (2001) 062802. * [26] C.J. Horowitz and J. Piekarewicz, nucl-th/0207067. * [27] H. Muller and B. D. Serot, Nucl. Phys. **A606**, 508 (1996). * [28] G. A. Lalazissis, J. Konig and P. Ring, Phys. Rev. C **55** (1997) 540. * [29] F. Douchin and P. Haensel, astro-ph/0111092. * [30] F. Douchin and P. Haensel, PLB **485** (2000) 107. * [31] C.J. Horowitz and K. Wehrberger, NP **A531** (1991) 665. * [32] C.P. Lorenz, D.G. Ravenhall, and C.J. Pethick, Phys. Rev. Lett. **70** (1993) 379. * [33] K. Oyamatsu, Nucl. Phys. **A561**, (1993) 431. * [34] G. Baym, C. Pethick, and P. Sutherland, ApJ. **170** (1971) 299. * [35] Y. Sugahara and H. Toki, NP **A579** (1994) 557. * [36] H. Shen, H. Toki, K. Oyamatsu, and K. Sumiyoshi, NP **A637** (1998) 435.
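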
Neutron-star radii provide useful information on the equation of state of neutron rich matter. Particularly interesting is the density dependence of the equation of state (EOS). For example, the softening of the EOS at high density, where the pressure rises slower than anticipated, could signal a transition to an exotic phase. However, extracting the density dependence of the EOS requires measuring the radii of neutron stars for a broad range of masses. A "normal" \\(1.4M_{\\odot}\\) (\\(M_{\\odot}\\)=solar mass) neutron star has a central density of a few times nuclear-matter saturation density (\\(\\rho_{0}\\)). In contrast, low mass (\\(\\simeq 0.5M_{\\odot}\\)) neutron stars have central densities near \\(\\rho_{0}\\), so their radii provide information on the EOS at low density. Unfortunately, low-mass stars may be rare because they are hard to form. Instead, a precision measurement of the neutron radius in an atomic nucleus may contain similar information. Indeed, we find a strong correlation between the neutron radius of \\({}^{208}\\)Pb and the radius of a \\(0.5M_{\\odot}\\) neutron star. Thus, the radius of a \\(0.5M_{\\odot}\\) neutron star can be inferred from a measurement of the neutron radius of \\({}^{208}\\)Pb. Comparing this value to the measured radius of a \\(\\simeq\\)\\(1.4M_{\\odot}\\) neutron star should provide the strongest constraint to date on the density dependence of the equation of state. pacs: 14.70.-k
# Incompressibility of strange matter \\({}^{*}\\). Monika Sinha \\({}^{1,2,3}\\), Manjari Bagchi \\({}^{1,2}\\), Jishnu Dey \\({}^{4,5**}\\) Mira Dey \\({}^{1,5**}\\), Subharthi Ray \\({}^{6}\\) and Siddhartha Bhowmick \\({}^{7}\\) ## 1 Introduction An exciting issue of modern astrophysics is the possible existence of a family of compact stars made entirely of deconfined u,d,s quark matter or \"strange matter\" (SM) and thereby denominated strange stars (SS). They differ from neutron stars, where quarks are confined within neutrons, protons and eventually within other hadrons (hadronic matter stars). The possibleexistence of SS is a direct consequence of the so called strange matter hypothesis[2], according to which the energy per baryon of SM would be less than the lowest energy per baryon found in nuclei, which is about 930 MeV for \\(Fe^{56}\\). Also, the ordinary state of matter, in which quarks are confined within hadrons, is a metastable state. Of course, the hypothesis does not conflict with the existence of atomic nuclei as conglomerates of nucleons, or with the stability of ordinary matter[3, 4, 5]. The best observational evidence for the existence of quark stars come from some compact objects, the X-Ray burst sources SAX J1808.4\\(-\\)3658 (the SAX in short) and 4U 1728\\(-\\)34, the X-ray pulsar Her X-1 and the superburster 4U 1820\\(-\\)30. The first is the most stable pulsating X-ray source known to man as of now. This star is claimed to have a mass \\(M*\\sim 1.4~{}M_{\\odot}\\) and a radius of about 7 kms [6]. Coupled to this claim are the various other evidences for the existence of SS, such as the possible explanation of the two kHz quasi-periodic oscillations in 4U 1728 - 34 [7] and the quark-nova explanation for \\(\\gamma\\) ray bursts [8]. The expected behaviour of SS is directly opposite to that of a neutron star as Fig.(1) shows. The mass of 4U 1728\\(-\\)34 is claimed to be less than 1.1 \\(M_{\\odot}\\) in Li et al. [7], which places it much lower in the M-R plot and thus it could be still gaining mass and is not expected to be as stable as the SAX. So for example, there is a clear answer [9] to the question posed by Franco [10]: why are the pulsations of SAX not attenuated, as they are in 4U 1728\\(-\\)34? From a basic point of view the equation of state for SM should be calculated solving QCD at finite density. As we know, such a fundamental approach is presently not feasible even if one takes recourse to the large colour philosophy of 't Hooft [11]. A way out was found by Witten [12] when he suggested that one can borrow a phenomenological potential from the meson sector and use it for baryonic matter. Therefore, one has to rely on phenomenological models. In this work, we use different equations of state (EOS) of SM proposed by Dey et al [1] using the phenomenological Richardson potential. Other variants are now being proposed, for example the chromo dielectric model calculations of Malheiro et al.[13]. Fig.(2) shows the energy per baryon for the EOS of [1]. One of them (eos1, SS1 of [6]) has a minimum at E/A = 888.8 \\(MeV\\) compared to 930.4 of \\(Fe^{56}\\), i.e., as much as 40 \\(MeV\\) below. The other two have this minimum at 911 \\(MeV\\) and 926 \\(MeV\\), respectively, both less than the normal density of nuclear matter. The pressure at this point is zero and this marks the surface of the star in the implementation of the well known TOV equation. 
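Since the TOV integration is mentioned here only in passing, a minimal sketch may help. The code below integrates the TOV equations outward from a chosen central pressure until the pressure reaches zero (the star's surface); the simple linear bag-model-like EOS \\(P=(\\epsilon-4B)/3\\) and all numerical values are illustrative stand-ins for the Richardson-potential EOS of Ref. [1].

```python
import numpy as np

MEV_FM3_TO_KM2 = 1.3234e-6          # 1 MeV/fm^3 in geometrized units (km^-2)
B = 60.0 * MEV_FM3_TO_KM2           # bag-constant-like parameter (assumed)

def eps_of_P(P):
    # Toy linear EOS P = (eps - 4B)/3, a stand-in for the actual SM EOS.
    return 3.0 * P + 4.0 * B

def tov(P_c, dr=0.01):
    """Euler-integrate dm/dr and the TOV dP/dr until P drops to zero."""
    r, m, P = dr, 0.0, P_c
    while P > 0.0:
        eps = eps_of_P(P)
        dPdr = -(eps + P) * (m + 4.0 * np.pi * r**3 * P) / (r * (r - 2.0 * m))
        m += 4.0 * np.pi * r**2 * eps * dr
        P += dPdr * dr
        r += dr
    return r, m / 1.4766            # radius in km, mass in solar masses

# Central pressure of 100 MeV/fm^3 (illustrative):
print(tov(100.0 * MEV_FM3_TO_KM2))
```

Varying \\(P_{c}\\) and collecting the resulting \\((R,M)\\) pairs traces out a mass-radius curve of the self-bound type shown in Fig. (1).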
The curves in Fig. (2) clearly show that the system can fluctuate about this minimum, so that the zero pressure point can vary. ## 2 Incompressibility: its implication for Witten's Cosmic Separation of Phase scenario. In nuclear physics incompressibility is defined as [14] \\[K~{}=~{}9\\frac{\\partial}{\\partial n}\\left(n^{2}\\frac{\\partial\\varepsilon}{\\partial n}\\right)\\;, \\tag{1}\\] where \\(\\varepsilon~{}=~{}E/A\\) is the energy per particle of the nuclear matter and \\(n\\) is the number density. The relation of \\(K\\) with the bulk modulus \\(B\\) is \\[K=\\frac{9B}{n}~{}~{}. \\tag{2}\\] \\(K\\) has been calculated in many models. In particular, Bhaduri et al [14] used the non-relativistic constituent quark model, as well as the bag model, to calculate \\(K_{A}\\) as a function of \\(n\\) for the nucleon and the delta. They found that the nucleon has an incompressibility \\(K_{N}\\) of about 1200 MeV, about six times that of nuclear matter. They also suggested that at high density \\(K_{A}\\) matches onto the quark gas incompressibility. The velocity of sound in units of the light velocity c is given by \\[v=\\sqrt{K/9\\varepsilon}. \\tag{3}\\] The simple models of quark matter considered in [1] use a Hamiltonian with an interquark potential with two parts, a scalar component (the density dependent mass term) and a vector potential originating from gluon exchanges. In the absence of an exact evaluation from QCD, this vector part is borrowed from meson phenomenology [15]. In common with the phenomenological bag model, it has built-in asymptotic freedom and (linear) quark confinement. In order to restore the approximate chiral symmetry of QCD at high densities, an _ansatz_ is used for the constituent masses, viz., \\[M(n)=M_{Q}~{}sech\\left(\\nu\\frac{n}{n_{0}}\\right), \\tag{4}\\] where \\(n_{0}\\) is the normal nuclear matter density and \\(\\nu\\) is a parameter. Several EOS's may be obtained for different choices of the parameters; some of them are given in Table (1), where \\(\\alpha_{s}\\) is the perturbative quark-gluon coupling and \\(\\Lambda\\) is the scale parameter appearing in the vector potential. Changing other parameters one can obtain more EOS's. Table (1) also shows the masses (\\(M_{G}\\)), radii (R) and the number densities (\\(n_{s}\\)) at the star surface of the maximum mass strange stars obtained from the corresponding EOS's. The surface of a star occurs at the minimum of \\(\\varepsilon\\), where the pressure is zero. The general behaviour of the curves is relatively insensitive to the parameter \\(\\nu\\) in \\(M(n)\\) as well as to the gluon mass, as evident from figure (3). Figure 1: Mass and radius of stable stars with the strange star EOS (left curve) and neutron star EOS (right curve), which are solutions of the Tolman-Oppenheimer-Volkoff (TOV) equations of general relativity. Note that while the self sustained strange star systems can have small masses and radii, the neutron stars have larger radii for smaller masses since they are bound by gravitation alone. Figure 2: Strange matter EOS employed by D98 show respective stable points. The solid line is for EOS1, the dotted line for EOS2 and the dashed line for EOS3. All have the minimum at energy per baryon less than that of \\(Fe^{56}\\).
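Equations (1) and (3) are straightforward to evaluate numerically on any tabulated \\(\\varepsilon(n)\\). The sketch below does this by finite differences for a toy energy curve built from a Fermi-gas-like kinetic term plus the running mass of Eq. (4); the toy \\(\\varepsilon(n)\\) is our own stand-in, not the Richardson-potential EOS itself.

```python
import numpy as np

n0 = 0.16                                  # normal nuclear density, fm^-3
n = np.linspace(0.5 * n0, 10.0 * n0, 400)  # density grid

# Toy energy per particle (MeV): Fermi-gas-like kinetic term plus the
# density-dependent constituent mass of Eq. (4).  Illustrative only.
M_Q, nu = 310.0, 0.333
M = M_Q / np.cosh(nu * n / n0)             # sech(x) = 1/cosh(x)
eps = 300.0 * (n / n0) ** (2.0 / 3.0) + M

# Eq. (1): K = 9 d/dn [ n^2 d(eps)/dn ], by centered finite differences.
K = 9.0 * np.gradient(n**2 * np.gradient(eps, n), n)

# Eq. (3): velocity of sound in units of c (abs() guards the low-density
# region where the toy eps(n) can give a negative K).
v = np.sqrt(np.abs(K) / (9.0 * eps))

print(K[-1], v[-1])                        # values at the highest density
```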
\\begin{table} \\begin{tabular}{l c c c c c c c} EOS & \\(\\nu\\) & \\(\\alpha_{s}\\) & \\(\\Lambda\\) & \\(M_{Q}\\) & \\(M_{G}/M_{\\odot}\\) & R & \\(n_{s}/n_{0}\\) \\\\ & & & & MeV & & (km) & \\\\ \\hline EOS1 & 0.333 & 0.20 & 100 & 310 & 1.437 & 7.055 & 4.586 \\\\ \\hline EOS2 & 0.333 & 0.25 & 100 & 310 & 1.410 & 6.95 & 4.595 \\\\ \\hline EOS3 & 0.286 & 0.20 & 100 & 310 & 1.325 & 6.5187 & 5.048 \\\\ \\end{tabular} \\end{table} Table 1: Parameters for the three EOS. In figure (4), \\(K\\) with three values of \\(M_{Q}\\) implying different running masses, \\(M(n)\\), is plotted as a function of the density expressed by its ratio to \\(n_{0}\\). Given for comparison is the incompressibility \\(K_{q}\\) of a perturbative massless three flavour quark gas consisting of zero mass current quarks [14], using the energy expression given in [16] to order \\(\\alpha_{s}^{2}\\). It can be seen that as \\(M_{Q}\\) decreases, the nature of the relation approaches the perturbative case of [14]. At high density our incompressibility and that due to Baym [16] match, showing the onset of chiral symmetry restoration. In EOS1 for uds matter, the minimum of \\(\\varepsilon\\) occurs at about 4.586 \\(n_{0}\\); nucleation may occur at a density less than this value of \\(n\\). This corresponds to a radius of about \\(0.67~{}fm\\) for a baryon. For EOS1 we find \\(K\\) to be \\(1.293~{}GeV\\) per quark at the star surface. It is encouraging to see that this roughly matches the compressibility \\(K_{N}\\), so that no 'phase expands explosively'. In the Cosmic Separation of Phase scenario, Witten [12] had indicated at the outset that he had assumed the process of phase transition to occur smoothly, without important departure from equilibrium. If the two phases were compressed at significantly different rates, inhomogeneities would be set up. But near the star surface, at \\(n~{}\\sim~{}4\\) to \\(5~{}n_{0}\\), the matter is more incompressible, showing a stiffer surface. This is in keeping with the stability of strange stars observed analytically with the Vaidya-Tikekar metric by Sharma, Mukherjee, Dey and Dey [17]. The velocity of sound \\(v_{s}\\) peaks somewhere around the middle of the star and then falls off. We show it for the three different EOS in fig. (5), with parameters given in Table (1). Figure 3: Incompressibility as a function of density ratio for the different EOS's given in Table (1). The solid curve is for EOS1, the dotted curve for EOS2 and the dashed curve for EOS3. Figure 4: Incompressibility as a function of density ratio for EOS's with different constituent mass as parameter. Dashed lines correspond to a perturbative massless three flavour quark gas with different values of \\(\\alpha_{s}\\) (see [14, 16]). Next we turn to the model of Zimanyi and Moszkowski [18], where the coupling constants are density dependent. It was shown [19] that the quark condensate derived from this model via the Hellmann-Feynman theorem is physically acceptable. Details may be found in [19], but the essence is that in the more conventional Walecka model the condensate increases with density, whereas in QCD a decrease is expected. In a recent paper [20] Krein and Vizcarra (KV in short) have put forward an EOS for nuclear matter which exhibits a transition from hadronic to quark matter.
KV start from a microscopic quark-meson coupling Hamiltonian with a density dependent quark-quark interaction and construct an effective quark-hadron Hamiltonian which contains interactions that lead to quark deconfinement at sufficiently high densities. At low densities, their model is equivalent to nuclear matter with confined quarks, i.e., a system of non-overlapping baryons interacting through effective scalar and vector meson degrees of freedom, while at very high densities it describes deconfined quark matter. The \\(K_{NM}\\) at the saturation density is fitted to be \\(248~{}MeV\\). Figure 6: Incompressibility of neutron matter (ZM model) with three strange matter EOS's with different constituent mass as parameter, as a function of density ratio. The solid line is for neutron matter, the dotted for \\(M_{Q}=350MeV\\), the dash-dot for \\(M_{Q}=310MeV\\) and the dashed for \\(M_{Q}=200MeV\\). Figure 7: Velocity of sound \\(v_{s}\\) as a function of density ratio for ZM neutron matter. This EOS also
The pairing energy of ud pairs in the spin singlet and colour \\(\\bar{3}\\) state is \\(-3.84\\ MeV\\)[23]2. Our model does not predict diquarks that are permanent - since the quarks must have high momentum transfer if they are to interact strongly with a force that is short-range. The formation and breaking of pairs give rise to endothermic and exothermic processes leading to fast cooling and superbursts respectively. Footnote 2: With this pairing energy superbursts observed from some astrophysical x-ray sources has been explained very well. We also note that the lowering of energy is not very large, as compared to the approximately \\(42\\ MeV\\) energy difference per baryon (\\(930.6\\ Fe^{56}\\) to \\(888\\ MeV\\)), seen in EOS1. So we do not expect any drastic change in the incompressibility or the velocity of sound due to diquark pairing, nor do we expect a phase transition to a colour-flavour locked state. We are grateful to the referee for inviting a comment on quark pairing in the strange matter scenario. ## References * [1] M. Dey, I. Bombaci, J. Dey, S. Ray & B. C. Samanta, Phys. Lett. B438 (1998) 123; Addendum B447 (1999) 352; Erratum B467 (1999) 303; Indian J. Phys. 73B (1999) 377. Figure 9: The velocity of sound in pure nuclear matter and in quark-nuclear system as a function of density ratio. For pure nuclear matter at \\(9n_{0}\\) the sound velocity is too close to that of light c, whereas for quark-nuclear system it is much less, about 0.5 c. * [2] N. Itoh, Prog. Theor. Phys. 44, 291 (1970), A. R. Bodmer, Phys. Rev. D 4, 1601 (1971); H. Terazawa, INS Rep. 336 (Univ. Tokyo, INS) (1979); E. Witten, Phys.Rev. D 30, 272(1984). * [3] E. Farhi and R. L. Jaffe, Phys. Rev. D 30,2379 (1984) * [4] J. Madsen, Lecture notes in Physics, Springer verlag, 516 (1999). * [5] I. Bombaci, Strange Quark Stars: structural properties and possible signatures for their existence, in \"Physics of neutron star interiors\", Eds. D. Blaschke, et al., Lecture notes in Physics, Springer verlag, 578, (2001). * [6] X-D. Li, I. Bombaci, M. Dey, J. Dey & E.P.J. van den Heuvel, Phys. Rev. Lett. 83 (1999) 3776. * [7] X-D. Li, S. Ray, J. Dey, M. Dey & I. Bombaci, Astrophys. J. Lett 527 (1999) L51. * [8] I. Bombaci and B. Datta, Astrophys. J. Lett 530 (2000) L69; R. Ouyed, J. Dey and M. Dey, astro-ph/0105109v3, Astron. & Astrophys. Lett. (in press). * [9] M. Sinha, J. Dey, M. Dey, S. Ray and S. Bhowmick, \"Stability of strange stars (SS) derived from a realistic equation of state\", Mod. Phys. Lett. A (in press). * [10] L. M. Franco, The Effect of Mass Accretion Rate on the Burst Oscillations in 4U 1728-34. astro-ph/0009189, Astrophys. J. Lett (2001). * [11] G. 't Hooft, Nucl. Phys. B72 (1974) 461; B75 (1974) 461. * [12] E. Witten, Nucl. Phys B160 (1979) 57. * [13] M. Malheiro, M. Fiolhais, A. R. Taurines, J. Phys. G 29 (2003) 1045; M. Malheiro, E. O. Azevedo, L. G. Nuss, M. Fiolhais and A. R. Taurines, astro-ph/0111148v1. * [14] R. K. Bhaduri, J. Dey and M. A. Preston, Phys. Lett. B 136 (1984) 289. * [15] J. L. Richardson, Phys. Lett. B82 (1979) 272. * [16] G. Baym, Statistical mechanics of quarks and hadrons. ed. H. Satz, (North holland, Amsterdam, 1981) p.17. * [17] R. Sharma, S. Mukherjee, M. Dey and J. Dey, Mod. Phys. Lett. A, 17 (2002) 827. * [18] J. Zimanyi and S. A. Mozkowski, Phys. Rev. C 44 (1990) 178. * [19] A. Delfino, J. Dey, M. Dey and M. Malheiro, Phys. Lett. B 363 (1995) 17. * [20] G. Krein & V. E. Vizcarra, arXiv:nucl-th/0206047 v1. 
Paper submitted to Proceedings of JJHF, University of Adelaide, Australia. * [21] D. Bailin and A. Love, Phys. Rep. C 107(1984) 325 * [22] Dey J. & Dey M. 1984, Phys. Lett. B 138, 200. * [23] M. Sinha, M. Dey, S. Ray and J. Dey, Mon. Not. R. Astron. Soc. 337 (2002) 1368-1372.
Strange stars calculated from a realistic equation of state (ReSS), which incorporates chiral symmetry restoration as well as deconfinement at high density [1], appear as compact objects in the mass-radius curve. We compare our calculations of the incompressibility for this EOS with that of nuclear matter. One of the nuclear matter EOS's has a continuous transition to ud-matter at about five times normal density. Another nuclear matter EOS incorporates density dependent coupling constants. From a look at the consequent velocity of sound, it is found that the transition to ud-matter seems necessary. Keywords: compact stars - realistic strange stars - dense matter - elementary particles - equation of state \\({}^{1}\\) Dept. of Physics, Presidency College, 86/1 College Street, Kolkata 700073, India; \\({}^{2}\\) [email protected], [email protected] \\({}^{3}\\) CSIR NET Fellow; \\({}^{4}\\) UGC Research Professor, Dept. of Physics, Maulana Azad College, 8 Rafi Ahmed Kidwai Road, Kolkata 700013, India; \\({}^{5}\\) Associate, IUCAA, Pune, India; \\({}^{**}\\) [email protected] \\({}^{6}\\) Inter University Centre for Astronomy and Astrophysics, Post bag 4, Ganeshkhind, Pune 411007, India; [email protected] \\({}^{7}\\) Department of Physics, Barasat Govt. College, Kolkata 700 124, India. \\({}^{*}\\) Work supported in part by DST grant no. SP/S2/K-03/01, Govt. of India.
# Power-law persistence and trends in the atmosphere: A detailed study of long temperature records J. F. Eichner,\\({}^{1,2}\\) E. Koscielny-Bunde,\\({}^{1,3}\\) A. Bunde,\\({}^{1}\\) S. Havlin,\\({}^{2}\\) and H.-J. Schellnhuber\\({}^{4}\\) \\({}^{1}\\)Institut fur Theoretische Physik III, Universitat Giessen, D-35392 Giessen, Germany \\({}^{2}\\)Mireux Center and Department of Physics, Bar Ilan University, Ramat-Gan, Israel \\({}^{3}\\)Postdann Institute for Climate Research, D-14412 Potsdam, Germany \\({}^{4}\\)Tyndall Centre for Climate Change Research, University of East Anglia, Norwich NR 7TJ, United Kingdom submitted: 12 December 2002 ## I Introduction The persistence of weather states on short terms is a well-known phenomenon: A warm day is more likely to be followed by a warm day than by a cold day and vice versa. The trivial forecast, that the weather of tomorrow is the same as the weather of today, was in previous times often used as a \"minimum skill\" forecast for assessing the usefulness of short-term weather forecasts. The typical time scale for weather changes is about 1 week, a time period that corresponds to the average duration of so-called \"general weather regimes\" or \"Grosswetterlagen\", so this type of short-term persistence usually stops after about 1 week. On larger scales, other types of persistence occur. One of them is related to circulation patterns associated with blocking [1]. A blocking situation occurs when a very stable high pressure system is established over a particular region and remains in place for several weeks. As a result the weather in the region of the high remains fairly persistent throughout this period. It has been argued recently [2] that this short-term persistence regime may be linked to solar flare intermittency. Furthermore, transient low pressure systems are deflected around the blocking high so that the region downstream of the high experiences a larger than usual number of storms. On even longer terms, a source for weather persistence might be slowly varying external (boundary) forcing such as sea surface temperatures and anomaly patterns. On the scale of months to seasons, one of the most pronounced phenomena is the El Nino southern oscillation event which occurs every 3-5 years and which strongly affects the weather over the tropical Pacific as well as over North America [3]. The question is, _how_ the persistence that might be generated by very different mechanisms on different time scales decays with time \\(s\\). The answer to this question is not easy. Correlations, and in particular long-term correlations, can be masked by trends that are generated, e.g., by the well-known urban warming. Even uncorrelated data in the presence of long-term trends may look like correlated ones, and, on the other hand, long-term correlated data may look like uncorrelated data influenced by a trend. Therefore, in order to distinguish between trends and correlations one needs methods that can systematically eliminate trends. Those methods are available now: both wavelet techniques (WT) (see, e.g., Refs. [4, 5, 6, 7]) and detrended fluctuation analysis (DFA) (see, e.g., Refs. [8, 9, 10, 11]) can systematically eliminate trends in the data and thus reveal intrinsic dynamical properties such as distributions, scaling and long-range correlations very often masked by nonstationarities. In a previous study [12], we have used DFA and WT to study temperature correlations in different climatic zones on the globe. 
The analysis focused on 14 continental stations, several of them were located along coastlines. The results indicated that the temperature variations are long-range power-law correlated above some crossover time that is of the order of 10 days. Above the crossover time, the persistence, characterized by the autocorrelation \\(C(s)\\) of temperature variations separated by \\(s\\) days, decayed as \\[C(s)\\sim s^{-\\gamma},\\] where, most interestingly, the exponent \\(\\gamma\\) had roughly the same value \\(\\gamma\\cong 0.7\\) for all continental records. Equation (1) can be used as a test bed for global climate models [13]. More recently, DFA was applied to study temperature correlations in the sea surface temperatures [14]. It was found that the temperature autocorrelation function \\(C(s)\\) again decayed by a power law, but with an exponent \\(\\gamma\\) close to 0.4, pointing towards a stronger persistence in the oceans than in the continents. In this paper, we considerably extend our previous analysis to study systematically temperature records of 95 stations. Most of them are on the continents, and several of them are on islands. Our results are actually in line with both earlier papers and in agreement with conclusions drawn from independent type of analysis by several groups [15, 16, 17]. Wefind that the continental records, including those on coastlines, show power-law persistence with \\(\\gamma\\) close to 0.7, while the island records show power-law correlations with \\(\\gamma\\) around 0.4. By comparing different orders of DFA that differ in the way trends are eliminated, we could also study the presence of trends in the records that lead to a warming of the atmosphere. We find that pronounced trends occur mainly at big cities and can be probably attributed to urban growth. Trends that cannot be attributed to urban growth occur in half of the island stations considered and on summit stations in the Alps. A majority of the stations showed no indications of trends. The article is organized as follows. In Sec. II, we describe the detrending analysis used in this paper, the DFA. In Sec. III, we present the result of this analysis. Sec. IV concludes the paper with a discussion. ## II The methods of analysis Consider a record \\(T_{i}\\), where the index \\(i\\) counts the days in the record, \\(i=1,2, ,N\\). The \\(T_{i}\\) represent the maximum daily temperature, measured at a certain meteorological station. For eliminating the periodic seasonal trends, we concentrate on the departures of \\(T_{i}\\), \\(\\Delta T_{i}=T_{i}-\\overline{T}_{i}\\), from their mean daily value \\(\\overline{T}_{i}\\) for each calendar date \\(i\\), say, 2nd of March, which has been obtained by averaging over all years in the record. Quantitatively, correlations between two \\(\\Delta T_{i}\\) values separated by \\(n\\) days are defined by the (auto) correlation function \\[C(n)\\equiv\\langle\\Delta T_{i}\\Delta T_{i+n}\\rangle=\\frac{1}{N-n}\\sum_{i=1}^{N -n}\\Delta T_{i}\\Delta T_{i+n}. \\tag{2}\\] If \\(\\Delta T_{i}\\) are uncorrelated, \\(C(n)\\) is zero for \\(n\\) positive. If correlations exist up to a certain number of days \\(n_{\\times}\\), the correlation function will be positive up to \\(n_{\\times}\\) and vanish above \\(n_{\\times}\\). A direct calculation of \\(C(n)\\) is hindered by the level of noise present in the finite records, and by possible nonstationarities in the data. 
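Before turning to the profile-based approach, the anomaly construction and the direct estimator of Eq. (2) are worth writing down explicitly. The sketch below applies them to a synthetic daily record; the synthetic data and the 365-day treatment that ignores leap days are our simplifications.

```python
import numpy as np

def anomalies(T, period=365):
    """Subtract the mean calendar-day value from a daily record T
    (leap days ignored for simplicity)."""
    T = np.asarray(T, dtype=float)
    clim = np.array([T[d::period].mean() for d in range(period)])
    return T - np.tile(clim, len(T) // period + 1)[:len(T)]

def autocorr(dT, n):
    """Direct estimate of C(n), Eq. (2), normalized by the variance."""
    dT = dT - dT.mean()
    return np.mean(dT[:-n] * dT[n:]) / np.mean(dT * dT)

# Synthetic 100-year "record": white noise plus a seasonal cycle.
rng = np.random.default_rng(0)
days = np.arange(100 * 365)
T = 10 * np.sin(2 * np.pi * days / 365) + rng.normal(size=days.size)

dT = anomalies(T)
print([round(autocorr(dT, n), 3) for n in (1, 10, 100)])
```

For this uncorrelated toy record the estimates scatter around zero, which illustrates how hard it is to read a slow power-law decay directly out of a noisy \\(C(n)\\).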
To reduce the noise we do not calculate \\(C(n)\\) directly, but instead study the \"profile\" \\[Y_{m}=\\sum_{i=1}^{m}\\Delta T_{i}.\\] We can consider the profile \\(Y_{m}\\) as the position of a random walker on a linear chain after \\(m\\) steps. The random walker starts at the origin and performs, in the \\(i\\)th step, a jump of length \\(\\Delta T_{i}\\) to the right, if \\(\\Delta T_{i}\\) is positive, and to the left, if \\(\\Delta T_{i}\\) is negative. The fluctuations \\(F^{2}(s)\\) of the profile, in a given time window of size \\(s\\), are related to the correlation function \\(C(s)\\). For the relevant case (1) of long-range power-law correlations, \\(C(s)\\sim s^{-\\gamma},\\quad 0<\\gamma<1\\), the mean-square fluctuations \\(\\overline{F^{2}(s)}\\), obtained by averaging over many time windows of size \\(s\\) (see below) asymptotically increase by a power law [18]: \\[\\overline{F^{2}(s)}\\sim s^{2\\alpha},\\quad\\alpha=1-\\gamma/2. \\tag{3}\\] For uncorrelated data (as well as for correlations decaying faster than \\(1/s\\)), we have \\(\\alpha=1/2\\). For the analysis of the fluctuations, we employ a hierarchy of methods that differ in the way the fluctuations are measured and possible trends are eliminated (for a detailed description of the methods we refer to Ref. [10]). (i) In the simplest type of fluctuation analysis (DFA0) (where trends are not going to be eliminated), we determine in each window the mean value of the profile. The variance of the profile from this constant value represents the square of the fluctuations in each window. (ii) In the _first order_ detrended fluctuation analysis (DFA1), we determine in each window the best linear fit of the profile. The variance of the profile from this straight line represents the square of the fluctuations in each window. (iii) In general, in the \\(n\\)th order DFA (DFAn) we determine in each window the best \\(n\\)th order polynomial fit of the profile. The variance of the profile from these best \\(n\\)th-order polynomials represents the square of the fluctuations in each window. By definition, DFA0 does not eliminate trends, while DFAn eliminates trends of order \\(n\\) in the profile and \\(n-1\\) in the original time series. Thus, from the comparison of fluctuation functions \\(F(s)\\) obtained from different methods one can learn about both, long-term correlations and the influence of trends. The DFA method is analogous to wavelet techniques that also eliminate polynomial trends systematically. For a detailed review of the method, see Refs. [6, 7]. The conventional techniques such as the direct evaluation of \\(C(n)\\), the rescaled range analysis (R/S) introduced by Hurst (for a review, see, e.g., Ref. [19]) or the power spectrum method [16, 17, 20, 21] can only be applied on stationary records. In the presence of trends they may overestimate the long-term persistence exponent. The R/S method is somewhat similar to the DFA0 analysis. ## III Analysis of temperature records Figure 1 shows the results of the DFA analysis of the daily temperatures (maximum or mean values) \\(T_{i}\\) of the following weather stations (the length of the records is written within the parentheses): (a) Vienna (A, 125 yr), (b) Perm (RUS, 113 yr), (c) Charleston (USA, 127 yr), and (d) Pusan (KOR, 91 yr). Vienna and Perm have continental climate, while Charleston and Pusan are close to coastlines. In the log-log plots the DFA1-3 curves are (except at small \\(s\\) values) approximately straight lines. 
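A compact implementation of the hierarchy just described (written from the description in (i)-(iii) above, not from the authors' code) may look as follows; the window-size choices are ours.

```python
import numpy as np

def dfa(x, order=2, scales=None):
    """Detrended fluctuation analysis of a (deseasoned) record x.
    order=0 reproduces DFA0, order=1 DFA1, order=2 DFA2, etc."""
    x = np.asarray(x, dtype=float)
    Y = np.cumsum(x - x.mean())          # the profile Y_m
    if scales is None:
        scales = np.unique(np.logspace(1, np.log10(len(x) // 4), 20).astype(int))
    F = []
    for s in scales:
        nwin = len(Y) // s
        segs = Y[:nwin * s].reshape(nwin, s)
        t = np.arange(s)
        # variance about the best polynomial fit of the given order
        # in each window of size s
        var = [np.var(seg - np.polyval(np.polyfit(t, seg, order), t))
               for seg in segs]
        F.append(np.sqrt(np.mean(var)))
    return scales, np.array(F)

# The fluctuation exponent alpha is the slope in the log-log plane:
# s, F = dfa(dT, order=2); alpha = np.polyfit(np.log(s), np.log(F), 1)[0]
```

The exponents quoted in the following correspond to straight-line fits of \\(\\log F(s)\\) versus \\(\\log s\\) from exactly this kind of output.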
For both the stations inside the continents and along coastlines the slope is \\(\\alpha\\cong 0.65\\). There exists a natural crossover (above the DFA crossovers at very small times) that can be best estimated from DFA0 [22]. As can be verified easily, the crossover occurs roughly at \\(s_{\\times}=10\\) days, which is the order of magnitude for a typical Grosswetterlage. Above \\(s_{\\times}\\), there exists long-range persistence expressed by the power-law decay of the correlation function with an exponent \\(\\gamma=2-2\\alpha\\cong 0.7\\). Figure 2 shows the results of the DFA analysis of the daily temperatures for two island stations: Wranengelja and Campbell Islands. Wranengelja Island is a large island between the East Siberian Sea and the Chukchi Sea. During the winter season, large parts of the water surrounding the island are usually frozen. Campbell Island is a small island belonging to New Zealand. Again, in the double logarithmic presentation, all DFA1-3 fluctuation functions are straight lines, but the slopes differ. While for Wranengelja the slope is 0.65, similar to the land stations shown before, the slope for Campbell Island is significantly larger, close to 0.8 (corresponding to \\(\\gamma=0.4\\)). It can be seen from Figs. 1 and 2 that sometimes the DFA0 curves have a larger slope than the DFA1-3 curves, and that usually the curves of DFA2 and DFA3 have the same slope for large \\(s\\) values. The fact that the DFA0 curve has a higher exponent indicates the existence of trends by which the long-term correlations are masked. Calculations using DFA0 alone will yield a higher correlation exponent and thus lead to a spurious overestimation of the long-term persistence. The fact that the DFA2 and DFA3 curves show the same asymptotic behavior indicates that possible nonlinearities in the trends are not significant. Otherwise the DFA2 curve (where only linear trends are eliminated) would show an asymptotic behavior different from DFA3. By comparing the DFA0 curves with the DFA2 curves, we can learn more about possible trends. Usually the effect of trends is seen as a crossover in the DFA0 curve. Below the crossover, the slopes of DFA0 and DFA2 are roughly the same, while above the crossover the DFA0 curve bends. Large trends are characterized by a short crossover time \\(s_{\\times}\\) and a large difference in the slopes between DFA0 and DFA2 (for a general discussion see Refs. [10] and [11]). A nice example of this is Vienna, where the DFA0 curve shows a pronounced crossover at about 3 yr. Above this crossover, the DFA0 curve bends up considerably, with an effective slope close to 0.8. For Pusan, the trend is less pronounced, and for Perm we do Figure 1: Analysis of daily temperature records of four representative weather stations on continents. The four figures show the fluctuation functions obtained by DFA0, DFA1, DFA2, and DFA3 (from top to bottom) for the four sets of data. The slopes are \\(0.64\\pm 0.02\\) (Vienna), \\(0.62\\pm 0.02\\) (Perm), \\(0.63\\pm 0.02\\) (Charleston), and \\(0.67\\pm 0.02\\) (Pusan). Lines with these slopes are plotted in the figures. The scale of the fluctuation functions is arbitrary.
Figure 3: Fluctuation analysis by DFA0 and DFA2 of daily temperature records of 20 representative weather stations: (1) Thursday Island (AUS, 53 yr), (2) Koror Island (USA, 54 yr), (3) Raoul Island (USA, 54 yr), (4) Hong Kong (C, 111 yr), (5) Anadir (RUS, 101 yr), (6) Hamburg (D, 107 yr), (7) Plymouth (GB, 122 yr), (8) Feodosija (UA, 113 yr), (9) Wellington (NZ, 67 yr), (10) Jena (D, 175 yr), (11) Brno (Cz, 128 yr), (12) Chita (RUS, 114 yr), (13) Tashkent (USB, 119 yr), (14) Postdam (D, 115 yr), (15) Jinsk (WY, 113 yr), (16) Oxford (GB, 155 yr), (17) Cheyenne (USA, 123 yr), (18) Kunming (C, 49 yr), (19) Wuxqiaolino (C, 40 yr), and (20) Zugspütze (D, 98 yr). Stations 1–3 are on islands, stations 4–9 are on coastlines, and stations 10–20 are inland stations, among them two stations (19 and 20) are on summits. The scales are arbitrary. To reveal that the exponents \\(\\alpha\\) are close to 0.65, we have divided the fluctuation functions by \\(s^{0.65}\\). Figure 2: Analysis of daily temperature records of two representative weather stations on islands. The DFA curves are arranged as in Fig. 1. The slopes are \\(0.71\\pm 0.02\\) (Campbell) and \\(0.65\\pm 0.02\\) (Wranengelja). Lines with these slopes are plotted in the figures. not see indications of trends. To reveal the presence of long-term correlations and to point out possible trends, we have plotted in Fig. 3(a) the DFA0 curves and in Fig. 3(b) the DFA2 curves for 20 representative stations around the globe. For convenience, the fluctuation functions have been divided by \\(s^{0.65}\\). We do not show results for those stations that were analyzed in Ref. [12]. Figure 3(b) shows again that continental and coastline stations have roughly the same fluctuation exponent \\(\\alpha\\cong 0.65\\), while islands may also have higher exponents. It seems that stations at peaks of high mountains [here we show Zuggitze (D, 98 yr, no. 19) and Wuxqiaoling (C, 40 yr, no. 20)] have a slightly lower exponent. From the 26 stations shown in Figs. 1-3, 8 show a larger exponent in the DFA0 treatment than in the DFA2 treatment. These stations are Thursday Island (no. 1 in Fig. 3), Koror Island (no. 2 in Fig. 3), as well as Vienna [Fig. 1(a)], Pusan [Fig. 1(d)], Hong Kong (no. 4 in Fig. 3), Jena (no. 10 in Fig. 3), Cheyenne (no. 17 in Fig. 3), and Zuggitze (no. 19 in Fig. 3). The other 18 stations do not show a difference in the exponents for DFA0 and DFA2, which suggests that the trends are either zero or too small to be detected by this sensitive method. We observe the largest trends for Hong Kong, Vienna, and Jena, where in all cases the crossover in the DFA0 curve is around 3 yr and the final slope is between 0.75 and 0.8. It is obvious that the greatest part of this warming is due to the urban growth of theses cities. Regarding the two islands, Koror shows a pronounced trend with a crossover time below 1 yr, while the trend we observe for Thursday Island is comparatively weak. It is not likely that the trends on the islands can be attributed to urban warming. Figure 4 summarizes our results for all the stations analyzed. Fig. 4(a) shows the histogram for the values of the exponent \\(\\alpha\\) obtained by DFA0, while Fig. 4(b) shows the corresponding histogram obtained by DFA2. Both histograms are quite similar. For DFA2 the average exponent \\(\\alpha\\) is \\(0.66\\pm 0.06\\) and for DFA0 it is \\(0.68\\pm 0.07\\). The maxima become sharper when the islands are eliminated from the figures. 
The slight shift towards larger \(\alpha\) values in DFA0 is due to trends. The magnitude of the trends can be roughly characterized by the difference \(\delta\alpha\) between the slopes of DFA0 and DFA2. We found that 7 of the 15 island stations and 54 of the 80 continental stations showed no significant trend, with \(\delta\alpha\leq 0.02\). We observed a small trend, with \(0.03\leq\delta\alpha\leq 0.05\), for 3 island and 9 continental stations. A pronounced trend, with \(\delta\alpha\geq 0.06\), was found for 5 island and 13 continental stations. Among these 13 continental stations are Hong Kong, Bordeaux, Prague, Seoul, Sydney, Urumchi, Swerdlowsk, and Vienna, where a large part of the warming can be attributed to the urban growth of the cities in the last century. Two of these stations [Santis (CH) and Sonnblick (A)] are on top of high mountains.

Since the island stations have a larger exponent \(\alpha\) than the continental stations, it is likely that the long-term persistence originates from the coupling of the atmosphere to the oceans. One might thus expect that for island stations \(\alpha\) will increase with the distance to the continents, and that for continental stations \(\alpha\) will decrease with the distance to the coastline. To test whether the exponent \(\alpha\) depends on the distance \(d\) to the continental coastlines, we have plotted in Fig. 5 \(\alpha\) as a function of \(d\) for both island and continental stations. It is remarkable that islands far away from the continents do not show a larger exponent than islands close to the coastlines, and inner-continental stations far from the ocean do not show smaller exponents than coastline stations. This second result is in disagreement with a recent claim that \(\alpha=0.5\) for inner-continental stations far away from the oceans [23].

Figure 4: Histograms of the values of the fluctuation exponent \(\alpha\) obtained (a) from DFA0, where trends are not eliminated, and (b) from DFA2, where linear trends are eliminated systematically on all time scales.

Figure 5: The scaling exponent \(\alpha\) as a function of the distance \(d\) between the stations and the continental coastlines, for island stations (\(\circ\)), continental stations (\(\triangle\)), and coastline stations (\(\times\)). Many of the coastline stations (\(d=0\)) have the same \(\alpha\) value, and we indicate their number in the figure.

## IV Discussion

In this paper, we have used a hierarchy of detrending analysis methods (DFA0-DFA3) to study long temperature records around the globe. We concentrated mainly on those areas of the globe (North America, Europe, Asia, and Australia) where long records are available. The main results of the study are the following.

(i) The temperature persistence decays, after a crossover time that is typically of the order of the duration of a Grosswetterlage, by a power law, with an exponent \(\alpha\) that has a very narrow distribution for continental stations. The mean value of the exponent is close to 0.65, in agreement with earlier calculations based on different methods [12, 15, 16, 17].

(ii) On islands, the exponent shows a broader distribution, varying from 0.65 to 0.85, with an average value close to 0.8. This finding is in qualitative agreement with the results of a recent analysis of sea surface temperature records, where long-term persistence with an average exponent close to 0.8 has also been found [14].
Since the oceans cover more than 2/3 of the globe, one may expect that the mean global temperature is also characterized by long-term persistence, with an exponent close to 0.8.

(iii) In the vast majority of stations we did not see indications of a warming of the atmosphere. Exceptions are mountain stations in the Alps [Zugspitze (D), Santis (CH), and Sonnblick (A)], where urban warming can be excluded. Also, in half of the islands we studied, we found pronounced trends that most probably cannot be attributed to urban warming. Most of the continental stations where we observed significant trends are large cities, where the fast urban growth in the last century probably gave rise to temperature increases.

When analyzing warming phenomena in the atmosphere, it is essential to employ methods that can distinguish, in a systematic way, between trends and long-term correlations, in contradistinction to a number of conventional schemes that have been applied in the past. Such schemes run the risk of confusing the correlatedness of natural climate variability with genuine regime shifts enforced by anthropogenic interference through greenhouse gas emissions. The fact that we found it difficult to discern warming trends at many stations that are not located in rapidly developing urban areas may indicate that the actual increase in global temperature caused by anthropogenic perturbation is less pronounced than estimated in the last IPCC (Intergovernmental Panel on Climate Change) report [24].

###### Acknowledgements.
We are grateful to Professor S. Brenner for very useful discussions. We would like to acknowledge financial support from the Deutsche Forschungsgemeinschaft and the Israel Science Foundation.

## References
* [1] J.G. Charney and J. Devore, J. Atmos. Sci. **36**, 1205 (1979).
* [2] N. Scafetta and B.J. West, Phys. Rev. Lett. **90**, 248701 (2003).
* [3] _The Science of Disasters_, edited by A. Bunde, J. Kropp, and H.-J. Schellnhuber (Springer, New York, 2002).
* [4] _Wavelets: Theory and Applications_, edited by G. Erlebacher, M.Y. Hussaini, and L.M. Jameson (Oxford University Press, Oxford, 1996).
* [5] M. Holschneider, _Wavelets: An Analysis Tool_ (Oxford University Press, Oxford, 1996).
* [6] A. Arneodo, Y. d'Aubenton-Carafa, E. Bacry, P.V. Graves, J.F. Muzy, and C. Thermes, Physica D **96**, 291 (1996).
* [7] A. Arneodo, B. Audit, N. Decoster, J.F. Muzy, and C. Vaillant, in _The Science of Disasters_ (Ref. [3]), p. 28.
* [8] C.-K. Peng, S.V. Buldyrev, S. Havlin, M. Simons, H.E. Stanley, and A.L. Goldberger, Phys. Rev. E **49**, 1685 (1994).
* [9] A. Bunde, S. Havlin, J.W. Kantelhardt, T. Penzel, J.H. Peter, and K. Voigt, Phys. Rev. Lett. **85**, 3736 (2000).
* [10] J.W. Kantelhardt, E. Koscielny-Bunde, H.A. Rego, S. Havlin, and A. Bunde, Physica A **295**, 441 (2001).
* [11] K. Hu, P.Ch. Ivanov, Z. Chen, P. Carpena, and H.E. Stanley, Phys. Rev. E **64**, 011114 (2001).
* [12] E. Koscielny-Bunde, A. Bunde, S. Havlin, H.E. Roman, Y. Goldreich, and H.-J. Schellnhuber, Phys. Rev. Lett. **81**, 729 (1998).
* [13] R.B. Govindan, D. Vyushin, A. Bunde, S. Brenner, S. Havlin, and H.-J. Schellnhuber, Phys. Rev. Lett. **89**, 028501 (2002).
* [14] R.A. Monetti, S. Havlin, and A. Bunde, Physica A **320**, 581 (2003).
* [15] E. Koscielny-Bunde, A. Bunde, S. Havlin, and Y. Goldreich, Physica A **231**, 393 (1996).
* [16] J.D. Pelletier and D.L. Turcotte, J. Hydrol. **203**, 198 (1997); J.D. Pelletier, J. Clim. **10**, 1331 (1997); J.D. Pelletier and D.L. Turcotte, Adv. Geophys. **40**, 91 (1999).
* [17] P. Talkner and R.O. Weber, Phys. Rev. E **62**, 150 (2000); P. Talkner and R.O. Weber, J. Geophys. Res. [Atmos.] **106**, 20131 (2001).
* [18] A.-L. Barabasi and H.E. Stanley, _Fractal Concepts in Surface Growth_ (Cambridge University Press, Cambridge, 1995).
* [19] J. Feder, _Fractals_ (Plenum, New York, 1989).
* [20] D.L. Turcotte, _Fractals and Chaos in Geology and Geophysics_ (Cambridge University Press, Cambridge, 1992).
* [21] S. Lovejoy and D. Schertzer, _Nonlinear Variability in Geophysics: Scaling and Fractals_ (Kluwer Academic Publishers, Dordrecht, 1991); D. Lavallee, S. Lovejoy, and D. Schertzer, in _Fractals in Geography_, edited by L. DeCola and N. Lam (Prentice-Hall, Englewood Cliffs, NJ, 1993), pp. 158-192; G. Pandey, S. Lovejoy, and D. Schertzer, J. Hydrol. **208**, 62 (1998).
* [22] The crossover increases, as an artefact of the DFA method, with increasing order of DFA. DFA0 and DFA1 give the best estimate for the crossover (see also Refs. [9-11]).
* [23] K. Fraedrich and R. Blender, Phys. Rev. Lett. **90**, 108501 (2003).
* [24] _Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change (IPCC)_, edited by J.T. Houghton _et al._ (Cambridge University Press, Cambridge, 2001).
###### Abstract
We use several variants of the detrended fluctuation analysis to study the appearance of long-term persistence in temperature records, obtained at 95 stations all over the globe. Our results basically confirm earlier studies. We find that the persistence, characterized by the correlation \(C(s)\) of temperature variations separated by \(s\) days, decays for large \(s\) as a power law, \(C(s)\sim s^{-\gamma}\). For continental stations, including stations along the coastlines, we find that \(\gamma\) is always close to 0.7. For stations on islands, we find that \(\gamma\) ranges between 0.3 and 0.7, with a maximum at \(\gamma=0.4\). This is consistent with earlier studies of the persistence in sea surface temperature records, where \(\gamma\) is close to 0.4. In all cases, the exponent \(\gamma\) does not depend on the distance of the stations to the continental coastlines. By varying the degree of detrending in the fluctuation analysis we also obtain information about trends in the temperature records.

PACS numbers: 89.75.Da, 92.60.Wc, 05.45.Tp
# Captain Cook, the Terrestrial Planet Finder and the Search for Extraterrestrial Intelligence

Charles Beichman

## 1 Historical Introduction

This conference takes place in the Whitsunday group of islands discovered on Whitsunday (June 3), 1770, by the English explorer, Capt. James Cook. Surprisingly, the stated goal of Cook's voyage was astronomical in nature. As described in Richard Hough's excellent biography (1997) of Capt. Cook, the story has echoes in today's searches for other worlds.

Johannes Kepler and Edmund Halley (1716) predicted that Venus would traverse the face of the Sun in 1761 and 1769.¹ These astronomers realized that comparing the timing of the transit from multiple locations would yield a measurement of cosmological importance, the Earth-Sun distance or Astronomical Unit (AU), then known to no better than a factor of two (Sellers, 2001; and _www.dsellers.demon.co.uk/venus/ven_ch4.htm_). Just as today's scientists argued that the Hubble Space Telescope was necessary to determine the Hubble constant, and thus the scale of the Universe, so too did the astronomers of the \(18^{th}\) century argue for an accurate determination of the scale of the Solar System.

Because the 1761 transit was poorly observed due to bad weather, in 1767 the Royal Society established a committee to "report on places where it would be advisable to make the observations, the methods to be pursued, and the persons best suited to carry out the work" of observing the 1769 transit. After a period of deliberation, the Royal Society, acting much like today's National Academy of Sciences, forwarded a proposal to King George III calling for an expedition to Tahiti to observe the transit. The proposal offered a remarkably modern set of arguments (Hough, 1994, p. 44):

* It highlighted the practical applications of the results, noting that the transit measurement would "contribute greatly to the improvement of astronomy on which Navigation so much depends."
* It noted with concern potential international competition: "The French, Spaniards, Danes and Swedes are making the proper dispositions for the Observation thereof ... The Empress of Russia has given directions for having the same observed ... It would cast dishonor [on the British nation] should they neglect to have the correct observations made of this important phenomenon."
* And finally, the proposal included an estimate of the cost of the expedition, exclusive of launch vehicle, that would prove to be an underestimate by a thoroughly modern factor of \(\pi\): "The expense would amount to \(\pounds 4,000\), exclusive of the expense of the ship. The Royal Society is in no condition to defray the expense."

King George approved the project. The Royal Navy provided the launch vehicle, the collier _Endeavour_, and Lieutenant James Cook. The Royal Society provided an observer, Charles Green, and the gentleman-naturalist Joseph Banks, who also contributed more than \(\pounds 10,000\) of his own funds. On August 26, 1768, Cook set off with 94 passengers and crew and supplies for 18 months, including 604 gallons of rum and 4 tons of beer. They arrived in Tahiti one year later, six weeks before the transit, and set up an observatory at "Point Venus." The observations were successfully made on June 3, 1769.

The rest of the voyage was not without event, as Cook undertook to fulfill the second (and secret) part of his Admiralty orders, namely to search for the mysterious southern continent.
He circumnavigated New Zealand's North and South Islands and explored the east coast of Australia from Cape Hicks, past Sydney and Botany Bay, up to the Great Barrier Reef. On sailing up the Whitsunday Passage, just a few miles from Hamilton Island where we now sit, Cook noted: "This land is diversified by hill and valley, wood and lawn, with a green pleasant experience." A few days later, however, he ran aground on the reef, which he termed an "Insane Labyrinth", nearly sinking before limping to shore for a month of repairs.² Cook returned to England in 1771 with transit data which, when combined with measurements from other sites, led to the determination of the Astronomical Unit to within 3% of the modern value, a major advance in observational cosmology (Sellers 2001).

## 2 Scientific Introduction

Two hundred years after Cook's voyage, scientists have started to consider the challenge of finding life on planets beyond our own. As summarized in Woolf and Angel (1998) and Beichman et al. (1999, 2000, and 2002), modern technology offers a realistic opportunity to address this ancient question. In March 2000, the Terrestrial Planet Finder (TPF) project at JPL selected four university-industry teams to examine a broad range of instrument architectures capable of directly detecting radiation from terrestrial planets orbiting nearby stars, characterizing their surfaces and atmospheres, and searching for signs of life. Over the course of two years, the four teams, incorporating more than 115 scientists from 50 institutions, worked with more than 20 aerospace and engineering firms. In the first year of study, the contractors and the TPF Science Working Group (TPF-SWG) examined over 60 different ideas for planet detection. Four main concepts, including a number of variants, were selected for more detailed study. Of these concepts, two broad architectural classes appear sufficiently realistic to the TPF-SWG, to an independent Technology Review Board, and to the TPF project that further technological development is warranted in support of a new start around 2010. _The primary conclusion from the effort of the past two years is that with suitable technology investment, starting now, a mission to detect terrestrial planets around nearby stars could be launched within a decade._

The detection of Earth-like planets will not be easy. The targets are faint and located close to parent stars that are \(>\)1 million (in the infrared) to \(>\)1 billion times (in the visible) brighter than the planets. However, the detection problem is well defined and can be solved using technologies that can be developed within the next decade. We have identified two paths to the TPF goal of finding and characterizing planets around 150 stars out to distances of about 15 pc:

\(\bullet\) At visible wavelengths, a large telescope (a 4×10 m elliptical aperture in one design and an 8×8 m square aperture in another) equipped with a selection of advanced optics to reject scattered and diffracted starlight (apodizing pupil masks, coronagraphic stops, and deformable mirrors) offers the prospect of directly detecting reflected light from Earths.

\(\bullet\) At mid-IR wavelengths, nulling interferometer designs utilizing from three to five 3-4 m telescopes located either on separated spacecraft or on a large, 40 m boom can directly detect the thermal radiation emitted by Earths.
The TPF-SWG established that observations in either the optical/near-infrared or the thermal infrared wavelength region would provide important information on the physical characteristics of any detected planets, including credible signposts of life. In fact, the two wavelength regions provide complementary information, so that in the long run both would be desirable. The choice of wavelength regime for TPF will, in the estimation of the TPF-SWG, be driven by the technological readiness of a particular technique.

## 3 Scientific Goals for the Terrestrial Planet Finder

The TPF Science Working Group (TPF-SWG) established a Design Reference Program to give broad guidelines for defining architectures for TPF. The goals for TPF were set out at the December 2000 meeting of the TPF-SWG:

_Primary Goal for the Terrestrial Planet Finder (TPF)_: TPF must detect radiation from any Earth-like planets located in the habitable zones surrounding 150 solar-type (spectral types F, G, and K) stars. TPF must: 1) characterize the orbital and physical properties of all detected planets to assess their habitability; and 2) characterize the atmospheres and search for potential biomarkers among the brightest Earth-like candidates.

_The Broader Scientific Context_: Our understanding of the properties of terrestrial planets will be scientifically most valuable within a broader framework that includes the properties of all planetary system constituents, e.g. both gas giant and terrestrial planets, and debris disks. Some of this information, such as the properties of debris disks and the masses and orbital properties of gas giant planets, will become available with currently planned space or ground-based facilities. However, the spectral characterization of most giant planets will require observations with TPF. TPF's ability to carry out a program of comparative planetology across a range of planetary masses and orbital locations in a large number of new solar systems is by itself an important scientific motivation for the mission.

_Astrophysics with TPF_: An observatory with the power to detect an Earth orbiting a nearby star will be able to collect important new data on many targets of general astrophysical interest. Architectural studies should address both the range of problems and the fundamental new insights that would be enabled with a particular design.

## 4 Biomarkers for TPF

Early TPF-SWG discussions made it apparent that observations in either the visible or the mid-infrared portion of the spectrum were technically feasible and scientifically important. A sub-committee of the TPF-SWG was established under the leadership of Dave Des Marais to address the wavelength regimes for TPF. The conclusions of their report (Des Marais _et al._ 2002) can be summarized briefly as follows:

\(\bullet\) Photometry and spectroscopy in either the visible or the mid-IR region would give compelling information on the physical properties of planets as well as on the presence and composition of an atmosphere.

\(\bullet\) Molecular oxygen (O\({}_{2}\)) and its photolytic by-product ozone (O\({}_{3}\)) are the most robust indicators of photosynthetic life on a planet. Even though H\({}_{2}\)O is not a bio-indicator, its presence in liquid form on a planet's surface is considered essential to life and is thus a good signpost of habitability.
\\(\\bullet\\) Species such as H\\({}_{2}\\)O, CO, CH\\({}_{4}\\), and O\\({}_{2}\\) may be present in visible-light spectra (0.7 to 1.0 \\(\\mu\\)m minimum and 0.5 to 1.1 \\(\\mu\\)m preferred) of Earth-like planets. An ozone band at 0.3 \\(\\mu\\)m and a general rise in albedo due to Rayleigh scattering are among the few features in the blue-UV part of the spectrum. The lines of these species can be resolved with spectral resolving powers of \\(\\lambda/\\Delta\\lambda\\sim 25-100\\). \\(\\bullet\\) Species such as H\\({}_{2}\\)O, CO\\({}_{2}\\), CH\\({}_{4}\\), and O\\({}_{3}\\) may be present in mid-infrared spectra of Earth-like planets (8.5 to 20 \\(\\mu\\)m minimum and 7 to 25 \\(\\mu\\)m preferred). These lines (except CH\\({}_{4}\\)) can be resolved with spectral resolving powers of \\(\\lambda/\\Delta\\lambda\\sim 5-25\\). \\(\\bullet\\) The influence of clouds, surface properties (including the presence of photosynthetic pigments such as chlorophyll), rotation, etc. can have profound effects on the photometric and spectroscopic appearance of planets and must be carefully addressed with theoretical studies in the coming years (e.g. Ford, Seager, and Turner 2001). In conclusion, the TPF-SWG agreed that either wavelength region would provide important information on the nature of detected planets and that the choice between wavelengths should be driven by technical considerations. ## 5 TPF Architectural Studies After an initial year during which the four study teams investigated more than 60 designs, the teams plus JPL identified four architectural classes (with a number of variants) worthy of more intensive study. High level descriptions of these architectures are given below; more detailed information is available in the summary of the recent architecture studies (Beichman _et al._ 2002), the final reports from the teams, and, for the separated spacecraft interferometer, the TPF Book (1999). ### Visible Light Coronagraphs Two groups (Ball and Boeing-SVS) investigated the potential for a visible light coronagraph to satisfy TPF's goals. While there are differences between the designs, there are major similarities: 1) a large optical surface (4 \\(\\times\\) 10 m for Ball, 8\\(\\times\\)8 m for Boeing-SVS); 2) a highly precise, lightweight primary mirror equipped with actuators for figure control with surface quality of order 1-5 nm depending on spatial frequency; and 3) a variety of pupil masks (square, Gaussian, or other (Spergel and Kasdin 2001) and/or Lyot stops) to suppress diffracted starlight. In the case of the Ball designs, a key component was a small deformable mirror with \\(\\sim 100\\times 100=10^{4}\\) elements capable of correcting residual mid-spatial frequency errors to \\(\\lambda/3,000\\) and stable to \\(\\lambda/10,000\\). In the Ball design, the combination of pupil masks and the deformable mirror reduces the ratio of starlight (scattered or diffracted)to planet light to approximately unity over an angular extent between \\(\\sim 5\\lambda/D\\) and \\(100\\lambda/D\\). With these features, the Ball systems are able to conduct a survey of 150 stars with images taken at 3 epochs for confirmation and orbital determination in less than half a year. The Boeing-SVS system, as proposed, takes more time to complete such a survey because without a deformable mirror the ratio of starlight to planet light is about 100 times worse than in the Ball design; addition of a deformable mirror would result in comparable performance for the two telescopes. 
In under a day per star, the Ball system could detect (SNR=5 at spectral resolution \(R=\lambda/\Delta\lambda\sim 25-75\)) various atmospheric tracers, including O\({}_{2}\), a critical signpost for the presence of photosynthetic life. The study teams pointed out that the potential for ancillary science was particularly impressive for the visible systems, since it would be straightforward to add a complement of traditional astronomical instruments, e.g. UV-optical imagers and spectrographs. Operated on an 8-10 m telescope, such instruments would represent a giant advance over the present UV-optical performance of the Hubble Space Telescope. Of particular interest would be the ability to make diffraction-limited images at UV wavelengths with \(<5\) milli-arcsec resolution. Future studies will have to assess whether the specialized requirements of planet finding, e.g. an off-axis secondary, might compromise the general astrophysics potential of a visible/ultraviolet system. Conversely, NASA will have to weigh whether specialized needs, such as high UV throughput requiring special coatings and careful attention to contamination issues, might significantly increase the cost of the observatory or compromise its planet-finding performance.

The greatest technical risk for the visible coronagraph lies in the development, manufacturing and implementation of a large primary mirror with ultra-low wavefront errors, as well as of the components associated with starlight suppression. The coronagraphs themselves are functionally simple, and although the demands for system performance are challenging, none are thought to be insurmountable. However, the problem of fabricating and launching a large (8-10 m) mirror cannot be overemphasized. The TPF Project's independent Technology Review Board noted that there exists no capability to fabricate such a high-precision (3-5 times better than Hubble's mirror, 5-10 times better than NGST's mirror), lightweight optical element for ground or space. But even if an 8-10 m system proves to be too difficult to implement on the TPF timetable, a 2-4 m class telescope could demonstrate high-dynamic-range coronagraphic imaging and carry out an exciting scientific program. Such a system could find Earths only around the closest dozen stars because of its degraded angular resolution, but it could find and characterize Jupiters around many more distant stars. A telescope of this scale might fit into the budget of a Discovery mission.

### Nulling Infrared Interferometers

Lockheed Martin and JPL examined two versions of the infrared nulling interferometer: structurally connected and separated spacecraft. The Lockheed Martin study concluded that a structurally connected infrared interferometer with four 3.5 m diameter telescopes on a fixed 40 m baseline comes close to achieving TPF's goals. The system uses four collinear telescopes arranged as two interleaved Bracewell nulling interferometers to reject starlight adequately, so that stellar leakage does not compromise the overall system noise. The array would be rotated around the line of sight to the star over a 6-8 hour period. The telescopes can be combined in different pairs to achieve the short and long baselines needed to observe distant or nearby stars.
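As a rough illustration of the baseline arithmetic implied here (our own sketch; the 15 pc distance and 10 \(\mu\)m wavelength are assumptions), one can ask what baseline places the first transmission maximum of a two-element Bracewell pair, at \(\theta=\lambda/2B\), on a habitable-zone planet.

```python
import math

MAS_TO_RAD = math.pi / (180 * 3600 * 1000)

def bracewell_baseline_m(planet_sep_mas, wavelength_m):
    """Baseline B placing the first transmission peak (theta = lambda/2B)
    of a two-element Bracewell nuller on the planet."""
    return wavelength_m / (2 * planet_sep_mas * MAS_TO_RAD)

# Habitable zone of a solar twin at 15 pc: ~1 AU -> 1000/15 ~ 67 mas.
b = bracewell_baseline_m(1000 / 15, 10e-6)    # observing at 10 microns
print(f"required baseline ~ {b:.0f} m")        # ~15 m, well below 40 m
```

More distant stars push the required baseline up while nearer stars push it down, which is why the ability to combine different telescope pairs into short and long baselines matters.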
The nulled outputs of the combined pairs are combined again to yield a \(\theta^{4}\) null, or an effective \(\theta^{3}\) null with phase chopping.³ The separated spacecraft version of the nulling interferometer was described in the 1999 TPF report (Beichman _et al._ 1999; see also Woolf and Angel 1998). It uses a different arrangement of telescopes to produce a deeper, \(\theta^{6}\), null that can be tuned to resolve most effectively the habitable zone around each target. Because the stellar leakage is reduced in this design, the stability requirements are relaxed relative to the structurally connected interferometer. The \(\theta^{6}\) null is, however, less efficient in its use of baseline, requiring roughly 1.5-2 times longer baselines than the structurally connected system. Providing an 80-100 m baseline leads, in turn, to the likely requirement for a more complex separated spacecraft system. Thus, a near-term study must investigate whether a 40 m system can satisfy TPF's goals. If not, then NASA should aggressively pursue the development of a separated spacecraft nulling interferometer. It should be mentioned that the European Space Agency (ESA) has studied a two-dimensional, separated spacecraft array of infrared telescopes for its Darwin mission. An industrial study by Alcatel found that this version of a planet-finding mission was technically feasible.

The ancillary science possible with an interferometer is likely to be more specialized than for an 8-10 m visible telescope equipped with general-purpose instruments. However, the prospect of a telescope with NGST-like sensitivity, but with 10\(\times\) better angular resolution, imaging the cores of protostars, active galaxies, and high-redshift quasars is an exciting one.

The largest area of technical risk for the infrared interferometers is not in the performance of the individual components but in the operation of the various elements as a complete system. Most of the required elements are either under development and making good progress, or are reasonable extensions of technology being developed for missions and ground observatories that will be in place well before TPF needs them. However, the system complexity of the separated spacecraft version cannot be overemphasized. Such a system would demand at least one precursor space mission: a formation-flying interferometer, such as the Starlight project, to validate the complex control algorithms and beam transport needed for this version of TPF.

## 6 Terrestrial Planet Finder (TPF) and the Search for Extraterrestrial Intelligence (SETI)

This conference offers a happy opportunity for researchers pursuing two quite different techniques for finding extraterrestrial life to come together. NASA's Origins program has focused on a search for planets and primitive life, in part because of political considerations that ended NASA's involvement in SETI over a decade ago. We are fortunate that dedicated scientists like Jill Tarter and Frank Drake, as well as far-seeing donors such as Paul Allen, have continued to pursue SETI in the context of the vibrant research activities we heard described at this conference.
Despite the political barriers between NASA's programs and SETI, the unity of these efforts can be seen through an examination of the Drake equation:

\[N=(Star~Formation~Rate)\times f_{solar~type~stars}\times f_{planets}\times N_{\oplus}\times f_{life}\times f_{intelligence}\times f_{communicative}\times Lifetime.\]

Twentieth century astronomers determined the first two terms, the star formation rate in the galaxy and the fraction of stars that are of solar type. Present-day radial velocity studies, combined with future transit experiments (the Kepler mission; Borucki et al. 2001) and astrometric observations (the Space Interferometry Mission, SIM), will determine \(f_{planets}\), the fraction of stars with planets, and \(N_{\oplus}\), the number of Earth-like planets in the habitable zone, both statistically and around our nearest neighboring stars. TPF will characterize the Earth-like planets, habitable or not, and provide the first measurements (or upper limits) of \(f_{life}\), the fraction of suitable planets that develop life. Thus, within a generation, we will have well-established values for the next three terms in the Drake equation.

However, astronomical observations cannot determine the final three terms in the equation: the fraction of planets that develop intelligent life, the fraction of intelligent civilizations that can (or want to) communicate, and the lifetime of a communicative civilization. These terms are the realm of SETI. However, it is the basic frustration of SETI that we cannot separate these remaining terms without at least one successful contact. This inability to interpret a negative result makes SETI an inherently non-scientific experiment despite its highly technological nature. On the other hand, SETI is an important program of _exploration_ that must be carried out with our best technology and with great scientific rigor, in light of the importance of a positive outcome. Despite the interesting papers presented at this conference, useful information on the last terms in the Drake equation will not come from terrestrial analogy. Such attempts remind one of Capt. Cook's critics, who tried to deduce the existence of a grand Southern Continent by analogy with the Northern Hemisphere (Hough 1994, pp. 220, 306). It took Cook's explorations to rebut these claims and discover the truth. Similarly, it will take rigorous searches (or serendipitous discovery) to determine whether there is another planet with _intelligent_ life in the Universe.
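Read as arithmetic, the equation is just a product of rates and fractions. The sketch below makes this explicit; every numerical input is a placeholder chosen for illustration, not an estimate endorsed by the text.

```python
def drake_n(star_formation_rate, f_solar_type, f_planets, n_earthlike,
            f_life, f_intelligence, f_communicative, lifetime_yr):
    """Number of communicative civilizations in the Galaxy.

    The first four factors are becoming measurable (surveys, Kepler,
    SIM, TPF); the last three remain unconstrained without a contact.
    """
    return (star_formation_rate * f_solar_type * f_planets * n_earthlike
            * f_life * f_intelligence * f_communicative * lifetime_yr)

# Placeholder inputs: 10 stars/yr, 10% solar type, 50% with planets,
# one habitable planet per system, and guesses for the rest.
print(drake_n(10, 0.1, 0.5, 1.0, 0.1, 0.01, 0.1, 10_000))   # -> 0.5
```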
## 7 Conclusions

NASA, together with its potential international partners, has begun to address the challenge of looking for habitable planets and seeking signs of life beyond the Solar System. Captain Cook's voyages remind us that we have asked these questions before and, after much hard work, have been rewarded with new continents to explore. As Halley said in his paper predicting the 1761 and 1769 transits of Venus (Halley 1716; quoted in Sellers 2001):

_"We therefore recommend again and again, to the curious investigators of the stars to whom, when our lives are over, these observations are entrusted, that they, mindful of our advice, apply themselves to the undertaking of these observations vigorously. And for them we desire and pray for all good luck, especially that they be not deprived of this coveted spectacle by the unfortunate obscuration of cloudy heavens, and that the immensities of the celestial spheres, compelled to more precise boundaries, may at last yield to their glory and eternal fame."_

Cook's voyage, in the service of science and exploration, resonates with us today as we use transits of extra-solar planets and other techniques to search for new worlds and for life beyond Earth. That Cook, Banks, and the English society that dispatched them had multiple motivations for making this voyage -- some noble, some base -- only highlights the similarities with modern exploration, where personal ambition, local politics, and practical applications mix with the search for scientific truth. Today's scientists and policy makers would do well to consider the linkages between science and exploration that future voyages of discovery may enable. The technologies needed to look for other habitable worlds are within our grasp. We need only the will (and the funding) to undertake the search.

## 8 Acknowledgements

The research described in this paper was performed at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. This work was supported by the TPF project. The author acknowledges valuable contributions from the TPF Science Working Group and the contractor teams, as well as the dedication and hard work of Dan Coulter and Chris Lindensmith at JPL. The hospitality and dedication of the conference organizers and the beauty of the conference site were appreciated by all the participants.

## References

Beichman, C.A. 2000, _Planetary Systems in the Universe_, International Astronomical Union Symposium no. 202, Manchester, England (August 2000).

Borucki, W. J., Koch, D. G., and Jenkins, J. M. 2001, _BAAS_, **199**, 115.04.

Des Marais, D., _et al._ 2002, _Astrobiology_, in press.

Ford, E. B., Seager, S., and Turner, E. L. 2001, _Nature_, **412**, 885.

Halley, E. 1716, _Philosophical Transactions_, XXIX, _A new Method of determining the Parallax of the Sun, or his Distance from the Earth_, Sec. R. S. No. 348, p. 454.

Hough, Richard 1997, _Captain James Cook: A Biography_ (New York: W. W. Norton).

Sellers, David 2001, _The Transit of Venus_ (London: Magavelda Press).

Spergel, D. and Kasdin, J. 2001, _BAAS_, **199**, 86.03.

_Summary Report on Architectural Studies for the Terrestrial Planet Finder_, 2002, edited by Beichman, C. A., Coulter, D., Lindensmith, C., and Lawson, P., JPL Report 02-11.

_The Terrestrial Planet Finder (TPF): A NASA Origins Program to Search for Terrestrial Planets_, 1999, edited by Beichman, C. A., Woolf, N. J., and Lindensmith, C. A., JPL Report 99-3.

Woolf, N. and Angel, J. R. 1998, _ARAA_, **36**, 507.
###### Abstract
Over two hundred years ago Capt. James Cook sailed up Whitsunday Passage, just a few miles from where we now sit, on a voyage of astronomical observation and discovery that remains an inspiration to us all. Since the prospects of our visiting planets beyond our solar system are slim, we will have to content ourselves with searching for life using remote sensing, not sailing ships. Fortunately, a recently completed NASA study has concluded that a Terrestrial Planet Finder could be launched within a decade to detect terrestrial planets around nearby stars. A visible-light coronagraph using an 8-10 m telescope, or an infrared nulling interferometer, operated on either a \(\sim 40\) m structure or separated spacecraft, could survey over 150 stars, looking for habitable planets and signs of primitive life. Such a mission, complemented by projects (Kepler and Eddington) that will provide statistical information on the frequency of Earth-sized planets in the habitable zone, will determine key terms in the "Drake equation" that describes the number of intelligent civilizations in the Universe.

Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109

¹Transits occur in pairs separated by roughly a century. The next pair will happen on June 8, 2004, and June 5, 2012. See _The Transit of Venus_ (Sellers, 2001).
# Directed flow in Au+Au, Xe+CsI and Ni+Ni collisions and the nuclear equation of state

A. Andronic\({}^{4}\), W. Reisdorf\({}^{4}\), N. Herrmann\({}^{6}\), P. Crochet\({}^{3}\), J.P. Alard\({}^{3}\), V. Barret\({}^{3}\), Z. Basrak\({}^{12}\), N. Bastid\({}^{3}\), G. Berek\({}^{2}\), R. Caplar\({}^{12}\), A. Devismes\({}^{4}\), P. Dupieux\({}^{3}\), M. Dzelalija\({}^{12}\), C. Finck\({}^{4}\), Z. Fodor\({}^{2}\), A. Gobbi\({}^{4}\), Yu. Grishkin\({}^{7}\), O.N. Hartmann\({}^{4}\), K.D. Hildenbrand\({}^{4}\), B. Hong\({}^{9}\), J. Kecskemeti\({}^{2}\), Y.J. Kim\({}^{9}\), M. Kirejczyk\({}^{11}\), P. Koczon\({}^{4}\), M. Korolija\({}^{12}\), R. Kotte\({}^{5}\), T. Kress\({}^{4}\), A. Lebedev\({}^{7}\), Y. Leifels\({}^{4}\), X. Lopez\({}^{3}\), M. Merschmeyer\({}^{6}\), W. Neubert\({}^{5}\), D. Pelte\({}^{6}\), M. Petrovici\({}^{1}\), F. Rami\({}^{10}\), B. de Schauenburg\({}^{10}\), A. Schuttauf\({}^{4}\), Z. Seres\({}^{2}\), B. Sikora\({}^{11}\), K.S. Sim\({}^{9}\), V. Simion\({}^{1}\), K. Siwek-Wilczynska\({}^{11}\), V. Smolyankin\({}^{7}\), M.R. Stockmeier\({}^{6}\), G. Stoicea\({}^{1}\), Z. Tyminski\({}^{4,11}\), P. Wagner\({}^{10}\), K. Wisniewski\({}^{11}\), D. Wohlfarth\({}^{5}\), I. Yushmanov\({}^{8}\), A. Zhilin\({}^{7}\) (FOPI Collaboration)

\({}^{1}\) National Institute for Physics and Nuclear Engineering, Bucharest, Romania
\({}^{2}\) KFKI Research Institute for Particle and Nuclear Physics, Budapest, Hungary
\({}^{3}\) Laboratoire de Physique Corpusculaire, IN2P3/CNRS, and Universite Blaise Pascal, Clermont-Ferrand, France
\({}^{4}\) Gesellschaft fur Schwerionenforschung, Darmstadt, Germany
\({}^{5}\) Forschungszentrum Rossendorf, Dresden, Germany
\({}^{6}\) Physikalisches Institut der Universitat Heidelberg, Heidelberg, Germany
\({}^{7}\) Institute for Theoretical and Experimental Physics, Moscow, Russia
\({}^{8}\) Kurchatov Institute, Moscow, Russia
\({}^{9}\) Korea University, Seoul, South Korea
\({}^{10}\) Institut de Recherches Subatomiques, IN2P3-CNRS, Universite Louis Pasteur, Strasbourg, France
\({}^{11}\) Institute of Experimental Physics, Warsaw University, Poland
\({}^{12}\) Rudjer Boskovic Institute, Zagreb, Croatia

## I Introduction

The study of collective flow in relativistic heavy-ion collisions has been an intense field of research for the past twenty years (see Refs. [1, 2] for recent reviews). The ultimate motivation for the whole endeavour has been the extraction of the equation of state (EoS) of nuclear matter (see Ref. [3] for an early account and Ref. [4] for more recent ones). Moreover, the study of the highly complex (quantum) many-body dynamics of heavy-ion collisions is in itself a challenging task. The (in-plane) directed (or sideward) flow was predicted for semi-central heavy-ion collisions on the basis of fluid dynamical calculations [5] and was observed in experiments soon after [6, 7]. The study of the average in-plane transverse momentum, \(\langle p_{x}\rangle\), as a function of rapidity, \(y\), provides an easy and intuitive way of quantifying the directed flow [8].
For the beam energy range up to a few GeV per nucleon, the experimental [8-24] and theoretical [25-43] study of directed flow has a long history. In more recent analyses, the directed flow is quantified by the first coefficient of the Fourier expansion of the azimuthal distribution, \(v_{1}=\langle\cos(\phi)\rangle\), where \(\phi\) is the angle with respect to the reaction plane. The differential directed flow (DDF), namely \(v_{1}\) as a function of the transverse momentum \(p_{t}\), was studied around the balance energy (\(E_{bal}\), the energy of disappearance of flow [14, 16, 23]) by Li and Sustich [44], who unraveled its interesting patterns. They also pointed out the marked sensitivity of the DDF to both the EoS and the nucleon-nucleon cross section (\(\sigma_{nn}\)). At AGS energies the DDF was studied both experimentally [45] and theoretically [46, 47]. Recently, we have completed the first experimental analysis of the DDF for Au+Au collisions at incident energies from 90 to 400\(A\) MeV [48]. We have found interesting patterns of the differential flow, evolving as a function of incident energy, particle type and rapidity. In particular, the study of high-\(p_{t}\) particles is important because, as proposed in Ref. [49], they are good messengers from the high density state of the collision. The DDF could additionally provide snapshots of the flow development during the time of the collision.

In this paper we present new experimental data on directed flow in collisions of Au+Au, Xe+CsI and Ni+Ni. The complete coverage of the FOPI detector makes possible precision studies of flow, refining our earlier studies done with Phase I data [15, 19, 20, 22]. Following our recent exploration of the energy range of 90 to 400\(A\) MeV for Au+Au [48], we focus here on the centrality (for Au+Au) and on the system size dependence of directed flow. This analysis has been performed for the incident energies of 250 and 400\(A\) MeV, for particles with \(Z\)=1 and \(Z\)=2. To avoid overloading the presentation, only a selection of results is included in the body of the paper; in the Appendix we provide additional figures to complete the data set. After describing the detector, the method of analysis, and the corrections applied to the data, we study the centrality and system dependence of directed flow over the complete forward rapidity range, both in terms of (\(p_{t}\)) integrated and differential observables. All the features of the experimental data are then compared with IQMD model calculations for the incident energies of 90 and 400\(A\) MeV.

## II Set-up and data analysis

The data have been measured with a wide phase-space coverage using the FOPI detector [50] at GSI Darmstadt. The reaction products are identified by charge (\(Z\)) in the forward Plastic Wall (PW) at 1.2\({}^{\circ}<\theta_{lab}<30^{\circ}\) using time-of-flight (ToF) and specific energy loss. In the Central Drift Chamber (CDC), covering 34\({}^{\circ}<\theta_{lab}<145^{\circ}\), the particle identification is based on mass (mass number \(A\)), obtained using the magnetic rigidity and the energy loss. For the PW the \(Z\) resolution is 0.13 charge units for \(Z\)=1 and 0.14 for \(Z\)=2, while for the CDC the mass resolution varies from 0.20 to 0.53 mass units for \(A\)=1 to \(A\)=4.
The contamination of \\(Z\\)=1 in the \\(Z\\)=2 sample varies from 6% to 10% (from Ni+Ni at 250\\(A\\) MeV to Au+Au at 400\\(A\\) MeV) for the PW and is up to 20% for the CDC (where it is the contamination of \\(A\\)=1,2 and 3 in the \\(A\\)=4 sample). The PW measures the velocity of particles via ToF with an average resolution of 150 ps. For the CDC, the relative momentum resolution \\(\\sigma_{p_{t}}/p_{t}\\) varies from 4% for \\(p_{t}<\\) 0.5 GeV/c to about 12% for \\(p_{t}\\)=2 GeV/c. For more details on the detector configuration for this experiment see Ref. [51]. The phase-space coverage of the FOPI detector is presented in Fig. 1 for particles with \\(Z\\)=1 (for PW) and \\(A\\)=1,2,3 (for CDC), measured in semi-central collisions Au+Au at incident energy of 250\\(A\\) MeV. To compare different incident energies and particle species, we use normalized center-of-mass (c.m.) transverse momentum (per nucleon) and rapidity, defined as \\[p_{t}^{(0)}=(p_{t}/A)/(p_{P}^{\\rm c.m.}/A_{P}),\\quad y^{(0)}=(y/y_{P})^{\\rm c.m.},\\] where the subscript \\(P\\) denotes the projectile. For the PW coverage, shadows around \\(\\theta_{lab}=7^{\\circ}\\) and \\(19^{\\circ}\\) are visible, arising from subdetector borders and frames. For the centrality selection we used the charged particle multiplicities, classified into five bins, M1 to M5. The variable \\({\\it{Erat}}=\\sum_{i}E_{\\perp,i}/\\sum_{i}E_{\\parallel,i}\\) (the sums run over the transverse and longitudinal c.m. kinetic energy components of all the products detected in an event) has been additionally used for a better selection of the most central collisions (M5 centrality bin). The geometric impact parameters interval for the centrality bins M3, M4, and M5 for Au+Au system at 400\\(A\\) MeV studied here are presented in Table 1. The impact parameter intervals corresponding to the three investigated systems at the incident energy of Figure 1: FOPI detector acceptance: phase-space distribution for \\(Z\\)=1 particles measured in the centrality bin M4 of the reaction Au+Au at 250\\(A\\) MeV. The intensity contours are spaced logarithmically. The thicker lines mark the geometrical acceptance of different subdetectors. 250\\(A\\) MeV, M4 centrality bin, are presented in Table 2 along with the reduced impact parameters \\(\\langle b_{geo}\\rangle/b_{geo}^{max}\\). \\(b_{geo}^{max}\\) is the maximum geometrical impact parameter, calculated as: \\(b_{geo}^{max}=1.2(A_{P}^{1/3}+A_{T}^{1/3})\\) (in fm). Applying the same recipe for the centrality selection for different systems the reduced impact parameter is similar, as seen in Table 2. The largest data sample was acquired with what we call \"Medium bias\" trigger, which accepts events roughly corresponding to centrality bins M3, M4 and M5 for Au+Au collisions. For Xe and Ni systems this trigger selection amounts to a bias for the M3 centrality. This is the reason why M3 is not included in the present paper for these systems. We have collected \"Minimum bias\" data for all systems, but the statistics is far smaller, thus not allowing the type of analysis done in this paper. ### The reaction plane determination and the correction for its resolution The reaction plane has been reconstructed event-by-event using the transverse momentum method [8]. All charged particles detected in an event have been used, except a window around midrapidity (\\(|y^{(0)}|<0.3\\)) to improve the resolution. The particle-of-interest has been excluded to prevent autocorrelations. 
The correction of the extracted values for the fluctuations of the reconstructed reaction plane has been done using the recipe of Ollitrault [52]. The resolution of the reaction plane azimuth, \(\Delta\phi\), can be extracted by randomly dividing each event into two subevents and calculating for each one the reaction plane orientation, \(\Phi_{1}\) and \(\Phi_{2}\) [8, 52]. From the resolution, quantified as \(\langle\cos(\Phi_{1}-\Phi_{2})\rangle\), the correction factors \(1/\langle\cos\Delta\phi\rangle\) can be calculated [52] (see Ref. [51] for more technical details). For the experimental data, the correction factors for the centrality bins M3, M4, and M5 of the Au+Au system at 400\(A\) MeV are presented in Table 1. The values for the three investigated systems at the incident energy of 250\(A\) MeV, M4 centrality bin, are presented in Table 2.

The accuracy of the reaction plane correction procedure was checked using Isospin Quantum Molecular Dynamics (IQMD) [33] events analyzed in the same way as the data. The results are shown in Fig. 2 for IQMD events at 400\(A\) MeV. The upper panel presents the resolution and the correction factor as a function of the impact parameter, \(b\). Their dependence on centrality reflects mainly the dependence of the strength of the directed flow (see lower panel), but also a finite number effect [8], evident towards peripheral collisions. The lower panel of Fig. 2 presents the centrality dependence of the integrated \(v_{1}\) values for \(Z\)=1 particles (for the forward hemisphere), derived from IQMD events in two cases: i) with respect to the true reaction plane (known in the model) and ii) with respect to the reconstructed reaction plane, using the correction according to Ref. [52] (correction factors of the upper panel of Fig. 2). The agreement between the two cases is perfect, down to the most peripheral collisions, where correction factors up to 2 are necessary. Alternative methods of flow analysis have been proposed recently [53]. However, because, for our energy domain, the flow of nucleons is at its maximum and produced particles are very rare, the impact of these refined methods is expected to be minor.

\begin{table}
\begin{tabular}{l c c c}
Centrality bin & M3 & M4 & M5 \\
\hline
\(\Delta b_{geo}\) (fm) & 6.1-7.6 & 1.9-6.1 & 0-1.9 \\
\(1/\langle\cos\Delta\phi\rangle\) & 1.05 & 1.04 & 1.17 \\
\end{tabular}
\end{table}
Table 1: The geometric impact parameter intervals \(\Delta b_{geo}\) and the correction factors for the reaction plane resolution, \(1/\langle\cos\Delta\phi\rangle\), for three centrality bins of Au+Au collisions at the incident energy of 400\(A\) MeV.

Figure 2: Upper panel: the resolution of the reconstructed reaction plane (squares) and the corresponding correction factors (dots). Lower panel: \(v_{1}\) values for the true (continuous line) and for the reconstructed and corrected (dashed line) reaction plane. IQMD HM events were used for these studies.
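In the Gaussian model behind the recipe of Ref. [52], the resolution is a universal function of a parameter \(\chi\) that scales with the square root of the multiplicity, which allows the full-event correction to be obtained from the measured sub-event correlation. A sketch of that inversion follows (our own illustration; the numerical input is invented to show the order of magnitude).

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import iv

def resolution(chi):
    """<cos(dphi)> of an event plane with resolution parameter chi."""
    a = chi ** 2 / 2.0
    return np.sqrt(np.pi) / 2.0 * chi * np.exp(-a) * (iv(0, a) + iv(1, a))

def correction_factor(cos_sub):
    """Correction 1/<cos(dphi)> from the sub-event correlation
    <cos(Phi1 - Phi2)>: solve resolution(chi)**2 = cos_sub for the
    sub-events, scale chi by sqrt(2) for the full event (twice the
    multiplicity), and invert the resulting resolution."""
    chi_sub = brentq(lambda c: resolution(c) ** 2 - cos_sub, 1e-6, 20.0)
    return 1.0 / resolution(np.sqrt(2.0) * chi_sub)

# e.g. a sub-event correlation of 0.8 yields a correction factor of
# order 1.05, the size quoted in Table 1 for the M3/M4 bins.
print(round(correction_factor(0.8), 3))
```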
### The influence of the FOPI detector on the flow measurements

As seen in Fig. 1, the complete phase-space coverage of the FOPI detector (in its Phase II) is hampered by one empty region, corresponding to polar angles \(\theta_{lab}\) from 30\({}^{\circ}\) to 34\({}^{\circ}\) in the laboratory frame. Additional detector shadows around 7\({}^{\circ}\) and 19\({}^{\circ}\) are present too. We have studied the effect of the FOPI acceptance using the IQMD transport model [33]. The IQMD events were analyzed in the same way as the experimental data. The results are presented in Fig. 3, where the integrated \(v_{1}\) values as a function of rapidity for \(Z\)=1 and \(Z\)=2 particles are shown for the ideal case of total coverage (full lines) and for the case in which the FOPI filter is employed (dashed lines). IQMD SM events for the incident energy of 400\(A\) MeV, M4 centrality bin, were used for this comparison. With the present configuration of the detector, the measured \(v_{1}\) values are very close to the ideal case. We note that, for our published directed flow data [15, 19, 20, 22], although the effect of the FOPI acceptance on the directed flow results was quite small [15, 54], its magnitude was comparable to the difference between a soft and a hard EoS.

An important ingredient in the present analysis is the correction for distortions due to multiple hit losses. As an example, in the case of the PW, despite its good granularity (512 independent modules [50]), average multiple hit probabilities of up to about 9% at 400\(A\) MeV are registered for the multiplicity bin M4. Because of the directed flow, the average number of particles detected over the full PW subdetector is up to 2 times higher in the reaction plane than out of it. As a consequence, the multiple-hit losses are strongly correlated with the directed flow and follow its dependences on incident energy, centrality and system size. These losses lead to an underestimation of the measured directed flow and need to be taken into account. We developed a correction procedure based on the experimental data, exploiting the left-right symmetry of the DDF with respect to midrapidity. The correction acts upon the \(v_{1}\) values (namely only on average values) and is deduced for each system, energy and centrality separately. Due to the flow profile in polar angle, it depends also on the transverse momentum, and it is larger for \(Z\)=2 than for \(Z\)=1 particles. The correction was derived in a window around midrapidity (\(|y^{(0)}|<0.1\)) and propagated to the other rapidity windows for each \(p_{t}^{(0)}\) bin along lines of constant \(\theta_{lab}\), to follow the detector segmentation. It reaches up to 12% for Au+Au at 400\(A\) MeV and is almost negligible at 90\(A\) MeV. At 400\(A\) MeV it is up to 5% for Xe+CsI and up to 2% for Ni+Ni. The procedure was checked and validated using IQMD events passed through a complete GEANT [55] simulation of the detector, at the incident energy of 400\(A\) MeV. The results are presented in Fig. 4 for \(Z\)=1 particles in the M4 centrality bin.

Figure 3: The effect of the FOPI detector filter on \(v_{1}\) values for \(Z\)=1 and \(Z\)=2 particles for the incident energy of 400\(A\) MeV, M4 centrality bin. IQMD SM events were used for this comparison.

Figure 4: The effect of the FOPI detector on \(v_{1}\) values for \(Z\)=1 particles, for the incident energy of 400\(A\) MeV, M4 centrality bin. IQMD events were used for this comparison. Three cases are compared: the simple geometric FOPI filter (full line) and the GEANT simulation without (dotted line) and with (dashed line) the multiple hit correction.
The \\(v_{1}\\) values extracted from IQMD events as inputs into a complete GEANT simulation of the detector (and analyzed in exactly the same way as the data), without (dotted line) and with (dashed line) the multiple hit correction, are compared with the true \\(v_{1}\\) values, obtained from standard IQMD events (actually the same events used for the GEANT simulation) with only a simple geometric FOPI filter (full line). It is obvious that the correction is restoring the \"true\" \\(v_{1}\\) values (IQMD) from the \"measured\" GEANT values. Also, the correction used for the data is quantitatively reproduced by these simulations. Note that, because of larger multiplicities for \\(Z\\)=1 particles from IQMD (see Section IV), the correction is larger in the simulations than in the data. As a result of these studies, all the experimental data (both differential and integrated) have been corrected according to the procedure described above. The only source of systematic error on our measured \\(v_{1}\\) values could be the correction for multiple hit losses outlined above. However, as we have demonstrated based on complete GEANT simulations, this correction is well understood. As a result, the systematic error depends on incident energy, centrality, particle type and \\(p_{t}^{(0)}\\). It is below 5% on the differential \\(v_{1}\\). There are exceptions for some points, for which the systematic error arises from (rapidity-dependent) regions in \\(p_{t}^{(0)}\\)in which detector shadows are influencing the data. For those particular points the systematic error is already included in the plots. For the integrated \\(v_{1}\\) values the error is slightly smaller, up to 4%, including the influence of the uncovered region of \\(\\theta_{lab}=30^{\\circ}-34^{\\circ}\\). These values do not include the effect of particle misidentification. ## III General features of the data ### Centrality dependence By varying the centrality of the collision one aims at controlling both the size of the participant fireball (and consequently the magnitude of the achieved compression and subsequent expansion) and the size of the spectator fragment region. While semi-central collisions could provide information preferentially on (density dependent) EoS, more peripheral reactions can help in pinning down the MDI [4]. Figure 5 shows the centrality dependence of the directed flow for Au+Au at \\(400A\\) MeV for \\(Z\\)=2 particles. Plotted as a function of rapidity are the (\\(p_{t}\\)) integrated \\(v_{1}\\) values (upper panel) and those integrated values weighted by the average transverse momentum, \\(\\langle p_{t}^{(0)}\\rangle\\), for the respective rapidity bin. This weighted \\(v_{1}\\) quantity is proportional to the average in-plane transverse momentum \\(\\langle p_{x}\\rangle\\) (\\(v_{1}=\\langle p_{x}/p_{t}\\rangle\\)) and was chosen due to its convenience in applying the corrections discussed in Section II.2. First, one can notice that the known behaviour of the slope at midrapidity, namely the maximum for intermediate impact parameters (M4 bin), is evident only for the weighted \\(v_{1}\\) values (lower panel of Fig. 5). The asymmetries (\\(v_{1}\\) values, upper panel) are the same around midrapidity for M3 and M4 centrality bins. Second, in both observables, the most significant dependence on centrality is taking place in the spectator region (roughly \\(y^{(0)}>0.5\\)) and it is more pronounced for the weighted \\(v_{1}\\) values. 
Both the asymmetries and the in-plane transverse momentum reflect the influence of the participant and of the spectator size controlled by the variation of the centrality. These distributions are inherently a result of the superposition of collective and thermal contributions [22]. For a given flow magnitude, higher temperatures (presumably achieved for more central collisions) would translate into a smaller effective flow. In Fig. 6 we show the centrality dependence of the differential flow for \\(Z\\)=2 particles for Au+Au collisions at \\(400A\\) MeV. Three centrality bins (different panels) are compared for three rapidity windows (different symbols). The lines are polynomial fits to guide the eye. The rapidity dependence of the DDF in different centrality bins follows the rapidity dependence of the integrated flow seen in Fig. 5: the most pronounced dependence is registered for the most central collisions, M5. As we have already discussed in [48], the shape of the DDF (a gradual development towards a limiting value, followed by a decrease at high \\(p_{t}^{(0)}\\)) could be a result of the collision dynamics. Part of the high-\\(p_{t}\\) particles could have been emitted at a pre-equilibrium stage, therefore not reaching the maximum compression stage of the reaction. However, this possibility seems to be ruled out by the observation that the high-\\(p_{t}\\) particles originate preferentially from high-density regions of the collision [49]. The arrows in Fig. 6 mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding centrality bin, according to the symbols. For this incident energy of 400\\(A\\) MeV the value of the projectile momentum in the c.m. system is 433 MeV/c per nucleon. Higher values of average transverse momenta are seen for more central collisions as a result of a stronger expansion from a bigger and more compressed source. The dependence of \\(\\langle p_{t}^{(0)}\\rangle\\) on rapidity is different for the most central collisions (M5 bin) compared to semi-central ones, for which smaller transverse momenta are seen towards the projectile rapidity as a result of the influence of the spectator matter. Data similar to those presented in Figs. 5 and 6 are shown in the Appendix for \\(Z\\)=1 particles in Au+Au at 400\\(A\\) MeV and for \\(Z\\)=1 and \\(Z\\)=2 particles at 250\\(A\\) MeV (Fig. 19 to Fig. 24).

Figure 5: Centrality dependence of the integrated directed flow as a function of rapidity for Au+Au at \\(400A\\) MeV for \\(Z\\)=2 particles. The lines join the symbols to guide the eye.

### System size dependence

As for the centrality variation, by varying the system size one aims to control the size of both the participant and the spectator. However, the question of whether transparency plays a role for lighter systems needs to be addressed simultaneously, as transparency results in a decrease of the achieved compression. In addition, the surface (or surface-to-volume ratio) can play an important role. It was suggested that the system size dependence of the directed flow could give insights about \\(\\sigma_{nn}\\) [32]. Figure 7 presents the phase space distribution d\\({}^{2}\\)N/dp\\({}_{t}^{(0)}\\)dy\\({}^{(0)}\\) of \\(Z\\)=2 particles for the three systems at the incident energy of 250\\(A\\) MeV, centrality bin M4.

Figure 6: Differential flow for three centrality bins, in three rapidity windows, for \\(Z\\)=2 particles for collisions Au+Au at 400\\(A\\) MeV. The lines are polynomial fits to guide the eye. The arrows mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding centrality bin.
Figure 7: Phase space distributions d\\({}^{2}\\)N/dp\\({}_{t}^{(0)}\\)dy\\({}^{(0)}\\) of \\(Z\\)=2 particles for three systems at the incident energy of 250\\(A\\) MeV, M4 centrality.

From the Au+Au to the Ni+Ni system, the phase space population becomes more and more focused, both in transverse momentum and rapidity. This is an indication of the decrease of stopping for lighter systems. The maximum density reached in the fireball depends on the system size [34], presumably as an effect of different stopping. On the other hand, due to the sizes of both fireball and spectator, the separation between the two regions is clearer (smaller surface contact) for lighter systems. Figure 8 shows the system dependence of the directed flow for \\(Z\\)=2 particles in the M4 centrality bin of Au+Au, Xe+CsI and Ni+Ni collisions at 250\\(A\\) MeV. Plotted are the integrated \\(v_{1}\\) values as a function of rapidity for three cases: i) as such (upper panel); ii) scaled with the term \\((A_{P}^{1/3}+A_{T}^{1/3})\\), which is proportional to the sum of the radii of projectile and target (middle panel); iii) scaled as \\(v_{1}^{s}=v_{1}\\langle p_{t}^{(0)}\\rangle/(A_{P}^{1/3}+A_{T}^{1/3})\\), where \\(\\langle p_{t}^{(0)}\\rangle\\) is the average normalized transverse momentum for each rapidity bin (lower panel). It is evident that neither the raw \\(v_{1}\\) values nor those scaled by the \\((A_{P}^{1/3}+A_{T}^{1/3})\\) term show any system size scaling, while the in-plane average transverse momenta, proportional to \\(v_{1}^{s}\\), do show system size scaling (lower panel of Fig. 8) for the participant region. Somewhat expectedly, a deviation is present for the spectator part. We have observed a very similar feature for \\(Z\\)=3 particles, while for \\(Z\\)=1 the scaling holds over the whole forward rapidity domain. The \\((A_{P}^{1/3}+A_{T}^{1/3})\\) scaling has been proposed by Lang et al. [34], who, within a BUU model, found a linear dependence of the transverse pressure (leading to transverse momentum transfer) on the reaction (passage) time. Westfall et al. [14] have related the \\(A^{-1/3}\\) dependence of \\(E_{bal}\\) to a competition between the attractive mean field (associated with the surface, so scaling with \\(A^{2/3}\\)) and the repulsive nucleon-nucleon interaction (scaling as \\(A\\)). This competition between the two contributions may be the origin of the quoted scaling of the transverse pressure with \\((A_{P}^{1/3}+A_{T}^{1/3})\\). Earlier studies devoted to the slope of \\(\\langle p_{x}\\rangle-y\\) distributions (which translates into a flow angle) have experimentally confirmed such a scaling [18, 22]. We note that in ideal hydrodynamics the flow angle is a purely geometric quantity and does not depend on the system size [56].

Figure 8: Integrated directed flow as a function of rapidity for \\(Z\\)=2 particles in the M4 centrality bin of collisions Au+Au, Xe+CsI and Ni+Ni at 250\\(A\\) MeV. Upper panel: \\(v_{1}\\) values; middle panel: \\(v_{1}\\) scaled by the term \\((A_{P}^{1/3}+A_{T}^{1/3})\\); lower panel: scaled values, \\(v_{1}^{s}\\) (see text). The lines join the symbols to guide the eye.

Figure 9: Differential flow for three systems at 250\\(A\\) MeV, M4 centrality bin, for \\(Z\\)=2 particles in three windows of rapidity. The lines are polynomial fits to guide the eye. The arrows mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding system.
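For orientation, the size of the \\((A_{P}^{1/3}+A_{T}^{1/3})\\) scaling term for the three systems can be evaluated directly; the mass numbers below are standard values (for the composite CsI target an average mass number is assumed), so treat the output as illustrative:

```python
# Scaling term (A_P^(1/3) + A_T^(1/3)) for the three systems; the mass
# numbers are standard, with an assumed average A for the composite CsI
# target (Cs: 133, I: 127).
systems = {
    "Au+Au":  (197, 197),
    "Xe+CsI": (131, (133 + 127) / 2),
    "Ni+Ni":  (58, 58),
}

for name, (a_p, a_t) in systems.items():
    term = a_p ** (1 / 3) + a_t ** (1 / 3)
    print(f"{name:7s}: A_P^(1/3) + A_T^(1/3) = {term:5.2f}")

# Au+Au ~ 11.6, Xe+CsI ~ 10.1, Ni+Ni ~ 7.7: a ~50% spread, which is why
# the raw v1 and the scaled v1^s = v1 <pt^(0)> / (A_P^(1/3)+A_T^(1/3))
# discriminate between "no scaling" and "scaling" in Fig. 8.
```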
The system size dependences presented above may be an interesting effect of the nuclear forces and/or a consequence of the non-equilibrium nature of heavy-ion collisions. In Fig. 9 we present the differential flow for the three systems at 250\\(A\\) MeV, M4 centrality bin, for \\(Z\\)=2 particles in three windows of rapidity (the three panels). The arrows mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding system, according to the symbols. For this incident energy of 250\\(A\\) MeV the value of the projectile momentum in the c.m. system is 342 MeV/c per nucleon. As seen already in Fig. 7, for all rapidity windows the \\(\\langle p_{t}^{(0)}\\rangle\\) depend on the system size (again breaking the scaling expected from hydrodynamics), suggesting an increase of the compression and expansion with the system size. Data similar to those presented in Figs. 8 and 9 are shown in the Appendix for \\(Z\\)=1 particles at 250\\(A\\) MeV and for \\(Z\\)=1 and \\(Z\\)=2 particles at 400\\(A\\) MeV (Fig. 25 to Fig. 30).

## IV Model Comparison

The IQMD transport model [33, 41] is widely used for interpreting the data in our energy domain [33, 16, 20, 39]. We use two different parametrizations of the EoS, a hard EoS (compressibility \\(K\\)= 380 MeV) and a soft EoS (\\(K\\)= 200 MeV), both with MDI, labeled HM and SM, respectively, and without MDI, labeled H and S, respectively. We use the free nucleon-nucleon cross section, \\(\\sigma_{nn}^{free}\\), for all cases, but for the energy of 90\\(A\\) MeV we additionally consider the case \\(\\sigma_{nn}=0.8\\sigma_{nn}^{free}\\). The events produced by the model are filtered by the experimental filter and analyzed in the same way as the experimental data. This comprises the same recipe for the centrality selection and the same method of reaction plane reconstruction and correction. However, for the energy of 90\\(A\\) MeV, due to a weak flow signal (see below), we prefer to use the true reaction plane for the model calculations. Concerning the reaction plane resolution at 400\\(A\\) MeV, the model is very similar to the data: for instance, for the M4 centrality bin, for IQMD SM the correction factors are 1.03, 1.05 and 1.24 for the Au, Xe and Ni systems, respectively.

### What to compare

A known problem of the IQMD model (and of QMD models in general) is that of much lower yields of composite fragments compared to data [57]. For instance, for Au+Au at an incident energy of 400\\(A\\) MeV, M4 centrality, the integrated \\(Z\\)=2 yields relative to \\(Z\\)=1 are 1/4.7 for the experimental data, while IQMD predicts 1/25 for HM and 1/15 for SM (these ratios do not depend on whether MDI are included or not). As cluster formation and flow are intimately related, one cannot simply neglect this dramatic discrepancy. It is difficult to assess whether this strong dependence of fragment production on the EoS parametrization is a genuine physical effect or is particular to the IQMD model. We note that most of the present models used in heavy-ion collisions at our energies involve a rather simple phase space coalescence mechanism to produce composite particles [43, 49] (IQMD uses the coalescence in coordinate space only [33]). Efforts to identify the fragments early in the collision might help clarify this aspect [58]. Promising theoretical candidates for accomplishing the task of realistic fragment formation could be AMD-type models [37].
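As a cross-check on the hard/soft labels introduced above, the compressibility \\(K\\) can be recovered from a Skyrme-type parametrization of the energy per nucleon; the coefficients below are the commonly quoted QMD-style values and are an assumption here, not numbers taken from this paper:

```python
# Sketch: compressibility K = 9 * d^2(E/A)/du^2 at u = rho/rho0 = 1 for a
# Skyrme-type parametrization E/A(u) = E_kin(u) + (alpha/2)*u
# + beta/(gamma+1) * u**gamma, with a free Fermi-gas kinetic term
# E_kin(u) = (3/5) * E_F * u**(2/3). Coefficients (MeV) are the commonly
# quoted QMD values -- an assumption, not numbers from this paper.
E_F = 37.0  # Fermi energy at normal nuclear density, MeV

def compressibility(alpha, beta, gamma):
    # alpha multiplies a term linear in u, so it drops out of d^2/du^2
    k_int = 9.0 * beta * gamma * (gamma - 1.0) / (gamma + 1.0)
    k_kin = 9.0 * (3.0 / 5.0) * E_F * (2.0 / 3.0) * (2.0 / 3.0 - 1.0)
    return k_int + k_kin

print("hard EoS: K ~", round(compressibility(-124.0, 70.5, 2.0)), "MeV")       # ~380
print("soft EoS: K ~", round(compressibility(-356.0, 303.0, 7.0 / 6.0)), "MeV")  # ~200
```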
To partially overcome the problem of fragment production in the models, one can perform the comparison taking into account all charged particles weighted by their charge \\(Z\\) (so-called proton-likes) [15, 20]. However, this type of comparison could be biased, as the neutrons bound in composite fragments may contribute differently in the calculations compared to the data. We have investigated various possibilities by using IQMD events. The results are presented in Fig. 10, where we compare integrated \\(v_{1}\\) values as a function of rapidity for protons, neutrons, all particles weighted by mass (\\(A\\)) and all particles weighted by charge (\\(Z\\)) for Au+Au at 400\\(A\\) MeV, M4 centrality bin.

Figure 10: Comparison of integrated \\(v_{1}\\) values as a function of rapidity for protons, neutrons, all particles weighted by mass (\\(A\\)) and all particles weighted by charge (\\(Z\\)) for IQMD HM events, for Au+Au at 400\\(A\\) MeV, M4 centrality.

In Fig. 11 we show the same comparison in the case of differential flow. The neutrons exhibit the same flow as the protons, both for integrated and for differential values. As a consequence, within the model, in both cases the charge-weighted values are identical to the mass-weighted ones. However, as fragments heavier than \\(Z\\)=2 are extremely few in the model, this result may be somewhat biased. In the following we compare data and model both for selected particle types and for proton-likes.

### Integrated values

We start our comparison of the data with the IQMD model at the incident energy of 90\\(A\\) MeV, for the M4 centrality bin. In Fig. 12 we show the rapidity dependence of the integrated \\(v_{1}\\) values for particles with \\(Z\\)=1 (upper panel) and \\(Z\\)=2 (lower panel). For both species the measured values are compared to the IQMD calculations for the HM and SM parametrizations. For the HM case an additional set of calculations has been performed using \\(\\sigma_{nn}=0.8\\sigma_{nn}^{free}\\) (labeled HM.8 in Fig. 12). For the HM case the statistical errors are plotted; for the other cases the errors are comparable. For the data the errors are in most cases smaller than the size of the points. The calculated values of the directed flow depend both on the parametrized EoS and, more pronouncedly, on \\(\\sigma_{nn}\\). This dependence is apparently of different magnitude for \\(Z\\)=1 and \\(Z\\)=2 particles. For the model calculations there is a coexistence of attractive (negative \\(v_{1}\\) values) and repulsive (positive \\(v_{1}\\)) flow, manifested as a function of rapidity (we shall call this dual flow). This coexistence is different for the two particle species. We noticed that the above characteristics of the model calculations depend on centrality as well, both the magnitude of the dual flow and the particle dependence being enhanced for more peripheral collisions. The model features are clearly not supported by the data, which show a monotonic repulsive flow over the whole rapidity domain, both for \\(Z\\)=1 and \\(Z\\)=2 particles, as seen in Fig. 12. For the experimental data, for the centrality bin M4 studied here, the reaction plane correction factor is 1.54. A two-component flow was observed earlier in QMD calculations of semi-peripheral Ca+Ca collisions at 350\\(A\\) MeV [39]. That study pointed out its high sensitivity to the MDI.
However, unless the discrepancy between the calculations and the measured data is resolved, any conclusion on the sensitivity of the directed flow to the EoS, \\(\\sigma_{nn}\\) or MDI is meaningless for energies around \\(E_{bal}\\). It is not yet clear whether the particle dependence of the dual flow is an artifact of the treatment of composite particles in the model. We note that measurements of \\(E_{bal}\\) for different particle types [14, 23] have not revealed, so far, any dual flow. Calculations with a BUU model [44] found a dual flow only in a (\\(p_{t}\\)) differential way, but otherwise a monotonic behavior of the \\(\\langle p_{x}\\rangle-y\\) distributions. Recent experimental investigations of flow in light systems at \\(E_{bal}\\) pointed out interesting aspects of the flow of light isotopes and heavy fragments, but again the balance energy was found not to depend on particle type [59].

Figure 11: As Fig. 10, but for differential flow, in the rapidity window \\(y^{(0)}\\)=0.7-0.9. The statistical errors are plotted for protons.

Figure 12: Integrated \\(v_{1}\\) values as a function of rapidity for \\(Z\\)=1 and \\(Z\\)=2 particles, for the incident energy of 90\\(A\\) MeV. The data points (dots) are compared to IQMD calculations for two EoS parametrizations (lines). The line labeled HM.8 corresponds to the HM case, using \\(\\sigma_{nn}=0.8\\sigma_{nn}^{free}\\). For the HM case the statistical errors of the model are plotted.

In Fig. 13 and Fig. 14 we show the comparison of data and model calculations for \\(Z\\)=1 and \\(Z\\)=2 particles, respectively. The three studied systems are considered, for the centrality bin M4. The integrated \\(v_{1}\\) values show sensitivity to both the EoS and the MDI. As expected, the MDI influence the flow essentially in the vicinity of the projectile spectator (\\(y^{(0)}>0.8\\)). This effect is more pronounced the lighter the system. All these sensitivities are enhanced for \\(Z\\)=2 particles. For the Au+Au and Xe+CsI systems, the SM parametrization reproduces the data very well, for both \\(Z\\)=1 and \\(Z\\)=2 particles. This may be the result of a similar balance of thermal and collective contributions in the model compared to the data. In fact, the phase space populations of \\(Z\\)=1 and \\(Z\\)=2 particles are similar for model and data. In the case of the Ni+Ni system the dependence on the EoS is already negligible, but the model obviously underestimates the flow. One parameter of the model, the Gaussian width \\(L\\), which is the phase space extension of the wave packet of the particle (and acts as an effective interaction range), has been found to influence the directed flow considerably [41]. A decrease of \\(L\\) for lighter systems has been advocated with the argument of maximum stability of the nucleonic density profiles [41]. As no clear prescription exists for choosing the value of \\(L\\), we prefer to use a constant value of \\(L\\)=8.66 fm\\({}^{2}\\) throughout the present work. A smaller \\(L\\) would lead to an increase of the directed flow [41] and might cure the discrepancy that we observe for the Ni+Ni system, but it would adversely affect the comparison for the Xe+CsI system. These effects may reflect the importance played (via the interaction range) by the surface. A complete understanding of this aspect is a necessary step towards establishing the bulk properties of the nuclear matter created in heavy-ion collisions.
We note that comparisons of the integrated directed flow for the Au+Au system using QMD-type models have mostly favored a soft EoS [15, 54], but a hard EoS was also found to explain another set of experimental data [16].

Figure 13: Integrated \\(v_{1}\\) values as a function of rapidity, for \\(Z\\)=1 particles, for three systems at 400\\(A\\) MeV, centrality bin M4. The data points (dots) are compared to IQMD calculations (lines).

Figure 14: As Fig. 13, but for \\(Z\\)=2 particles. For the HM case the statistical errors of the model are plotted.

In Fig. 15 we present the comparison of the measured integrated \\(v_{1}\\) values to IQMD calculations for Au+Au at the incident energy of 400\\(A\\) MeV, taking into account all charged particles weighted by charge \\(Z\\). The centrality bins M4 and M3 are studied. In this case the sensitivity to the EoS is reduced, as a consequence of a balance between the magnitude of flow and the yield of composite particles in the model: the hard EoS produces more flow, but fewer particles with \\(Z>\\)1, while for the soft EoS the opposite holds. This behavior strongly underlines once more the necessity that theoretical models appropriately describe the yields of composite particles. The conclusion on the EoS is this time less evident, but the parametrizations without MDI are ruled out once again, on the basis of their departure from the data in the region of spectator rapidity. As expected, this effect is more pronounced for the more peripheral centrality bin M3. Despite the good agreement seen at the beam energy of 400\\(A\\) MeV, we found that in the IQMD model the decrease of flow towards lower incident energies is much faster than for the data, leading to a larger theoretical \\(E_{bal}\\) compared to the data (and to the behavior seen in Fig. 12). This may be a result of deficiencies in incorporating the MDI and in the treatment of fragment production. The Pauli blocking may play a role too. In addition, it has been pointed out that the shape of the flow excitation function is drastically influenced by the method of imposing constraints on the Fermi momenta [41]. The features of the model calculations presented above for 400\\(A\\) MeV show the danger of deriving EoS-related conclusions from rapidity-integrated flow values (like \\(p_{x}^{dir}\\)) unless a detailed description of the data is first achieved in a differential way. As realized early on [27], a soft EoS with MDI produces a similar magnitude of \\(p_{x}^{dir}\\) to that of a hard EoS without MDI.

### Differential flow

We restrict our model comparison of the differential flow to the incident energy of 400\\(A\\) MeV and the M4 centrality bin. Data for all three systems investigated so far are compared to the model calculations. In Fig. 16 the measured differential directed flow for Au+Au collisions at an incident energy of 400\\(A\\) MeV, M4 centrality, is compared to the IQMD results for all four parametrizations used above. Particles with \\(Z\\)=1 (upper row) and \\(Z\\)=2 (lower row) for two windows in rapidity are used for the comparison. For both particle species there is a clear sensitivity of the DDF to the EoS. As for the case of the integral flow, the SM parametrization reproduces the experimental data quite well.

Figure 16: Differential flow for particles with \\(Z\\)=1 (upper row) and \\(Z\\)=2 (lower row) for Au+Au collisions at incident energy of 400\\(A\\) MeV, M4 centrality, for two windows in rapidity (columns). Experimental data are represented by dots and the model calculations by the lines. For the HM case the statistical errors of the model are plotted.
Figure 15: Integrated \\(v_{1}\\) values as a function of rapidity for all particles weighted by \\(Z\\), for the centrality bins M4 (upper panel) and M3 (lower panel), for the incident energy of 400\\(A\\) MeV. The data points (dots) are compared to IQMD calculations (lines).

Apparently the model calculations deviate from the data at high \\(p_{t}\\) in the case of \\(Z\\)=1 particles, while the corresponding \\(Z\\)=2 particles are well explained. This deviation is more pronounced for larger rapidities. We have found earlier [60] that a BUU model does not explain the DDF of protons in the spectator region at higher energies. The shapes of the \\(Z\\)=1 DDF distributions in the IQMD model are strikingly similar to those of \\(Z\\)=2, while for the data there are subtle differences between the two particle species (at this energy of 400\\(A\\) MeV as well as down to 90\\(A\\) MeV [48]). These model features may result from the fact that the nucleons (dominating the \\(Z\\)=1 sample) in the models are all "primordial", which does not account for the sequential decays of heavier fragments. The dynamics of the expansion and fragment formation may be responsible for the differences, too. In Table 3 we compare the experimental values of the average normalized transverse momentum with the values from IQMD, for the HM and SM cases. Particles with \\(Z\\)=1 and \\(Z\\)=2 for the two windows in rapidity studied in Fig. 16 are compared. The data values have a systematic error represented by the number in parentheses as the error on the last digit. The model reproduces reasonably well the average transverse momenta for \\(Z\\)=1 particles, while it underestimates them for \\(Z\\)=2, for both windows of rapidity. We mention that, recently, our experimental differential directed flow in Au+Au [48] was nicely reproduced by a BUU model which includes an improved Dirac-Brueckner formalism [49]. In this case, for the densities expected at 400\\(A\\) MeV, the EoS is soft, which is in agreement with our results. In Fig. 17 we show the measured DDF for the Xe+CsI and Ni+Ni systems at 400\\(A\\) MeV, M4 centrality, in comparison to IQMD results, for \\(Z\\)=1 particles in two windows of rapidity. In the case of the Xe+CsI system the model calculations are at the same level of agreement with the data as in the case of Au+Au: the SM parametrization reproduces the data, with clear deviations at high momenta. For the Ni+Ni case even the HM parametrization underpredicts the measured data. Most notably, as is obvious particularly for Ni+Ni, the MDI have effects predominantly at low \\(p_{t}\\), contrary to earlier BUU predictions (performed for the asymmetric system Ar+Pb) [36]. In Fig. 18 we show the model comparison of the differential flow for particles weighted by \\(Z\\) for Au+Au collisions at an incident energy of 400\\(A\\) MeV, M4 centrality, for two windows in rapidity. As in the case of the integrated values, as a result of the different relative contribution of particles heavier than \\(Z\\)=1, the sensitivity to the EoS is reduced for this type of comparison.

## V Summary and Conclusions

We have presented experimental results on directed flow in Au+Au, Xe+CsI and Ni+Ni collisions at incident energies from 90 to 400\\(A\\) MeV. General features of the directed flow have been investigated using the experimental data, particularly the centrality and the system dependence. We have studied the rapidity dependence of the first Fourier coefficient, \\(v_{1}\\), integrating over the whole transverse momentum range.
A special emphasis has been put on the differential directed flow, namely the \\(p_{t}\\) dependence of \\(v_{1}\\). While for the integrated values we presented a new way of looking at old (and generally known) dependences, the DDF results are reported for the first time for our energy domain, both for the centrality and for the system size dependence. We have devoted special care to the corrections of the experimental data. The influence of the finite granularity of the detector has been studied and corrected for. The high accuracy of the final results is based as well on the good reaction plane resolution achieved with the full coverage of the FOPI detector.

Figure 17: Comparison of data and model differential flow for \\(Z\\)=1 particles, for the incident energy of 400\\(A\\) MeV, M4 centrality bin, for the systems Xe+CsI (upper row) and Ni+Ni (lower row) for two windows in rapidity (columns).

\\begin{table} \\begin{tabular}{l l l l l} Rapidity & Particle & Data & IQMD HM & IQMD SM \\\\ \\hline \\(y^{(0)}\\)=0.5-0.7 & \\(Z\\)=1 & 0.60(3) & 0.62 & 0.62 \\\\ & \\(Z\\)=2 & 0.48(2) & 0.42 & 0.44 \\\\ \\hline \\(y^{(0)}\\)=0.7-0.9 & \\(Z\\)=1 & 0.58(3) & 0.55 & 0.56 \\\\ & \\(Z\\)=2 & 0.44(2) & 0.37 & 0.37 \\\\ \\end{tabular} \\end{table}

Table 3: Average normalized transverse momentum \\(\\langle p_{t}^{(0)}\\rangle\\) for particles with \\(Z\\)=1 and \\(Z\\)=2 in Au+Au collisions at 400\\(A\\) MeV, M4 centrality bin. Data and model values are compared for two rapidity windows. For the data, the number in parentheses represents the error on the last digit.

We have compared the experimental data with IQMD transport model calculations, for both integral and differential \\(v_{1}\\) values. This comparison, performed for all three studied systems, shows a clear sensitivity of the directed flow to the EoS parametrization in the model, especially in the case of the particle-selected comparison. In this case, for both integrated and differential directed flow at the incident energy of \\(400A\\) MeV, we conclude that a soft EoS with MDI is the only parametrization in the model that reproduces the data for the Au and Xe systems. A clear discrepancy is seen for the Ni system, which needs to be addressed separately. It may reflect the increasing importance played by the nuclear surface for lighter systems. We consider our present results as a case study of the sensitivities involved in determining the EoS and MDI from directed flow comparisons. We have emphasized the necessity of the present kind of differential comparison prior to more global quantities. We have shown that the combination of rapidity and transverse momentum analysis of (differential) directed flow can impose constraints on the model. We have also pointed out some difficulties of the model in reproducing the measured data concerning: i) flow at low energy (we considered here \\(90A\\) MeV), ii) flow as a function of system size, and iii) fragment production. As a consequence, none of the IQMD parametrizations studied here is able to consistently explain the whole set of experimental data. The importance of spectators acting as clocks for the expansion is one particular argument to study collective flow in semi-central collisions at energies from a few hundred MeV to a few GeV per nucleon [4]. We have demonstrated that high precision experimental data allow us to study the many facets of heavy-ion collisions. Other observables, like \\(v_{2}\\), should receive comparable (and simultaneous) attention too.
Whether the nuclear equation of state can be extracted from such studies depends ultimately on the ability of any type of microscopic transport model to reproduce the measured features.

## Acknowledgment

This work has been supported in part by the German BMBF under contracts 06HD953, RUM-005-95/RUM-99/010, POL-119-95, UNG-021-96 and RUS-676-98 and by the Deutsche Forschungsgemeinschaft (DFG) under projects 436 RUM-113/10/0, 436 RUS-113/143/2 and 446 KOR-113/76/0. Support has also been received from the Polish State Committee of Scientific Research, KBN, from the Hungarian OTKA under grant T029379, from the Korea Research Foundation under grant No. KRF-2002-015-CS0009, from the agreement between GSI and CEA/IN2P3 and from the PROCOPE Program of DAAD.

Figure 18: Differential flow for particles weighted by \\(Z\\) for Au+Au collisions at incident energy of \\(400A\\) MeV, M4 centrality, for two windows in rapidity. Experimental data are represented by dots. The model calculations are the lines.

Figure 21: Centrality dependence of the integrated directed flow as a function of rapidity for Au+Au at 400\\(A\\) MeV for \\(Z\\)=1 particles.

Figure 22: Differential flow for three centrality bins, in three rapidity windows, for \\(Z\\)=1 particles for collisions Au+Au at 250\\(A\\) MeV. The lines are polynomial fits to guide the eye. The arrows mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding centrality bin.

Figure 23: Differential flow for three centrality bins, in three rapidity windows, for \\(Z\\)=2 particles for collisions Au+Au at 250\\(A\\) MeV.

Figure 24: Differential flow for three centrality bins, in three rapidity windows, for \\(Z\\)=1 particles for collisions Au+Au at 400\\(A\\) MeV.

Figure 25: Integrated directed flow as a function of rapidity for \\(Z\\)=1 particles in the M4 centrality bin of collisions Au+Au, Xe+CsI and Ni+Ni at 250\\(A\\) MeV. Upper panel: \\(v_{1}\\) values; middle panel: \\(v_{1}\\) scaled by the term \\((A_{P}^{1/3}+A_{T}^{1/3})\\); lower panel: scaled values, \\(v_{1}^{s}=v_{1}\\langle p_{t}^{(0)}\\rangle/(A_{P}^{1/3}+A_{T}^{1/3})\\).

Figure 26: Integrated directed flow as a function of rapidity for \\(Z\\)=1 particles in the M4 centrality bin of collisions Au+Au, Xe+CsI and Ni+Ni at 400\\(A\\) MeV.

Figure 27: Integrated directed flow as a function of rapidity for \\(Z\\)=2 particles in the M4 centrality bin of collisions Au+Au, Xe+CsI and Ni+Ni at 400\\(A\\) MeV.

Figure 28: Differential flow for three systems at 250\\(A\\) MeV, M4 centrality bin, for \\(Z\\)=1 particles in three windows of rapidity. The lines are polynomial fits to guide the eye. The arrows mark the values of the average \\(p_{t}^{(0)}\\) for the corresponding system.

Figure 29: Differential flow for three systems at 400\\(A\\) MeV, M4 centrality bin, for \\(Z\\)=1 particles in three windows of rapidity.

Figure 30: Differential flow for three systems at 400\\(A\\) MeV, M4 centrality bin, for \\(Z\\)=2 particles in three windows of rapidity.

## References

* [1] W. Reisdorf and H.G. Ritter, Ann. Rev. Nucl. Part. Sc. **47**, 663 (1997).
* [2] N. Herrmann, J.P. Wessels, and T. Wienold, Ann. Rev. Nucl. Part. Sc. **49**, 581 (1999).
* [3] H. Stocker and W. Greiner, Phys. Rep. **137**, 277 (1986).
* [4] P. Danielewicz, Nucl. Phys. A **685**, 368c (2001) [nucl-th/0112006]; nucl-th/0201032.
* [5] H. Stocker, J.A. Maruhn, and W. Greiner, Phys. Rev. Lett. **44**, 725 (1980).
* [6] H.A. Gustafsson et al., Phys. Rev. Lett. **52**, 1590 (1984).
* [7] R.E. Renfordt et al., Phys. Rev. Lett. **53**, 763 (1984).
* [8] P. Danielewicz and G. Odyniec, Phys. Lett. B **157**, 146 (1985).
* [9] K.G.R. Doss et al., Phys. Rev. Lett. **57**, 302 (1986).
* [10] K.G.R. Doss et al., Phys. Rev. Lett. **59**, 2720 (1987).
* [11] D. Keane et al., Phys. Rev. C **37**, 1447 (1988).
* [12] C. Ogilvie et al., Phys. Rev. C **40**, 2592 (1989).
* [13] W.M. Zhang et al., Phys. Rev. C **42**, R491 (1990).
* [14] G.D. Westfall et al., Phys. Rev. Lett. **71**, 1986 (1993).
* [15] V. Ramillien et al., Nucl. Phys. A **587**, 802 (1995).
* [16] M.D. Partlan et al., Phys. Rev. Lett. **75**, 2100 (1995).
* [17] M.J. Huang et al., Phys. Rev. Lett. **77**, 3739 (1996).
* [18] J. Chance et al., Phys. Rev. Lett. **78**, 2535 (1997).
* [19] P. Crochet et al., Nucl. Phys. A **624**, 725 (1997).
* [20] P. Crochet et al., Nucl. Phys. A **627**, 522 (1997).
* [21] R. Pak et al., Phys. Rev. Lett. **78**, 1022 (1997); R. Pak et al., Phys. Rev. Lett. **78**, 1026 (1997).
* [22] F. Rami et al., Nucl. Phys. A **646**, 367 (1999).
* [23] D.J. Magestro et al., Phys. Rev. C **61**, 021602(R) (2000).
* [24] H. Liu et al., Phys. Rev. Lett. **84**, 5488 (2000).
* [25] J. Kapusta and D. Strottman, Phys. Lett. B **106**, 33 (1981).
* [26] J.J. Molitoris, J.B. Hoffer, H. Kruse, and H. Stocker, Phys. Rev. Lett. **53**, 899 (1984).
* [27] J. Aichelin, A. Rosenhauer, G. Peilert, H. Stocker, and W. Greiner, Phys. Rev. Lett. **58**, 1926 (1987).
* [28] A. Bonasera and L.P. Csernai, Phys. Rev. Lett. **59**, 630 (1987).
* [29] G. Peilert, H. Stocker, W. Greiner, A. Rosenhauer, A. Bohnet, and J. Aichelin, Phys. Rev. C **39**, 1402 (1989).
* [30] V. Koch, B. Blattel, W. Cassing, U. Mosel, and K. Weber, Phys. Lett. B **241**, 174 (1990).
* [31] C. Gale, G.M. Welke, M. Prakash, S.J. Lee, and S. Das Gupta, Phys. Rev. C **41**, 1545 (1990).
* [32] B. Blattel, V. Koch, A. Lang, K. Weber, W. Cassing, and U. Mosel, Phys. Rev. C **43**, 2728 (1991).
* [33] J. Aichelin, Phys. Rep. **202**, 233 (1991).
* [34] A. Lang, B. Blattel, W. Cassing, V. Koch, U. Mosel, and K. Weber, Z. Phys. A **340**, 287 (1991).
* [35] J. Jaenicke, J. Aichelin, H. Ohtsuka, R. Linden, and A. Faessler, Nucl. Phys. A **536**, 201 (1992).
* [36] Q. Pan and P. Danielewicz, Phys. Rev. Lett. **70**, 2062 (1993); Phys. Rev. Lett. **70**, 3523 (1993).
* [37] A. Ono, H. Horiuchi, and T. Maruyama, Phys. Rev. C **48**, 2946 (1993).
* [38] A. Insolia, U. Lombardo, N.G. Sandulescu, and A. Bonasera, Phys. Lett. B **334**, 12 (1994).
* [39] S. Soff, S.A. Bass, C. Hartnack, H. Stocker, and W. Greiner, Phys. Rev. C **51**, 3320 (1995).
* [40] C. Fuchs, T. Gaitanos, and H.H. Wolter, Phys. Lett. B **381**, 23 (1996).
* [41] C. Hartnack, R.K. Puri, J. Aichelin, J. Konopka, S.A. Bass, H. Stocker, and W. Greiner, Eur. Phys. J. A **1**, 151 (1998).
* [42] P.K. Sahu et al., Nucl. Phys. A **640**, 493 (1998).
* [43] A. Insolia, U. Lombardo, and N. Sandulescu, Phys. Rev. C **61**, 067902 (2000).
* [44] B.-A. Li and A. Sustich, Phys. Rev. Lett. **82**, 5004 (1999).
* [45] J. Barrette et al., Phys. Rev. C **59**, 884 (1999).
* [46] B.-A. Li, C.M. Ko, and G.Q. Li, Phys. Rev. C **54**, 844 (1996).
* [47] S. Voloshin, Phys. Rev. C **55**, R1630 (1997).
* [48] A. Andronic et al., Phys. Rev. C **64**, 041604(R) (2001) [nucl-ex/0108014].
* [49] T. Gaitanos, C. Fuchs, H.H. Wolter, and A. Faessler, Eur. Phys. J. A **12**, 421 (2001) [nucl-th/0102010].
* [50] A. Gobbi et al., Nucl. Instr. and Meth. in Phys. Res. A **324**, 156 (1993); J. Ritman for the FOPI Collaboration, Nucl. Phys. B (Proc. Suppl.) **44**, 708 (1995).
* [51] A. Andronic et al., Nucl. Phys. A **679**, 765 (2001).
* [52] J.-Y. Ollitrault, nucl-ex/9711003.
* [53] N. Borghini, P.M. Dinh, and J.-Y. Ollitrault, Phys. Rev. C **64**, 054901 (2001) [nucl-th/0105040]; N. Borghini, P.M. Dinh, J.-Y. Ollitrault, A.M. Poskanzer, and S.A. Voloshin, nucl-th/0202013; N. Borghini, P.M. Dinh, and J.-Y. Ollitrault, nucl-th/0204017.
* [54] P. Crochet, PhD Thesis, Strasbourg, CRN 96-09 (1996).
* [55] GEANT - Detector Description and Simulation Tool, CERN Program Library Long Writeup W5013; http://wwwinfo.cern.ch/asdoc/geant_html3/geantall.html
* [56] W. Schmidt, U. Katscher, B. Waldhauser, J.A. Maruhn, H. Stocker, and W. Greiner, Phys. Rev. C **47**, 2782 (1993).
* [57] W. Reisdorf et al., Nucl. Phys. A **612**, 493 (1997).
* [58] R.K. Puri, C. Hartnack, and J. Aichelin, Phys. Rev. C **54**, R28 (1996).
* [59] D. Cussol et al., Phys. Rev. C **65**, 044604 (2002).
* [60] P. Crochet et al., Phys. Lett. B **486**, 6 (2000) [nucl-ex/0006004].
We present new experimental data on directed flow in collisions of Au+Au, Xe+CsI and Ni+Ni at incident energies from 90 to 400\\(A\\) MeV. We study the centrality and system dependence of the integral and differential directed flow for particles selected according to charge. All the features of the experimental data are compared with Isospin Quantum Molecular Dynamics (IQMD) model calculations in an attempt to extract information about the nuclear matter equation of state (EoS). We show that the combination of rapidity and transverse momentum analysis of directed flow allows us to disentangle various parametrizations in the model. At 400\\(A\\) MeV, a soft EoS with momentum dependent interactions is best suited to explain the experimental data in Au+Au and Xe+CsI, but in the case of Ni+Ni the model underpredicts the flow for any EoS. At 90\\(A\\) MeV incident beam energy, none of the IQMD parametrizations studied here is able to consistently explain the experimental data. PACS: 25.70.Lm, 21.65.+f, 25.75.Ld
Summarize the following text.
arxiv-format/0301011v1.md
# Head-on/Near Head-on Collisions of Neutron Stars With a Realistic EOS

Edwin Evans\\({}^{(1)}\\), A. Gopakumar\\({}^{(1,2)}\\), Philip Gressman\\({}^{(1,3)}\\), Sai Iyer\\({}^{(1)}\\), Mark Miller\\({}^{(1,4)}\\), Wai-Mo Suen\\({}^{(1,5)}\\), and Hui-Min Zhang\\({}^{(1)}\\)

\\({}^{(1)}\\)McDonnell Center for the Space Sciences, Department of Physics, Washington University, St. Louis, Missouri 63130
\\({}^{(2)}\\)Physics Department, University of Guelph, Canada
\\({}^{(3)}\\)Mathematics Department, Princeton University, Princeton, NJ 08544
\\({}^{(4)}\\)238-332 Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109
\\({}^{(5)}\\)Physics Department, Chinese University of Hong Kong, Hong Kong

November 7, 2021

# Introduction

Shapiro [1] has conjectured that two neutron stars (NSs) falling from infinity and colliding head-on would not, independent of their masses, promptly collapse to a black hole. The basic argument for the conjecture is that, until there is significant neutrino cooling, the thermal pressure generated by shock heating is always enough to support the merged object. Since neutrino cooling operates on time scales of seconds instead of the NS dynamical time scale of milliseconds, this could have significant implications for gravitational wave and neutrino emissions. The results obtained in [1] were based on a polytropic equation of state (EOS), \\(P=K\\rho^{\\Gamma}\\) (with \\(K\\) a function of the entropy), and it was suggested that the result was true for a general EOS. We have shown in [2] that one implicit assumption used in the derivation in [1], namely that the collision can be approximated by a quasi-equilibrium process, is not valid. We carried out simulations of the head-on collision of neutron stars described by a polytropic EOS, as in the conjecture. For two 1.4 \\(M_{\\odot}\\) NSs (with a polytropic index of \\(\\Gamma=2\\) and an initial polytropic coefficient \\(K\\) of \\(1.16\\times 10^{5}\\) cm\\({}^{5}\\)/g s\\({}^{2}\\), as in a typical NS model), we showed that the merged object collapsed promptly. The shock front generated in the collision does not even have time to propagate to the outer part of the merged object before it is engulfed in an apparent horizon, let alone produce enough thermal pressure to support the merged object as envisioned in the quasi-equilibrium argument of [1]. In our study in [2], the merged object had a mass well above the critical mass (the maximum mass that the EOS with the polytropic constants \\(\\Gamma\\) and \\(K\\) given above can support). We pointed out in [2; 3] that the prompt collapse is due to the dynamical compression of the collision that is absent in the quasi-equilibrium analysis in [1; 4]. This brings up three further questions:

1. What if the mass of the merged object is _less_ than the critical mass? Will the dynamical compression in the collision process be strong enough to initiate a collapse?
2. What if we use a realistic EOS instead of a polytropic EOS?
3. What if we break the exact axisymmetry?

In this paper we answer these three questions with one set of numerical simulations. The simulations are based on the GR-Astro code (formerly called GR-3D) constructed in the NASA Neutron Star Grand Challenge project [5] and the NSF Astrophysics Simulation Collaboratory (ASC) project [6]. For the construction of the code and the classes of validation tests we have carried out for it, see [7; 8; 9; 10].
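As a quick numerical illustration of the polytropic EOS \\(P=K\\rho^{\\Gamma}\\) with the quoted constants, one can evaluate the pressure and the Newtonian sound speed at a representative density; the density chosen below is an assumption made for illustration only:

```python
# Minimal sketch of the polytropic EOS P = K * rho**Gamma with the
# constants quoted in the text (CGS units). The sample density is an
# illustrative choice of order nuclear matter density.
K = 1.16e5          # cm^5 / (g s^2)
GAMMA = 2.0
C_LIGHT = 2.998e10  # cm / s

rho = 2.7e14        # g / cm^3 (assumed sample density)
P = K * rho ** GAMMA              # dyn / cm^2
cs = (GAMMA * P / rho) ** 0.5     # Newtonian adiabatic sound speed

print(f"P  ~ {P:.2e} dyn/cm^2")       # ~8.5e33
print(f"cs ~ {cs / C_LIGHT:.2f} c")   # ~0.26 c at this density
```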
The GR-Astro code solves the coupled set of the Einstein equations and the general relativistic hydrodynamic equations with a realistic EOS. It will be released to the community through the ASC portal [6] upon completion of the project. In this paper, we report on simulations of NSs constructed with the Lattimer-Swesty [11] (LS) EOS, which has been used in various neutron star studies (all existing simulations based on the LS EOS that we are aware of are based on Newtonian gravity and hence cannot answer questions of collapse). To the best of our knowledge, our simulations represent the first set of general relativistic 3D simulations based on a realistic EOS (for polytropic EOS simulations, see [12]). In this study, we use neutron stars of rest (baryonic) mass 1.6 \\(M_{\\odot}\\) (corresponding to an ADM mass of 1.4 \\(M_{\\odot}\\) in isolation), with a radius of 13.8 km (proper distance). The merged object has a rest mass of 3.2 \\(M_{\\odot}\\), which is considerably _lower_ than the critical mass of 3.67 \\(M_{\\odot}\\) in the LS EOS with our choice of parameters. Nevertheless, we find that the merged object will promptly collapse to a black hole within a dynamical timescale. An apparent horizon is found engulfing the shock wave at 0.15 ms after the two stars have touched, with time measured at infinity, i.e. at the edge of the computational grid, which is 47 km away from the collision center. The prompt collapse is verified for both the head-on collision case and an off-axis collision (the axisymmetry is broken by an impact parameter of 1/2 stellar radius). This study demonstrates that dynamical effects are strong enough to cause a prompt collapse even when the total mass is _below_ the single-star critical mass, and that this result is _not_ a consequence of exact axisymmetry. We hence pose the following "Prompt Collapse Conjecture": For head-on and near head-on collisions of neutron stars described by a generic equation of state and infalling from rest at infinity, there exists a window in the rest mass of the merged object, _below_ the critical single-star rest mass, where prompt collapse to a black hole can occur. The "near head-on" part of the claim means that the prompt collapse is stable with respect to small perturbations of the initial velocity.

**The Setup.** In this paper we use our implementation of the LS EOS as described in [13]. We use the LS EOS in a tabular form with rest mass density and specific energy density as the two independent thermodynamic variables, which are evolved using the general relativistic hydrodynamic (GR-Hydro) equations (the lepton-to-baryon ratio is set to a constant, 0.1, in all simulations in this paper). As in the Newtonian LS EOS simulations of [14; 15], we set the initial specific energy density of the NSs at a relatively large value of 0.9 MeV. The ADM mass and rest mass as functions of central rest mass density for a single static NS are given in Fig. 1. We see that the critical rest mass of an LS EOS star is 3.67 \\(M_{\\odot}\\). In our simulation we use NSs with a rest mass of 1.6 \\(M_{\\odot}\\) (marked by an X in Fig. 1). The merged object is hence guaranteed to have a rest mass below the critical single-star mass. In Fig. 1 we have also plotted the corresponding curves for the case of a polytropic EOS with \\(\\Gamma=2\\) and \\(K=1.16\\times 10^{5}\\) cm\\({}^{5}\\)/g s\\({}^{2}\\) for comparison.
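A hedged back-of-envelope check makes the sub-critical prompt collapse less surprising: the Schwarzschild radius of the merged object is already comparable to the stellar radius, so only a moderate dynamical compression is needed. Approximating the ADM mass of the merged object by twice the single-star ADM mass (ignoring binding energy) is an assumption:

```python
# Rough compactness estimate for the merged object (SI units).
G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30

M_adm = 2 * 1.4 * M_SUN   # assumption: ~ twice the single-star ADM mass
R_star = 13.8e3           # single-star radius quoted in the text, m

r_s = 2 * G * M_adm / C**2
print(f"Schwarzschild radius ~ {r_s / 1e3:.1f} km "
      f"vs stellar radius {R_star / 1e3:.1f} km")
# ~8.3 km vs 13.8 km: compressing the merged matter by less than a factor
# of two in radius already brings it inside its horizon scale.
```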
In the head-on collision case, we put the two stars at a proper distance of \\(d=42\\) km apart (about \\(3R\\) separation, where \\(R\\) is the radius of the star) along the z-axis, and boost them toward one another at the speed (as measured at infinity) of \\(\\sqrt{GM/d}\\) (the Newtonian infall velocity). The metric and extrinsic curvature of the two boosted stars are obtained by (i) adding the off-diagonal components of the metric, (ii) adding the diagonal components of the metric and subtracting 1, (iii) adding the components of the extrinsic curvature. The resulting matter distribution, momentum distributions, conformal part of the metric, and transverse traceless part of the extrinsic curvature are used as input to York's procedure [16] for determining the initial data. The initial data satisfies the complete set of Hamiltonian and momentum constraints to high accuracy (terms in the constraints cancel to \\(10^{-6}\\)), _and_ represents two NSs in head-on collision. The initial data is then numerically evolved by solving the coupled Einstein GR-Hydro evolution equations with numerical methods described in [7]. The simulations reported here use the "\\(1+\\log\\)" slicing [17]. The simulations have been carried out with resolutions ranging from \\(\\Delta x=0.74\\) km to 0.3 km (28 to 70 grid points across each NS) for convergence and accuracy analysis. We find that the constraint violations rise linearly throughout the evolution, and converge to zero with increased numerical resolution (see Fig. 5). The total rest mass of the system is conserved to better than 0.2% throughout the simulations.

**The Results.** In Fig. 2a, we show the collapse of the lapse along the \\(z\\) axis from \\(t=0\\) ms to \\(t=0.37\\) ms at intervals of 0.0926 ms. (With the reflection symmetry across the \\(z=0\\) plane and the axisymmetry of the head-on collision, we only need to evolve the first octant.) By the time \\(t=0.37\\) ms, the lapse has collapsed significantly. Fig. 2b shows the corresponding evolution of the \\(zz\\) component of the metric. The "grid stretching" peak, characteristic of a black hole evolved with a singularity avoiding slicing, is apparent. Fig. 3 shows contour lines in the \\(y=0\\) plane of the log of the gradient of the rest mass density, \\(\\log\\left(\\sqrt{\\nabla^{i}(\\rho)\\nabla_{i}(\\rho)}\\right)\\), at time \\(t=0.37\\) ms. Sharp changes in the rest mass density (where contour lines bunch up) indicate shocks. We see that the shock front is at 7 km in the \\(x\\) direction and 10 km along the \\(z\\) direction (the collision axis) and has not yet reached the back end of the star (at 14.5 km). At this point the shock front is still moving outward in coordinate space, although it is completely engulfed by the apparent horizon, as seen in Fig. 4 below. In Fig. 4, we show the intersection of the apparent horizon (AH) with the \\(x-z\\) plane. To confirm the location of the AH, convergence tests, both in terms of resolution and in terms of the location of the computational boundary, have been performed.

Figure 1: The ADM mass and rest mass vs. central rest mass density of a static NS with the LS EOS. The corresponding curves for a polytropic EOS with \\(\\Gamma=2\\), \\(K=1.16\\times 10^{5}\\) cm\\({}^{5}\\)/g s\\({}^{2}\\) are plotted for comparison.
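For orientation, the boost speed \\(\\sqrt{GM/d}\\) used above can be evaluated with the quoted numbers; taking \\(M\\) to be the single-star ADM mass of 1.4 \\(M_{\\odot}\\) is an assumption about the intended mass in the formula:

```python
# Newtonian infall speed sqrt(G*M/d) for the quoted setup (SI units).
G = 6.674e-11
C = 2.998e8
M_SUN = 1.989e30

M = 1.4 * M_SUN   # assumed: ADM mass of a single star
d = 42.0e3        # initial proper separation, m

v = (G * M / d) ** 0.5
print(f"v ~ {v:.2e} m/s ~ {v / C:.2f} c")   # ~6.7e7 m/s, ~0.22 c
```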
For the off-axis case, the stars are given an impact parameter of half the stellar radius (6.9 km). With this setup, the computational expense is significantly higher, as we can no longer evolve just an octant as in the axisymmetric case above. We found that the inclusion of a small impact parameter does not change the qualitative features of the collision, including the occurrence of prompt collapse. In Fig. 6, we show the apparent horizon found, again at \\(t=0.37\\) ms. We see that the AH is tilted with respect to the \\(z\\) axis, but is very similar in shape and size to the head-on case. In all the above cases we find prompt collapse. We have confirmed that at a low enough mass, the merged object will not collapse but will instead merge, bounce, oscillate and form a stable NS. However, the determination of the dividing line between a final-state black hole and a final-state NS would require extremely high resolution, whose computational expense is beyond what is available to our group. Finally, one may also ask whether the sub-critical mass collapse occurs in the head-on collision of NSs described by a polytropic EOS. Preliminary studies with our 3D code show that, in fact, the dividing line in the rest mass of the final merged object between prompt collapse and non-prompt collapse is quite close to the critical rest mass of a single NS for the \\(\\Gamma=2\\) polytrope. We are currently working on an axisymmetric version of the code which we will use not only to explore this question, but also to explore the possible existence of type I critical phenomena at the interface dividing the prompt collapse and non-prompt collapse cases.

**Conclusions.** In this paper we report on a sub-critical mass collapse phenomenon in the head-on/near head-on collisions of NSs with a realistic EOS. We propose a "Prompt Collapse Conjecture": For head-on and near-head-on collisions of neutron stars described by a generic equation of state and infalling from rest at infinity, there exists a window in the rest mass of the merged object, _below_ the critical single-star rest mass, where prompt collapse to a black hole can occur. This is the opposite of the Shapiro conjecture [1], which predicted no prompt collapse for all NS masses, including those above the single-star critical mass. It has been argued in [18] that head-on/near head-on collisions of NSs could have a significant event rate, and could be a candidate for a sub-class of short gamma-ray bursts. The results of prompt collapse reported in this paper could have implications for the observation of such processes, with the prompt formation of the horizon cutting off the causal connection of the shock heated matter from outside observers. Note that we are not claiming a prompt collapse in the inspiral coalescence of NSs.

**Acknowledgements.** We thank Luc Blanchet, K. Thorne, and C. Will for useful discussions, and Lap-Ming Lin for comments on the manuscript. The simulations in this paper have made use of code components developed by several authors: BAM (multigrid solver) by B. Brugmann; AH-FINDER (apparent horizon finder) by M. Alcubierre; CONF-ADM (evolution of the Einstein field equations), MAHC (evolution for the GR-Hydro equations), and IVP (conformal constraint solver) by M. Miller; PRIM-SOL (solver for the hydrodynamical primitive variables) by P. Gressman; ELS (LS-EOS tabular treatment) by E. Evans; and the CACTUS Computational Toolkit by T. Goodale _et al_.
Support for this research has been provided by the NSF KDI Astrophysics Simulation Collaboratory (ASC) project (Phy 99-79985), the NASA Neutron Star Grand Challenge Project (NCCS-153), the NSF NRAC Project Computational General Relativistic Astrophysics (93S025), and the NASA AMES NAS.

Figure 5: The L2 norm of the Hamiltonian constraint converging linearly with respect to resolution for the whole duration of the numerical evolution.

Figure 6: The position of the AH at \\(t=0.37\\) ms for a near head-on collision using a spatial resolution of \\(\\Delta x=0.37\\) km.

## References

* (1) S. Shapiro, Phys. Rev. D **58**, 103002 (1998).
* (2) M. Miller, W.-M. Suen and M. Tobias, Phys. Rev. D **63**, 121501(R) (2001).
* (3) M. Miller, W.-M. Suen and M. Tobias, gr-qc/9910022 (1999).
* (4) S. Shapiro, gr-qc/9909059 (1999).
It has been conjectured that in head-on collisions of neutron stars (NSs), the merged object would not collapse promptly even if the total mass is higher than the maximum stable mass of a cold NS. In this paper, we show that the reverse is true: even if the total mass is _less_ than the maximum stable mass, the merged object can collapse promptly. We demonstrate this for the case of NSs with a realistic equation of state (the Lattimer-Swesty EOS) in head-on _and_ near head-on collisions. We propose a \"Prompt Collapse Conjecture\" for a generic NS EOS for head on and near head-on collisions. pacs: 04.25.Dm,04.30.+x,97.60.Jd,97.60.Lf
Provide a brief summary of the text.
arxiv-format/0302007v1.md
# Using data assimilation in laboratory experiments of geophysical flows

M. Galmiche, J. Sommeria, E. Thivolle-Cazat and J. Verron

Laboratoire des Ecoulements Geophysiques et Industriels, BP53, 38041 Grenoble CEDEX 9, France

## 1 Introduction: operational issues

An increasing interest in operational oceanography has developed in recent years. A number of pre-operational projects have emerged at the national and international scale, most of them coordinating their activities within the international Global Ocean Data Assimilation Experiment (GODAE). The heart of operational systems consists of three main components: the observation system, the dynamical model and the data assimilation scheme. Thanks to recent advances in satellite and in-situ observations, numerical modelling, assimilation techniques and computer technology, operational systems have now acquired some degree of maturity. However, there are still a number of issues that must be solved in applications, and validation tests are needed. The ideal method for validating the overall forecasting system would be to compare results with independent oceanic observations, i.e. observations that are not used in the assimilation process. However, such observations are rare because in-situ surveys are difficult to undertake and extremely expensive, particularly in the deep ocean. Another problem is that, because assimilation is only approximate, forecast errors may be due not only to the model itself, but also to the temporal growth of imperfections in the initial condition. It is therefore difficult to objectively separate the model errors on the one hand from the assimilation errors on the other. Alternatively, analytical solutions of simple flows with well-defined initial and boundary conditions can be used as a reference to unravel some aspects of the model error components. However, such analytical solutions are limited to some extremely simplified flow configurations. In this letter, a new, experimental approach to these problems is presented. Laboratory experiments and numerical, shallow water simulations of simple oceanic flows are performed, and sequential data assimilation is used as a tool to keep the numerical simulation close to the experimental reality. By contrast with real-scale oceanic measurements, the experimental measurements are available with a high level of precision and resolution. The general methodology is given in Section 2. In Section 3, the example of an unstable vortex in a two-layer, rotating fluid is presented as an illustration of the experimental test-bed. In particular, the behaviour of the model is studied when data assimilation is stopped. The vortex deformation and splitting as predicted by the numerical simulation is then compared to the real flow evolution.

## 2 The Coriolis test-bed

Laboratory experiments are of particular interest as test cases for operational systems, filling the gap between oversimplified analytical solutions and the full complexity of real oceans. On the one hand, they are much more realistic than any numerical or theoretical "reality", provided that the experimental facility allows good similarity with the ocean. On the other hand, data are available with much better space and time resolution than actual-scale oceanic measurements. Furthermore, a great number of experiments can be performed and compared to one another. Such comparisons are obviously impossible at the real scale because of the ever-changing flow conditions in the ocean.
The flow parameters can also be easily varied to perform parametric studies. Thanks to its large size (13 meter diameter), the Coriolis turntable (Grenoble, France) is a unique facility which enables oceanic flows to be reproduced with a good level of similarity (see Fig. 1). It is possible to come close to inertial regimes, i.e. with limited effects of viscosity and centrifugal forces. Various experiments can be performed on the turntable in multi-layer stratified salt water, such as experiments on vortices or boundary currents. Our approach relies on numerical simulation of such laboratory experiments using data assimilation, in a similar way to real-scale ocean forecasting systems. A major difference with the real ocean is that the measured quantity here is the velocity field in several horizontal planes instead of scalar quantities measured only at the surface or along vertical sections. The elevation of the interface between the layers is not measured in the experiments. It is treated as an output of the assimilation process (see Section 3). The velocity field is measured in horizontal planes using CIV (Correlation Image Velocimetry): particle tracers are illuminated by a horizontal laser sheet and a camera is used to visualize the particle displacements from above, leading, after numerical treatment, to the horizontal velocity field. The rms measurement error in particle displacement is about 0.2 pixels, as determined by Fincham & Delerce (2000), and the errors found in neighboring points are not correlated. The resulting error in velocity is about 3% of the maximum velocity. In parallel with these measurements, numerical simulations are performed. The system is modelled as a multi-layer fluid with hydrostatic approximation, for which the variables are the horizontal velocity components \(u(x,y,i)\) and \(v(x,y,i)\), and the layer thickness \(h(x,y,i)\), where \(x\) and \(y\) are the horizontal coordinates and \(i\) is the layer index. The basic shallow-water equations are solved using MICOM (Miami Isopycnic Coordinate Ocean Model, Bleck & Boudra 1986) in its simplest version. The measured velocity field is assimilated into the simulations at each measurement point using an adaptive version of the Singular Evolutive Extended Kalman (SEEK) filter, a method adapted for oceanographic purposes on the basis of the Kalman filter. Each data assimilation provides a new dynamical state which optimally blends the model prediction and the measured data, accounting for their respective errors. The forecast state vector \(\mathbf{X}^{f}\) is replaced by the analysed state vector \(\mathbf{X}^{a}=\mathbf{X}^{f}+\mathbf{K}[\mathbf{Y}^{o}-\mathbf{H}\mathbf{X}^{f}]\), where \(\mathbf{Y}^{o}\) is the observed part of the state vector (i.e. the velocity field in the measurement domain), \(\mathbf{H}\) is the observation operator and \(\mathbf{K}\) is the Kalman gain defined by \(\mathbf{K}=\mathbf{P}^{f}\mathbf{H}^{T}[\mathbf{H}\mathbf{P}^{f}\mathbf{H}^{T}+\mathbf{R}]^{-1}\). Here, \(\mathbf{P}^{f}\) and \(\mathbf{R}\) are the forecast error and observation error covariance matrices respectively. The observation errors are here supposed to be uniform, and the multi-variate correlations between the variables are described as components on Empirical Orthogonal Functions (EOFs) computed from the model statistics, providing an estimation of \(\mathbf{P}^{f}\).
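The analysis step above is the standard Kalman update and can be sketched compactly. The following is a minimal Python illustration of \(\mathbf{X}^{a}=\mathbf{X}^{f}+\mathbf{K}[\mathbf{Y}^{o}-\mathbf{H}\mathbf{X}^{f}]\) with a dense gain; the actual SEEK filter works in a reduced basis of EOFs, so all names and the toy dimensions here are illustrative only, not the MICOM/SEEK implementation.

```python
import numpy as np

def analysis_step(x_f, y_o, H, P_f, R):
    """One Kalman analysis step: optimally blend the forecast x_f
    with the observations y_o, given their error covariances.

    x_f : (n,) forecast state (velocities and layer thicknesses)
    y_o : (m,) observed part of the state (measured velocities)
    H   : (m, n) observation operator
    P_f : (n, n) forecast error covariance (in SEEK, built from EOFs)
    R   : (m, m) observation error covariance (uniform errors: s**2 * I)
    """
    S = H @ P_f @ H.T + R                # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x_f + K @ (y_o - H @ x_f)     # analysed state x_a

# Toy usage: a 6-variable state of which 3 components are observed.
rng = np.random.default_rng(0)
n, m = 6, 3
H = np.hstack([np.eye(m), np.zeros((m, n - m))])   # observe first 3
# cross-covariances let observed components correct unobserved ones
P_f = np.full((n, n), 0.3) + np.diag([0.7, 0.7, 0.7, 3.7, 3.7, 3.7])
R = 0.01 * np.eye(m)
x_f = rng.normal(size=n)
y_o = H @ x_f + rng.normal(scale=0.1, size=m)
x_a = analysis_step(x_f, y_o, H, P_f, R)
```

Because the unobserved components (the analogue of the interface elevation) are corrected through the off-diagonal structure of \(\mathbf{P}^{f}\), the interface depth emerges as an output of the assimilation, as noted above.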
The reader is referred to Pham, Verron & Roubaud (1998) and Brasseur, Ballabrera-Poy & Verron (1999) for mathematical details. Figure 1: Picture of the Coriolis turntable with the setup of the two-layer vortex experiment. The layers have density \(\rho_{1}\) and \(\rho_{2}>\rho_{1}\) and undisturbed thickness \(H_{1}\)=12.5 cm and \(H_{2}\)=50 cm. For the experiment presented in this paper, the relative density difference is \(1.0\times 10^{-3}\), the initial displacement of the interface is \(\eta_{0}=-H_{1}\) inside the cylinder, and the tank rotation period is 40 s. The corresponding Rossby radius of deformation is 12.5 cm. At \(t=0\) the cylinder is removed. ## 3 Example: Baroclinic instability of a two-layer vortex Among the various experiments performed on the Coriolis turntable, concerning, for example, baroclinic instability and coastal currents, the study of the baroclinic instability of a two-layer vortex is presented in this letter because it provides a good illustration of the experimental test-bed. This flow problem is of particular interest because simple experiments are feasible as well as numerical simulations, although it is quite a complex non-linear process (e.g. Griffiths and Linden 1981) and plays a crucial role in the variability of the real ocean. The initial conditions are well defined and the lateral boundaries have no significant influence. A cylinder of radius \(R=0.5\) m is initially introduced in a two-layer fluid across the interface (see Fig. 1). A displacement \(\eta_{0}\) of the interface is produced inside the cylinder, and at \(t=0\) the cylinder is rapidly removed. A radial gravity current is then initiated, which is deviated by the Coriolis force, resulting in the formation of a vortex in the upper layer after damping of inertial oscillations. A vortex of opposite sign is produced in the lower layer, and the resulting vertical shear is a source of baroclinic instability. The main control parameter in this system is \(\gamma=R/R_{D}\), where \(R_{D}\) is the Rossby deformation radius. The results presented here were obtained with \(\gamma=4\). The vortex then undergoes baroclinic instability which gives rise to splitting into two new vortices. The experimental vortex is dynamically similar to an oceanic vortex with a radius of the order of 100 km at mid-latitude (the radius of deformation is typically 25 km). In the experiments, vortex instability takes place in typically 20 rotation periods of the tank, corresponding to about 30 days at mid-latitude (taking the inverse of the Coriolis parameter as the relevant time unit). The ratio of the vertical to the horizontal scales is distorted by a factor of 10 in the experiments. This is not important provided that the hydrostatic approximation is valid. The velocity field is measured in each layer every 11 s, which is half the observed period of inertial oscillations. Since we are interested in the slow balanced dynamics, we eliminate the residual inertial oscillations by averaging two successive fields for data assimilation. The velocity data obtained are assimilated in the numerical model at each grid point in the measurement domain (2.5 m \(\times\) 2.5 m). In the numerical simulations, the system is modelled as a two-layer fluid with a standard biharmonic dissipation term and the simulation domain is 5 m wide (i.e. twice as large as the measurement domain) in order to avoid spurious confinement by boundaries.
The simulations are performed using \(100^{2}\) or \(200^{2}\) grid points in each layer. A good fit is then obtained between the model and the experimental data, as shown in Fig. 2. The irregular shape of the vortex, the position of its center and the presence of residual currents in its vicinity are well represented. The elongation of the vortex and the formation of two new, smaller vortices are also well reproduced. Data assimilation provides us with an indirect measurement of the interface depth, also shown in Fig. 2. No significant inertio-gravity wave is excited in the simulation after data assimilation is performed, showing that the interface position is well determined, without any spurious imbalance effect. The initial development of baroclinic instability is well described by the growth of mode two (calculated using a polar Fourier decomposition of the radial velocity field along a circle of radius \(R\)). Excellent agreement between the model and the observation is obtained when data assimilation is performed, as shown in Fig. 3. The growth of this mode is considerably delayed in the model without data assimilation, as the initial perturbation is smaller than in the experiments. The rms distance between the forecast and measured velocity fields is plotted in Fig. 4. After a few assimilation cycles, this distance remains of the order of 0.6 mm.s\({}^{-1}\), close to the experimental errors (3% of the maximum velocity, i.e. about 0.5 mm.s\({}^{-1}\)). Similar agreement is obtained in both layers. The state vector obtained at a given time can be used as an initial condition to test the free model. To do so, we stop the assimilation at time \(t=75\) s and measure the growth of the rms distance between the laboratory experiments and the free model run with this new initial state, as shown in Fig. 4. This growth can be due either to the amplification of small initial errors, or to limitations of the dynamical model. It is actually possible to show that the sensitivity to the initial condition is not the dominant effect, as observed in Fig. 5. It is clear in this figure that the divergence of the model from the experimental reality is not sensitive, over the short term, to small variations in the initial condition. The model diverges from reality on a timescale of around 3000 s, which is about 30 times the typical advection timescale of the flow \(2R/U\simeq 100\) s (where \(U\simeq 1\) cm.s\({}^{-1}\) is the order of magnitude of the velocity within the flow). The model error is therefore about 1/30 of the dominant advective term. This error is actually small but seems to be systematic. The results are similar when \(100^{2}\) or \(200^{2}\) numerical grid points are used in each layer (see Figs. 4, 5 and 6). The effect of dissipation and friction was also investigated in various test runs. The rms distance to observations obtained in the most representative of these test runs is plotted in Fig. 4. The results show that the model errors persist when the numerical viscosity coefficient is changed or when an Ekman friction term is added in the momentum equation. We notice that, in all cases, vortex splitting occurs faster than in the experimental reality (Fig. 6). It is therefore likely that the basic simplifying assumptions of the hydrostatic, two-layer shallow-water formulation, rather than resolution, dissipation or friction problems, are responsible for the limitations of the model.
For instance, the interface between the layers may have a finite thickness in reality, leading to effects that cannot be reproduced in the two-layer simulation. Also, the hydrostatic approximation may slightly enhance the growth of baroclinic instability, as shown in the theoretical study of non-hydrostatic effects by Stone (1971). In the last stage of our testing procedure, we perform assimilation using only upper layer data and check how the behavior of the lower layer is reconstructed. The results are shown in Fig. 6. Although some local discrepancies are observed in the bottom layer compared to the measured velocity field, the global flow field is well reproduced. ## 4 Conclusion The results reported in the present letter illustrate the value of an experimental test-bed for operational oceanography: (i) Thanks to data assimilation, a complete description of the experimental flow fields is obtained, including the non-measured variables. Any physical quantity can then be calculated. Data assimilation can thus be used as a complementary tool for experimental investigation and physical analysis of the flow. For instance, potential vorticity anomalies can be calculated, providing quantities which are generally impossible to measure but which are crucial to a better understanding of the physics of the baroclinic instability. (ii) The obtained flow field can be used as an initial condition to test the numerical model. The divergence of the numerical model from reality is, in principle, caused either by the sensitivity of system evolution to the initial condition, or by the model error itself. We have checked that, in our test cases, sensitivity to weak variations in the initial condition is not the dominant effect. This makes it possible to quantify the systematic forecast errors. Thus, even weak model errors can be detected, of the order of 1/30 of the dominant inertial term in the present case. Such a weak model limitation would probably be much more difficult to detect in complex oceanic applications. Test runs were performed to show that these model errors are not caused by resolution, dissipation or friction problems. The most probable sources of error are the hydrostatic approximation or the two-layer formulation of the equations. Further work is needed to test this hypothesis. (iii) The accuracy of the assimilation scheme can also be analysed in detail. The present study shows, for instance, how the assimilation scheme is able to reconstruct the velocity field of the lower layer from observation of the upper layer. This is clearly of practical interest because vertical extrapolation of the measured surface quantities is a great challenge in oceanography (see for instance Hurlburt 1986). Many other tests can of course be performed with the available data using various dynamical models and/or assimilation schemes. Possible improvement by non-hydrostatic models would be of particular interest. The study of other processes is in progress, involving the instability of boundary currents, gravity currents on a slope and current/topography interaction. The measurements obtained from these experiments are available to other researchers on the Coriolis web site (www.coriolis-legi.org) as a data base to test numerical models and assimilation schemes. Figure 2: Velocity field in the top layer (a) and interface depth (b) at \(t=75\) s in the free run, in the experiment and in the simulation performed with data assimilation every 22 s. For clarity, only \(25^{2}\) vectors are plotted.
The rms measurement error is about 0.5 mm.s\({}^{-1}\). Figure 3: Amplitude of baroclinic mode 2 in the top layer as a function of time in the experiment (line with stars), in the free simulation (thin line) and in the simulation performed with data assimilation every 22 s (thick line). Figure 4: Value of the rms distance between the simulated and measured velocity fields in the top layer as a function of time in the simulation performed with data assimilation every 22 s and in the simulation where data assimilation is stopped at \(t=75\) s. \(200^{2}\) grid points are used in both layers. The results obtained in two other test runs are also plotted: simulation with doubled viscosity coefficient (dashed line) and simulation with additional friction (dot-dashed line). The Ekman friction coefficient \(C_{f}\) is taken as equal to \(1.4\times 10^{-3}\,\mathrm{s}^{-1}\) in the bottom layer and \(5.6\times 10^{-3}\,\mathrm{s}^{-1}\) in the top layer. These values are those obtained assuming rigid upper and lower boundaries. Figure 6: Velocity field in the top (a) and bottom (b) layers at \(t=350\) s obtained in the experiment and in the simulation using different assimilation scenarios: assimilation of all data switched off at \(t=75\) s (note that the vortex splitting occurs faster than in the experiment, independently of the resolution); assimilation using only top layer data until \(t=350\) s (note that the bottom layer is well reconstructed). For clarity, only \(25^{2}\) vectors are plotted in all cases. This study has been sponsored by EPSHOM, contract Nr. 9228. We acknowledge the kind support of Y. Morel for the implementation of the MICOM model, and of J.M. Brankart, P. Brasseur and C.E. Testut for the implementation of the SEEK assimilation scheme. ## References * [1] Bleck, R. and Boudra, D. Wind driven spin-up in eddy-resolving ocean models formulated in isopycnic coordinates, _J. Geophys. Res._, _91_, 7611-7621, 1986. * [2] Brasseur, P., Ballabrera-Poy, J. and Verron, J. Assimilation of altimetric data in the mid-latitude oceans using the Singular Evolutive Extended Kalman filter with an eddy-resolving, primitive equation model, _J. Marine Sc._, _22_, 269-294, 1999. * [3] Fincham, A. and Delerce, G. Advanced optimization of correlation imaging velocimetry algorithms, _Experiments in Fluids_, _29_, S13-S22, 2000. * [4] Griffiths, R.W. and Linden, P.F. The stability of vortices in a rotating, stratified fluid. _J. Fluid Mech._, _105_, 283-316, 1981. * [5] Hurlburt, H.E. Dynamic Transfer of Simulated Altimeter Data Into Subsurface Information by a Numerical Ocean Model. _J. Geophys. Res._, _91_, C2, 2372-2400, 1986. * [6] Pham, D., Verron, J. and Roubaud, M. A Singular Evolutive Extended Kalman filter for data assimilation in oceanography, _J. Marine Sc._, _16_ (3-4), 323-340, 1998. * [7] Stone, P.H. Baroclinic instability under non-hydrostatic conditions. _J. Fluid Mech._, _45_, part 4, 659-671, 1971.
Data assimilation is used in numerical simulations of laboratory experiments in a stratified, rotating fluid. The experiments are performed on the large Coriolis turntable (Grenoble, France), which achieves a high degree of similarity with the ocean, and the simulations are performed with a two-layer shallow water model. Since the flow is measured with a high level of precision and resolution, a detailed analysis of a forecasting system is feasible. Such a task is much more difficult to undertake at the oceanic scale because of the paucity of observations and problems of accuracy and data sampling. This opens the way to an experimental test bed for operational oceanography. To illustrate this, some results on the baroclinic instability of a two-layer vortex are presented.
Write a summary of the passage below.
arxiv-format/0302501v1.md
# Effect of Clouds on Apertures of Space-based Air Fluorescence Detectors. P. Sokolsky High Energy Astrophysics Institute University of Utah J. Krizmanic Universities Space Research Association NASA Goddard Space Flight Center ## Introduction There are several proposals to place Fly's Eye type air-fluorescence detectors in space. These include EUSO[1], under consideration for approval by the European Space Agency (ESA) for the International Space Station (ISS) at a 400 km orbit, and OWL[2], a proposal to put a pair of free-flying satellites in a higher (\(\sim\)1000 km) and near-equatorial orbit. These experiments would look down on the Earth's surface over latitudes ranging from near equatorial (+/- 5 deg. proposed for OWL) to the +/- 60 deg. accessible to the ISS. These detectors have wide-angle optics with half-opening angles of near 30 deg for EUSO and 22.5 deg for OWL. This corresponds to a footprint swept out over the Earth's surface by the near nadir pointing optical system of 170,000 and 540,000 \(\mathrm{km}^{2}\) respectively. Tilting the optical axis away from the nadir will increase the footprint area substantially, but we do not consider this possibility in this paper. The pixel size inside the footprint corresponds to 1 km by 1 km. Cosmic ray interactions in this footprint will be seen by the detectors through a broad range of weather conditions, mainly over the ocean's surface. The Fly's Eye technique[3] has been extensively developed by groups using upward-looking detectors placed on the Earth's surface. It utilizes the fact that ionizing particles in shower cascades, or extensive air showers (EAS), produced by incoming ultra-high-energy cosmic rays will excite N\({}_{2}\) fluorescence in the atmosphere. Detection of such fluorescence light (in the 300 to 400 nm UV region) can be used to reconstruct the shower energy, and the shape and position of the cascade shower in the atmosphere can be employed to infer the composition of cosmic rays. Ground-based experiments have observed cosmic rays from \(\sim 10^{17}\) to just beyond \(10^{20}\) eV. Predicted thresholds for OWL and EUSO range from \(3\times 10^{19}\) eV to near \(10^{20}\) eV. The flux of cosmic rays above these energies is so low that even such enormous apertures will yield only hundreds of events at the highest energies over the lifetime of the experiments. A critical issue for such space-based experiments is the fraction of aperture that is useful for the robust determination of cosmic ray shower energy and shower shape. EAS produced by cosmic rays with energy greater than \(3\times 10^{19}\) eV will develop in the atmosphere and trigger the detectors as they traverse distances of between 10 and 100 km (depending on the zenith angle and the height of the initial interaction). Such showers may cross through and into cloud layers at various heights. In that case, the isotropically produced N\({}_{2}\) fluorescence light generated by the EAS will be multiply scattered in the cloud. In addition, the forward-going Cherenkov light beam which develops along the EAS will be effectively scattered by the cloud (both backscattered from the cloud top and multiply scattered in the volume of the cloud). Similar scattering will occur in aerosol layers, though these are mostly contained in the first few km of the atmosphere above the surface.
Such scattered Cherenkov light will be picked up by the detector and produce very significant distortions superimposed on the shower profile produced by isotropic fluorescence light generated by the shower electrons. Simulations have shown that passage of EAS showers through cloud layers will produce apparent structure in the shower profile[4]. In addition, since \(\sim 65\%\) of observed showers at \(10^{20}\) eV will have their shower maximum (Xmax) below 9 km above the ground[5], high cirrus clouds, occurring between 8 and 15 km, will serve as an unpredictable attenuating mask. Light from the shower will pass through these clouds and be scattered. Unless the optical depth (OD) of these clouds as a function of position and UV wavelength is known, the energy and the shape of the shower will be mis-reconstructed. Various techniques have been proposed to deal with this problem. The most promising is a LIDAR system mounted on the detector which would sweep a laser beam along the direction of the triggered event (passing through the same triggered pixels) within several seconds of the trigger. Back-scattered light would be detected either by the fly's eye detector itself, or by a specialized LIDAR receiver. This would detect the presence of even very thin clouds. Such information could be used to correct the signal, or veto the event as unreliable. A demonstration LIDAR system (Project LITE)[6] was flown on the Space Shuttle in 1995. While technical issues with the use of lasers in space are non-trivial, GLAS[7] (a laser-altimeter system) was launched in January 2002 with a planned three-year operational life, and a satellite-based LIDAR system (CALYPSO)[8] with a planned two-year duration is set for launch in 2005. It has been proposed that the intense Cherenkov beam in the shower can be used as an auto-diagnostic for the presence of clouds[1]. It is thought that cloud layers through which the shower passes will show up as structure superimposed on a smooth fluorescence light profile. More specifically, for optically thick and spatially thin clouds, the scattered Cherenkov light from the intense Cherenkov beam in the shower will develop peaks whose widths are related to the cloud thickness. However, optically thin but spatially thick (few km) clouds generate a much more subtle distortion of the shower shapes[4]. Showers passing through such clouds will be qualitatively similar to ordinary showers but may show unusually rapid rise or fall in their development. In the absence of clouds, such unusual showers would be a signature of new physics. Since such new physics would be of the greatest interest, an auto-diagnostic technique of this kind risks vetoing precisely these discoveries. Unraveling the effects of clouds on shower development requires either a space-based LIDAR system to determine the locations of clouds along the triggered track (as is likely to be proposed for EUSO and OWL), or a stereo detector. In the case of a stereo detector such as OWL, the locations of peaks can be easily determined from stereo geometry alone. In addition, since scattered Cherenkov light has an angular dependence (dominated by the single-scattering phase function), the Cherenkov peaks or distortions will have different intensities when viewed at different angles by the two OWL detectors.
In contrast, the portion of the shower which develops in clear air and is thus dominated by isotropically produced N2 fluorescence light will produce equal signals in the two stereo detectors after geometrical and atmospheric Rayleigh scattering correction. The lack of such balance for cloud-scattered Cherenkov light will be an important signature, differentiating real from apparent "bumpy" structure in the development of an EAS in the atmosphere. In addition, the presence of high over-riding clouds that scatter the fluorescence light as it propagates towards the detectors will also manifest itself as an energy imbalance between the two stereo detectors. These techniques, while important for understanding signals produced by EAS, do not give the instantaneous aperture of the experiment, i.e. what fraction of the geometrical aperture is sufficiently un-obscured to allow the EAS to trigger the detector. Since climatology studies indicate that clouds of one kind or another cover the Earth's surface about 70%[17] of the time, this is a non-trivial correction. This correction is also dynamic, constantly changing as a function of time as the detector footprint sweeps over the Earth's surface. A LIDAR could in principle sample the entire aperture in a small enough grid to determine this aperture accurately. Unfortunately, the required data rate is much too high to be practicable, given the speed at which land passes below the orbiting detector. A coarse sampling is likely to be insufficient because cloud patterns and topologies are very variable on many distance scales. Furthermore, LIDAR, while pinpointing cloud locations accurately, does not represent how an EAS crossing the cloud would trigger the detector. This would have to be determined in Monte Carlo simulation, with some model of how this particular kind of cloud scatters light. In this paper, we instead propose to ask a simpler question. Assuming that the details of the nature of clouds (height, OD, albedo etc.) are much more difficult to accurately ascertain than their simple presence, we inquire first into the fraction of the geometrical aperture which will be completely cloud free. If this is large enough, then the experiment can clearly be successful. If it is too small, then a next level of complexity must be addressed. ## IR Remote Sensing of Clouds We use existing remote-sensing data to develop cloud masks. Since EUSO and OWL only operate at night, only IR data in the 3 to 15 micron region is useful (note: solar reflected light represents a 6000 K black body, while nighttime IR from the Earth represents a \(\sim\)270 K black body; hence IR above 3 microns will come from the Earth). There may also be differences in the kinds and distributions of clouds between day and night, so we use only nighttime IR data. Combined GOES and other geostationary satellites give snapshots of the entire sub-polar Earth twice an hour[10]. However, the IR pixel size is not ideal (4 km x 4 km vs 1 km x 1 km for EUSO or OWL) and they have a limited number of wavelength windows. A new generation of GOES satellites (GIFTS) is planned for launch in 2006[20]. These will have imaging IR Fourier spectrometers, so that a full spectrum of IR light will be available for each pixel. This should make cloud height determination much more precise. At present, however, polar orbiting satellites that carry instruments such as MODIS have the best spatial and wavelength resolution (1 km x 1 km resolution and 36 spectral bands ranging from 0.4 to 14.4 microns).
As discussed below, this information can be used to determine the height of the clouds more accurately. ## IR Transmission Through the Atmosphere and the SST IR is readily absorbed by water vapor and trace elements in the Earth's atmosphere at most wavelengths. However, there are a number of windows, notably 3-5 microns and 8-13 microns (see Fig. 1), that allow more efficient detection of IR from the Earth's surface. MODIS and GOES satellites observe upwelling radiation near 11 microns, near the peak of the 270 K black-body spectrum. In the absence of clouds and aerosols, the intensity of 11 micron IR is directly related to the surface temperature and surface emissivity. Ocean water is a good black body, hence clear, cloud-free pixels at 11 microns can be used to measure the sea surface temperature or SST. This "product" is of great interest to oceanographers and climatologists[11]. The SST, while having geographical and long-term temporal variations (cf. the "El Niño" effect), is quite stable in the short term. Its determination in a pixel can be verified using the extensive sea-buoy and freighter data base maintained by NOAA. What is required is knowledge that the pixel under consideration is truly "cloud-free". The MODIS group has developed a set of algorithms (described below) to determine such cloud-free pixels[12]. They produce a "MODIS cloud-mask product" with four confidence indexes (high confidence cloud free, confident cloud free, probably cloudy and cloudy). Note that the SST cloud mask does not differentiate between high and low clouds or the presence of aerosols detected as clouds. This product has been checked by comparing with the SST derived from surface measurements. A good example of this is the SST product for the Gulf of Mexico. It turns out that the Gulf of Mexico is essentially a perfect isotherm from the months of June to September[13] (less than 0.1 K SST variation). This makes it a perfect background for checking the cloud mask, since even thin clouds will produce a lower effective SST temperature relative to the uniform cloud-free pixels. Below, we use the MODIS cloud mask to determine the OWL cloud-free aperture efficiency. ### Cloud Detection Algorithms A number of algorithms have been developed to select out cloudy pixels[14]. They are based on four basic ideas: a. temperature threshold; b. spatial coherence; c. temporal coherence; d. temperature differences for adjacent IR bands. We consider these in turn. Figure 1: **Earth radiance in the mid- to far-infrared spectrum. The various curves give a range of expected infrared radiances for a variety of typical atmospheres and surface temperatures. A 300 K blackbody curve is provided to permit visual comparison of path length absorption (from reference [18]).** ### a. Temperature Threshold As indicated above, sea-water temperature is stable, with small diurnal variation, and extensive surface data bases exist. Cloudy pixels will produce a lower temperature, with higher clouds appearing cooler than lower clouds (the lapse rate of the atmosphere is approximately 6 degrees K/km). One can establish a threshold temperature, typically for the 11 micron IR window, \(\rm T_{b}\), such that pixels with \(\rm T<T_{b}\) are considered cloud contaminated. This threshold can be dynamically adjusted, either by comparing to the local geographical data base, or using the fact that any large enough cloud scene will have enough clear pixels that these will show up as a high-temperature peak in a histogram (see Fig. 2), as in the sketch below.
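A minimal sketch of this dynamic threshold test, assuming the input is an 11 micron brightness-temperature image in kelvin; the offset below the clear-sky peak is an illustrative tuning parameter, not a value prescribed by the MODIS algorithms.

```python
import numpy as np

def dynamic_cloud_threshold(T11, offset=4.0, bin_width=1.0):
    """Flag cloud-contaminated pixels in an 11-micron brightness
    temperature image T11 (kelvin).

    Clear ocean pixels pile up in a warm peak of the temperature
    histogram; any pixel colder than that peak by more than
    `offset` kelvin is flagged as cloud contaminated.
    """
    bins = np.arange(T11.min(), T11.max() + bin_width, bin_width)
    counts, edges = np.histogram(T11, bins=bins)
    # the warmest well-populated bin approximates the clear-sky SST peak
    populated = np.nonzero(counts > 0.01 * counts.max())[0]
    T_clear = edges[populated[-1]]
    T_b = T_clear - offset          # dynamically adjusted threshold
    return T11 < T_b                # True = cloud contaminated
```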
### b. Spatial Coherence Clouds often have large variations in effective IR temperature over small distance scales, due to changes in height and emissivity. An array of pixels (arrays as small as 3 x 3 are effective) can be used to determine a mean \(\rm T\) and a standard deviation \(\sigma\). If \(\sigma\) is larger than some threshold \(\sigma_{\rm thr}\) determined from arrays of clear pixels (as defined by the threshold temperature test, for example), then the entire array is flagged as potentially cloudy. ### c. Temporal Coherence GOES and other geostationary satellites can check the stability of a pixel temperature as a function of time. Over water, clear pixels will show only the small diurnal variations. Polar orbiting MODIS satellites typically return to the same scene about 9 times a day and similar criteria can be applied, but over considerably larger time intervals. ### d. Temperature Differences Radiances in adjacent IR windows in the 11-14 micron region come from different altitudes in the troposphere due to increasing CO\({}_{2}\) absorption with increasing wavelength. The 11 micron window sees surface radiances clearly, while windows near 14 microns are only sensitive to IR from high cloud tops near 10 km, since radiation from below is absorbed. Distributions of temperature differences between such windows, \(\Delta T_{ij}\), can be studied to establish a "clear" range, and threshold rejection can be performed. Alternatively, these temperature differences can be used in the study of spatial and temporal coherence. _This \(\Delta T_{ij}\) test is particularly sensitive to the presence of high thin clouds, which can be missed in a simple temperature threshold test._ All of these tests can be combined to generate a cloud mask such as the MODIS SST product. ### Determination of Cloud Height Single, optically-thick clouds are assumed to be in thermal equilibrium with the surrounding atmosphere. The 6 deg/km temperature lapse rate in the troposphere (or more precisely, a measurement of the P(T) profile using radiosonde data) could then allow us to determine the cloud-top height from a single measurement of the 11 micron IR radiance, if the cloud's emissivity were known and under the assumption that all radiation is emitted at the cloud-top surface. Unfortunately, cloud emissivities vary depending on cloud composition (ice versus water droplets, for example) and even with ice crystal structure. An alternative method called CO\({}_{2}\) slicing[14] has been developed to deal with this problem. ### CO\({}_{2}\) Slicing Algorithm The MODIS team has developed an algorithm for cloud height determination based on the following assumptions[15]: * The cloud-top emissivity is a slow function of wavelength in the IR. * A detectable cloud (typically with optical depth \(>0.1\)) can be represented by radiation from the cloud top only (this is necessary to make the mathematical analysis tractable). Above 11 microns, CO\({}_{2}\) absorption reduces IR throughput from the surface. Taking ratios of IR measurements in adjacent windows in this wavelength range both removes the dependence on emissivity and increases sensitivity to cloud-top height; a schematic implementation is sketched below. The MODIS team states that a combination of this technique and the temperature difference technique allows them to resolve cloud heights even when overriding thin cirrus clouds are present. High thin cirrus clouds are stated to be detectable down to an OD of \(\sim\)0.1.
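The retrieval reduces to a one-dimensional search over candidate cloud-top pressures: the ratio of cloudy-minus-clear radiance differences in two nearby windows is independent of the emissivity-fill factor \(f\) and depends only on \(P_{c}\) (the integrals are derived in Appendix A). The sketch below assumes transmittance profiles \(\tau(\lambda,P)\) and a temperature profile T(P) are supplied externally (e.g. from a radiative-transfer model and radiosonde data); it is a schematic of the method, not the MODIS production code, and all names are illustrative.

```python
import numpy as np

def planck(lam, T):
    """Planck spectral radiance B(lambda, T); lam in metres, T in kelvin."""
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * c**2 / lam**5 / (np.exp(h * c / (lam * k * T)) - 1)

def co2_slicing_cloud_pressure(ratio_meas, lam1, lam2, P, T, tau1, tau2):
    """Retrieve cloud-top pressure P_c by CO2 slicing.

    ratio_meas : measured dR(lam1)/dR(lam2), the cloudy-minus-clear
                 radiance differences in two adjacent windows
    P          : (k,) pressure grid from the surface P[0] upward
    T          : (k,) temperature profile T(P), e.g. from radiosondes
    tau1, tau2 : (k,) transmittance tau(lam, P) to the top of the
                 atmosphere for the two windows
    """
    B1, B2 = planck(lam1, T), planck(lam2, T)
    best_Pc, best_err = None, np.inf
    for j in range(2, len(P) + 1):       # candidate cloud-top levels
        # f-independent model ratio of int_{Ps}^{Pc} tau dB/dP dP
        num = np.trapz(tau1[:j] * np.gradient(B1[:j], P[:j]), P[:j])
        den = np.trapz(tau2[:j] * np.gradient(B2[:j], P[:j]), P[:j])
        if den == 0.0:
            continue
        err = abs(num / den - ratio_meas)
        if err < best_err:
            best_Pc, best_err = P[j - 1], err
    return best_Pc
```

In the production algorithm, as described above, the match is performed over a series of nearby window pairs and \(P_{c}\) is taken as the best fit for the whole series.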
Note, however, that since UV light is scattered by clouds more effectively than IR[21], this corresponds to a near-UV OD threshold more like 0.2 to 0.3. The physical and mathematical basis for the CO\({}_{2}\) slicing technique is presented in Appendix A. ### Large-scale Cloud Distribution An overall view of the problem posed by clouds can be obtained by examining the HRES data set (this was an imaging IR satellite preceding the MODIS era) averaged over 2 deg by 3 deg latitude-longitude bins over latitude ranges up to +/- 60 degrees[17]. We use data averaged over 6 years for the months of February and July (representing possible seasonal variations) and broken down into low (\(<2\) km), medium (2 to 8 km) and high (8 to 17 km) cloud incidence, as determined by the CO\({}_{2}\) slicing algorithm. Note that if multiple clouds are present, the data reports the highest cloud height. Fig. 3 shows the incidence of various types of clouds as a function of latitude for the two seasons for latitudes between +20 and -20 degrees. Table 1 summarizes the data for latitudes between +60 and -60 degrees. Several general trends emerge. * Low clouds are present at all latitudes at the 40-50% level. * Medium-height clouds occur independently of latitude with an incidence of about 20% and then rise to 25% at high latitudes. * High clouds are somewhat more prevalent near the equator, but the incidence declines very slowly and remains at the 12 to 15% level for all orbital inclinations. * Seasonal variations, integrated over the orbital paths, are small. **Fig. 3a - Percent Incidence of Clouds (top - high (\(>\)8 km), middle - medium (2-8 km), bottom - low (\(<\)2 km)) for 20 to 10 deg N latitudes (each trace is a 2 degree step in latitude). X axis is longitude in units of 0.1 degree.** **Fig. 3b - Percent Incidence of Clouds (top - high (\(>\)8 km), middle - medium (2-8 km), bottom - low (\(<\)2 km)) for 10 to 0 deg N latitudes. X axis is longitude in units of 0.1 degree.** **Fig. 3c - Percent Incidence of Clouds (top - high (\(>\)8 km), middle - medium (2-8 km), bottom - low (\(<\)2 km)) for 0 to -10 deg S latitudes. X axis is longitude in units of 0.1 degree.** While there are areas on the Earth's surface which are relatively free of clouds (such as the South Pacific), integrated over all longitudes, the latitude dependence of cloud incidence is quite slow. The ISS orbit might have somewhat fewer high and low clouds and somewhat more mid-level clouds on average. Mid-level clouds certainly are the most problematic as they occur where most of the EAS develop, but the fraction of data taken over land (where cloud finding is much more difficult and the CO\({}_{2}\) slicing method less reliable) and over light-polluted areas is also an issue. The only significant way to decrease cloud incidence is to go into geostationary orbit over an area like the South Pacific. This is not practical at the present level of technology, since it would require \(\sim\)100 m diameter optical apertures. ### High Resolution Cloud Distributions While the 2 deg x 3 deg averaged HRES data is useful to give a general picture of the problem, it neglects the effect of correlations between different cloud types and is too coarse to convolve with the CR track-length distribution so as to determine the trigger aperture. To investigate this we take the most reliable remote-sensing based definition of a cloud-free pixel (derived from MODIS satellite data[9]) and create realistic cloud masks.
We then throw Monte Carlo events into this real scene and require that the resultant track not cross any cloud-contaminated pixels and have a clear area (road) around it. In the case of scenes with thick, continuous cloud layers, we expect the efficiency to be very close to the ratio of clear to total pixels. For highly striated, chaotic or spottily dispersed clouds, the efficiency depends on the topology, the fill factor and the length of track. Specifically, to study the interaction of the track length distribution with the scale of clear spaces between clouds, we use the 1 km x 1 km MODIS SST cloud mask product from actual instantaneous cloud scenes[18]. These nighttime scenes are approximately 2200 \(\times\) 2000 km\({}^{2}\) and are much larger than the \(\sim\)400 km radius OWL footprint. We take the center of each scene and generate Monte Carlo events randomly throughout a footprint (see Fig. 4 for a typical distribution). The MODIS cloud mask product employs algorithms that incorporate the various IR band measurements along with ancillary data, e.g. land/water maps, to determine four clear-sky confidence levels. Numerically, a 99% confidence that a 1 \(\times\) 1 km\({}^{2}\) pixel is cloud-free is denoted as _high-confidence clear_, a 95% confidence is considered _clear_, a 66% cloud-free confidence is considered _probably cloudy_, and a pixel is considered _cloudy_ if the cloud-free confidence is less than 66%. For this study, we form a binary cloud flag for a given pixel by assigning a _high-confidence clear_ or a _clear_ mask value as CLEAR and a pixel with a _probably cloudy_ or a _cloudy_ mask value as CLOUDY. Given that the _probably cloudy_ designation corresponds to a cloud-free probability of 66%, the inclusion of this mask value under the CLOUDY flag could, in principle, artificially enhance the level of cloudiness in a MODIS scene. However, the fraction of pixels with a _probably cloudy_ mask value in a particular MODIS scene is approximately 10% for the scenes considered in this study, with the location of the _probably cloudy_ pixels highly correlated to the edges of _cloudy_ regions. Thus, the conservative assignment of these as CLOUDY conforms to the goal of this study: the determination of the fraction of observed UHECR airshowers that occur in definitely cloud-free areas of the viewed atmosphere. The Monte Carlo events used in this cloud study assumed \(10^{20}\) eV protons as the primaries and were distributed randomly and uniformly in position and isotropically in angular incidence. Fully fluctuated airshowers were generated in 1 \(\mu\)s time steps, with the subsequent air fluorescence and scattered Cherenkov light attenuated by the atmosphere in a wavelength-dependent fashion. In the Monte Carlo, we assume that the atmosphere is cloud free. The OWL instruments were modeled with the 2002 baseline design[2] assuming 1000 km orbits and 500 km satellite separation. Events were accepted if they passed the nominal trigger criteria of having at least 4 detector pixels with an integral signal of at least 5 photo-electrons (in each pixel) for both OWL eyes. The event sample included 1674 events, and the track length was defined as the portion of the airshower viewed by both instruments. The resultant 3-dimensional track length distribution is asymmetric with a mean value of approximately 16 km and a most probable value near 8 km.
The two-dimensional, xy-projection of the track lengths yields an asymmetric distribution with a mean value of approximately 15 km and a most probable value near 5 km. The xy-projected distribution has a range from slightly more than 0 km, corresponding to nearly vertical events, to approximately 125 km in projected length. The xy-projected track lengths did not include the modification of projecting onto a sphere, as this is a minor effect for the spot size of approximately 400 km radius considered in this study. The xy-projected track lengths were then superimposed on a sample of MODIS data scenes that provided a pixel-by-pixel cloud mask with approximately 1 km spatial resolution. The MODIS data was from the \(15^{\text{th}}\) day of the odd months (Jan, Mar, etc.) in 2001, and at least 12 different MODIS near-equatorial measurements from each date were incorporated into this study. The center of each MODIS scene was selected for the OWL track superposition, as each data scene was larger than the approximate 800 km diameter OWL ground spot size. For each projected OWL track, the nearest distance to a MODIS pixel of each cloud mask designation (confident cloud, probable cloud, confident clear, high-confident clear) was recorded. Thus, the fraction of tracks with a cloud some minimum distance away could be determined for each MODIS scene. Figure 5: Distribution of Fractional Clear Aperture for 85 Randomly Selected MODIS Cloud Scenes Along the Equator. On average, only 6.5% of incident cosmic ray airshower tracks have a completely clear aperture. The least restrictive "clear aperture" is defined as having tracks with no clouds closer than 1 km or one pixel. Since the extended OWL optical spot size may split the signal between pixels, a more realistic aperture cut is defined for tracks with no clouds closer than 3 pixels. The more realistic "clear aperture" ranges from 0 to \(\sim\)50% over the 85 randomly selected cloud scenes considered, _with the mean at 6.5%_ and the median at 3.0% (see Fig. 5). Fig. 6 shows the distribution of the "clear" fraction as a function of latitude and longitude. There is no strong evidence for geographical correlation. Fig. 7 shows the distribution of cloudy and clear pixels, and the correlation between the fraction of clear pixels and the fractional "clear" aperture for tracks. **Fig. 4. Portion of MODIS SST Cloud Mask. The projected \(10^{20}\) eV simulated tracks, which trigger OWL, are shown superimposed as white lines. This view corresponds to approximately one quadrant of the square defined by the embedded quasi-circular OWL footprint. Light Blue - High-Confidence Cloud Free, Dark Blue - Cloud Free, Red - Probably Cloudy, Green - Cloudy. Note MODIS Raster Scanning Artifacts. For this particular Cloud Scene, only 19.9% of the simulated track sample have a "Clear" Aperture as defined by no cloudy or probably cloudy MODIS pixels within 3 km of a track.** As expected from climatological studies, the overall cloudy pixel probability averages \(\sim\)80%, and the "clear" track aperture is significantly smaller than the clear pixel fraction, though it becomes approximately proportional for relatively clear cloud scenes. Preliminary results were based on half the number of cloud scenes reported here, but doubling the number did not significantly change the distributions.
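Computationally, this clear-aperture bookkeeping amounts to a distance transform of the binary cloud mask followed by a minimum over each projected track. A minimal sketch, assuming the tracks have already been rasterized to 1 km pixel coordinates (the rasterization and the MODIS I/O are not shown), is:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def clear_aperture_fraction(cloud_mask, tracks, min_dist_km=3.0):
    """Fraction of projected tracks with no CLOUDY pixel within
    min_dist_km of any point on the track.

    cloud_mask : 2D bool array on a ~1 km grid, True = CLOUDY
                 (MODIS 'cloudy' or 'probably cloudy')
    tracks     : list of (N_i, 2) integer arrays of (row, col)
                 pixel coordinates of the xy-projected tracks
    """
    # distance (in pixels, i.e. ~km) from each pixel to the nearest cloud
    dist_to_cloud = distance_transform_edt(~cloud_mask)
    n_clear = sum(
        dist_to_cloud[pts[:, 0], pts[:, 1]].min() >= min_dist_km
        for pts in tracks
    )
    return n_clear / len(tracks)
```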
This conservative estimate of the "cloud-free" track efficiency results in a residual aperture of only 6.5% of the full geometric aperture. For the EUSO detector, its \(1.7\times 10^{5}\) km\({}^{2}\) footprint becomes effectively \(1.1\times 10^{4}\) km\({}^{2}\), which implies a time-averaged aperture (assuming a 14% on-time set by the requirement of no moon and no sun) of 1540 km\({}^{2}\) (smaller than the Auger ground array area of 3000 km\({}^{2}\)). For the OWL detector, its \(5.4\times 10^{5}\) km\({}^{2}\) footprint reduces to 4900 km\({}^{2}\), and while there will certainly be a significant number of "golden" cloud-free events near the OWL detector's low energy threshold, it is clearly necessary to deal with clouds in a less restrictive way to regain the lost aperture. Figure 7: **Top: Distribution of Cloudy and Clear Pixels; Bottom: Correlation between fractional "clear" aperture and fraction of clear pixels.** ### Low Cloud Incidence Since low clouds (here defined as \(<\)2 km) contribute about half of the total cloud incidence (see Table 1) and EAS typically have their shower maxima above 2 km, we might expect a significant improvement in "clear" aperture if we are willing to live with such clouds. For monocular experiments, such as EUSO, their presence may even be helpful, since the UV albedo from low, dense water-vapor clouds is much higher than from sea or land. The reflected Cherenkov light from the cloud top will produce a marker which can be used to improve the geometrical reconstruction of the track, _if the cloud height is known from some other measurement (LIDAR return, for instance)_. On the other hand, knowing that the clouds are indeed low and that there are no over-riding high thin clouds is a much less precise proposition than knowing that there are no clouds at all in a pixel. We are working with the MODIS team to produce a "clear or low-cloud" product, based on 11 to 13 micron temperature differences, the SST cloud mask, and the derived cloud-top temperature and pressure. For the purpose of getting a quick look, however, we use 11 micron GOES data[19] to find the effective cloud-top temperature for a particular day and hour. We crudely determine the SST from the distribution of warmest pixels in a cloud scene and assume a typical 6 deg/km lapse rate. For the data set under consideration, clear-pixel sea surface temperature corresponds to T\(>\)292 K, so that a 2 km cloud would have T=280 K, while clouds near 10 km would have T=230 K or cooler (see Fig. 2). Figs. 8a and 8b show the distribution of cloud-top temperatures as determined in each 4 km by 4 km GOES 11 micron pixel as a function of latitude and longitude for an 800 by 1800 km swath near the equator. Fig. 8a shows a transition from an area of low clouds to an area dominated by high clouds, while Fig. 8b shows an area with rapid variation between high and low clouds. Figure 8a: **Distribution of GOES Pixel Temperatures for 800 km (latitude) and 1800 km (longitude) along the equator. Figure shows transition from low, warm clouds to high cool clouds back to low clouds again. Temperature is along Z axis, latitude along X axis, and longitude along Y axis.** We can also examine the distribution of pixel temperatures in each OWL aperture footprint around the equator. The ratio of clear (T\(>\)292 K) and clear or low cloud (T\(>\)280 K) pixels to the total number of pixels for each footprint gives a crude upper limit on the "clear or low cloud" efficiency, as sketched below.
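This quick-look classification reduces to two counts per footprint. A minimal sketch, using the 292 K clear-sky temperature and 6 deg/km lapse rate quoted above (function and parameter names are illustrative):

```python
import numpy as np

def footprint_efficiencies(T11, T_clear=292.0, lapse_rate=6.0,
                           low_cloud_top_km=2.0):
    """Crude upper limits on the clear and clear-or-low-cloud pixel
    fractions for one footprint of 11-micron GOES brightness
    temperatures T11 (kelvin).

    With a ~6 K/km lapse rate, a 2 km cloud top sits ~12 K below the
    clear-sky SST, i.e. at 280 K for the 292 K scene in the text.
    """
    T_low = T_clear - lapse_rate * low_cloud_top_km   # 280 K here
    n = T11.size
    clear = np.count_nonzero(T11 > T_clear) / n
    clear_or_low = np.count_nonzero(T11 > T_low) / n
    return clear, clear_or_low
```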
For the particular day and hour considered, the clear efficiency was 31%, while the "clear or low cloud" efficiency was 70%, averaged over the equator. Over five contiguous 1800 km (longitude) x 800 km (latitude) steps, these efficiencies are 18 and 70%, 43 and 73%, 33 and 61%, 14 and 39%, and 28 and 83% respectively. These efficiencies do not take into account the track-length-cloud-topology interaction. While much more work is needed, particularly with the new and more reliable MODIS data, it seems likely that including low clouds will increase the useful geometrical aperture by about a factor of two. High thin cirrus clouds with optical depth of less than 0.3 may be difficult to detect over opaque low clouds. While such overlying clouds will certainly affect the energy and shower profile determination, the trigger efficiency is not affected[4], except very near detector threshold. Their presence can bias the determination of cloud-top pressure for low clouds to smaller values, however. ## Conclusions While space-based experiments have enormous geometrical apertures, the requirement of cloud-free viewing proposed in this paper imposes very stringent reductions. HRES data averaged over coarse bins indicates that \(\sim\)75% of the time a cloud of some kind will be detected in a pixel. High spatial resolution MODIS data leads to a similar conclusion. The requirement of a cloud-free region of greater than or equal to 3 km (3 pixels) around a track reduces the geometrical aperture by more than an order of magnitude, to 6.5%. If complications due to the interaction of light from cosmic ray EAS with clouds are to be avoided, intrinsic geometrical apertures need to be very large. For example, assuming a 6.5% cloud-free fraction and the EUSO estimate of 12% on-time (34% darkness, 50% moon-free, 80% aurora-free) leads to a 0.8% overall efficiency. Because ground arrays can operate for a decade or more, while space-based detectors have typical lifetimes of three years, an additional factor of three is required to match the integrated exposure over the lifetime of the experiments. An order of magnitude larger aperture than the Auger ground detector (7000 km\({}^{2}\) sr) would thus require an intrinsic geometrical aperture of \(2.7\times 10^{7}\) km\({}^{2}\) sr. We conclude that to fully exploit the space-based fluorescence technique, one must confront the issue of EAS-cloud interactions. This will be discussed in a paper presently in preparation. ## Appendix A - Basis of the CO\({}_{2}\) Slicing Method Consider a single-level optically dense cloud in a single pixel. The cloud does not necessarily fill the pixel, and the product of the filling fraction times emissivity is defined as \(f\). In that case, the total upward-welling radiance at the MODIS detector, R(\(\lambda\)), can be written as: \[R(\lambda)=R_{\rm surface}(\lambda,T_{s})(1-f)+fR_{\rm cloud}(\lambda,T_{c})+R_{\rm below}(1-f)+R_{\rm above}\] where R(\(\lambda\), T) is proportional to the Planck function, T\({}_{s}\) and T\({}_{c}\) are the surface and cloud-top temperatures, while R\({}_{\rm below}\) and R\({}_{\rm above}\) are the integrated column radiances from the atmosphere below and above the cloud top. Note that in this approximation, atmospheric column radiances directly below the cloud are assumed to be totally absorbed by the cloud, and the cloud emission occurs at the cloud top only.
We can then define the clear radiance \(R_{\rm clear}(\lambda)\) as \[R_{\rm clear}(\lambda)=R_{\rm surface}(\lambda,T_{s})+R_{\rm below}+R_{\rm above}.\] Then \[\Delta R=R(\lambda)-R_{\rm clear}(\lambda)=-fR_{\rm surface}(\lambda,T_{s})+fR_{\rm cloud}(\lambda,T_{c})-fR_{\rm below},\] where, more precisely, \[R_{\rm surface}(\lambda,T_{s})=B(\lambda,T(P_{s}))\,\tau(\lambda,P_{s}),\qquad R_{\rm cloud}(\lambda,T_{c})=B(\lambda,T(P_{c}))\,\tau(\lambda,P_{c}),\] \[R_{\rm below}=\int_{P_{s}}^{P_{c}}B(\lambda,T(P))\,\frac{d\tau}{dP}\,dP,\] where B is the Planck distribution function and \(\tau(\lambda,P)\) is the atmospheric absorption from pressure P to the top of the atmosphere. T(P) is the temperature profile of the atmosphere. Integrating by parts, one finds \[\Delta R=f\int_{P_{s}}^{P_{c}}\tau(\lambda,P)\,\frac{dB(\lambda,T(P))}{dP}\,dP.\] If observations are made at two windows with similar \(\lambda\), then one can assume that \(f\) is independent of wavelength, and \(\Delta R(\lambda_{1})/\Delta R(\lambda_{2})\) depends only on \(P_{c}\) if \(\tau(\lambda,P)\) and T(P) are known. Now \(\Delta R\) is a measured quantity, since one can find a clear-air pixel close to the cloudy pixel under consideration using the SST cloud mask. The RHS of the ratio equation can then be calculated and compared to the measured ratio \(\Delta R(\lambda_{1})/\Delta R(\lambda_{2})\) for a series of nearby wavelengths. \(P_{c}\) is then the best match for the whole series. Note that once \(P_{c}\) is known, \(f\) can be calculated as well. ## Acknowledgements We would like to thank Bob Streitmatter for supporting this work and for many stimulating discussions. Bill Ridgeway and Dennis Chesters were generous with their time in pointing us to relevant MODIS and GOES data and reformatting data to fit our needs. One of us (P.S.) would like to thank the John Simon Guggenheim Foundation and Universities Space Research Association for financial support. ## References * [1] Extreme-Universe Space Observatory (EUSO) Concept Study Report, NASA AO 01-OSS-03-MIDEX Concept Study Report, 2003. * [2] "Orbiting Wide-Angle Light-Collector (OWL)", White paper submitted to NASA SEUS, 2002; http://www.physics.ucla.edu/hep/owlalks/streitmatter.pdf. * [3] R.M. Baltrusaitis et al., "The Utah Fly's Eye Detector", Nuclear Instruments and Methods, A240, pp. 410-428, 1985. * [4] T. Abbu-Zayad, personal communication; T. Abbu-Zayad, "Cloud Simulations for the OWL detector", unpublished, 2002; E. M. Rosa and J.K. Krizmanic, "A Study of Integral Fluorescence through Different Models of Clouds", unpublished, 2000. * [5] J. K. Krizmanic and the OWL Collaboration, Proceedings of the 27th International Cosmic Ray Conference, Vol. 2, p. 861, 2001. * [6] R. H. Couch et al., _Opt. Eng._, 30, 88-95, 1991. * [7] B. E. Schutz, "Spaceborne laser altimetry: 2001 and beyond", in Plag, H.P. (ed.), Book of Extended Abstracts, WEGNER-98, Norwegian Mapping Authority, Honefoss, Norway, 1998; [http://icesat.gsfc.nasa.gov/](http://icesat.gsfc.nasa.gov/) * [8] D. M. Winker and B. A. Wielicki, "The PICASSO-CENA Mission", in Sensors, Systems, and Next Generation Satellites, H. Fujisada, Ed., _Proc. SPIE_, Vol. 3870, pp. 2636, 1999. * [9] M. D. King et al., "Remote sensing of cloud, aerosol and water vapor properties from the Moderate Resolution Imaging Spectrometer (MODIS)", IEEE Trans. Geosci.
Remote Sensing, 30, 2, 1992; [http://modis.gsfc.nasa.gov/about/design.html](http://modis.gsfc.nasa.gov/about/design.html) * [10] W. Paul Menzel and James F. W. Purdom, "Introducing GOES-I: The First of a New Generation of Geostationary Operational Environmental Satellites", Bulletin of the American Meteorological Society, Vol. 75, No. 5, May 1994; [http://rsd.gsfc.nasa.gov/goes/](http://rsd.gsfc.nasa.gov/goes/) * [11] O. B. Brown and P. Minnett, "MODIS Infrared Sea-surface Temperature Algorithm", Version 2.0; M.T. Chahine, "Infrared Sensing of Sea-surface Temperature" in _Remote Sensing of Atmospheres and Oceans_, Academic Press, New York, p. 411, 1980. * [12] S. A. Ackerman et al., Journal of Geophysical Research-Atmospheres, 103(D24), 32141-32157, 1998. * [13] F.E. Muller-Karger, personal communication; Wick, G. A., W. J. Emery, and P. Schluessel, Journal of Geophysical Research, 97(C4), 5569-5595, 1992. * [14] Stumpf and Pennock, J. of Geophysical Research, 94(C10), 14,363, 1989. * [15] M.T. Chahine, J. Atmos. Sci., 31, 233, 1974; W. P. Menzel, D. P. Wylies and K.J. Strabata, J. Appl. Meteor., 31, 370, 1992. * [16] W.P. Menzies et al., "Cloud Top Properties and Cloud Phase Algorithm Theoretical Basis Document", Version 6, 2002. * [17] D.P. Wylie and W.P. Menzel, Jour. Clim., 12, 170, 1999. * [18] S. Ackerman et al., "Discriminating Clear-Sky from Cloud with MODIS Algorithm", Theoretical Basis Document (MOD35), Version 4, 2002; cloud scenes were provided by Bill Ridgeway of the Goddard MODIS team. * [19] Global composite data was provided by Dennis Chesters of the Goddard GOES team. * [20] "GIFTS - The New Millennium Earth Observing-3 Mission", _Proc. of IRS 2000: Current Problems in Atmospheric Radiation_, A. Deepak Publishing, Hampton, Virginia, 2001; [http://asd-www.larc.nasa.gov/GIFTS/abs.html](http://asd-www.larc.nasa.gov/GIFTS/abs.html) * [21] W.P. Menzel, personal communication.
Space-based ultra-high-energy cosmic ray detectors observe fluorescence light from extensive air showers produced by these particles in the troposphere. Clouds can scatter and absorb this light and produce systematic errors in energy determination and spectrum normalization. We study the possibility of using IR remote sensing data from MODIS and GOES satellites to delimit clear areas of the atmosphere. The efficiency for detecting ultra-high-energy cosmic rays whose showers do not intersect clouds is determined for real, night-time cloud scenes. We use the MODIS SST cloud mask product to define clear pixels for cloud scenes along the equator and use the OWL Monte Carlo to generate showers in the cloud scenes. We find the efficiency for cloud-free showers with closest approach of three pixels to a cloudy pixel is 6.5%, exclusive of other factors. We conclude that defining a totally cloud-free aperture reduces the sensitivity of space-based fluorescence detectors to unacceptably small levels.
# Casimir Effect on the Worldline

CERN-TH/2003-060, UNITU-THEP-05/03, HD-THEP-03-16

Holger Gies\({}^{b,c}\), Kurt Langfeld\({}^{a}\), Laurent Moyaerts\({}^{a}\)

\({}^{a}\) Institut für Theoretische Physik, Universität Tübingen, D-72076 Tübingen, Germany
\({}^{b}\) CERN, Theory Division, CH-1211 Geneva 23, Switzerland
\({}^{c}\) Institut für Theoretische Physik, Universität Heidelberg, D-69120 Heidelberg, Germany

March 2003

## 1 Introduction

The Casimir effect [1] has recently been under intense study, experimentally [2] as well as theoretically (for recent comprehensive reviews, see [3]). In fact, we are currently witnessing a transition of the Casimir effect from a pure fundamental quantum effect, interesting in its own right, via an experimentally challenging problem, to a phenomenon becoming relevant to applied physics such as nanotechnology [4]. Moreover, the Casimir effect has been suggested as an experimentally powerful tool for investigating new physics beyond the standard model [5].

Considerable progress has been made in recent years as far as the Casimir effect of real (rather than idealized) conductors is concerned: the effects of finite conductivity, finite temperature, and surface roughness are theoretically well under control for the current experimental realizations. Even the dependence of the Casimir force on the isotopic composition of the interacting bodies has been studied recently [6]. By contrast, the dependence of the Casimir force on the geometry of the interacting bodies is neither completely understood nor quantitatively satisfactorily under control. Except for a small number of analytically solvable geometries, one has to rely on approximations, among which the "proximity force approximation" [7, 8] represents the most widely used method. Roughly speaking, the proximity force approximation maps the Casimir effect of an arbitrary geometry onto Casimir's parallel-plate configuration, thereby neglecting curvature and tilt effects in an uncontrolled manner. In fact, the current limitations for a quantitative comparison of theory and experiment arise essentially from an estimated 1% error of the proximity force approximation.

The basic obstacles against improving this situation are mainly technical in nature and partly fundamental. Standard strategies perform the Casimir calculations in two steps: first, the mode spectrum of quantum fluctuations in a given background geometry has to be identified; secondly, the Casimir energy is obtained by summing up (tracing over) the spectrum. The first step is obviously increasingly difficult the more complex a given geometry is; without a high degree of symmetry, even the use of standard numerical techniques is rather limited. The second step suffers from the same problems, but is moreover complicated by the fact that the mode sum is generally ultraviolet divergent. The divergencies have to be analyzed and, if possible, removed by renormalization of physical parameters. Not only is the handling of these divergencies technically (and numerically) challenging, but the classification of divergencies is also still under intense debate [9, 10, 11].

In this work, we propose a method that has the potential to solve these technical problems. Moreover, it is embedded in perturbative quantum field theory with its clear and unambiguous renormalization program.
Our method is based on the "string-inspired" worldline formalism in which perturbative \(N\)-point amplitudes are mapped onto quantum mechanical path integrals over closed worldlines [12] (for a recent review, see [13]). The technical advantages arise from the fact that the mode spectrum and its sum are not computed separately but all at once. These worldline integrals can conveniently be calculated with Monte-Carlo methods (_worldline numerics_) with an algorithm that is completely independent of the Casimir geometry; in particular, no background symmetry is required. Whereas the worldline integral is finite, the ultraviolet divergencies occur in a "propertime" integral, roughly corresponding to an integral over the size of the worldlines. The divergencies can be found at small propertimes (\(\hat{=}\) small size \(\hat{=}\) ultraviolet), where a mapping to Feynman-diagram language is possible and the standard rules of renormalization can be applied.

In order to illustrate our method, we focus in this work on the calculation of Casimir forces between rigid bodies, induced by quantum fluctuations of a scalar field. The rigid bodies are modeled by background potentials \(V(x)\) (mainly of \(\delta\)-function type), which allow us to approach the idealized limit of Dirichlet boundary conditions in a controlled way. As a benchmark test, we study the classic parallel-plate configuration in detail. Finally, we compute the Casimir forces between a plate and a cylinder as well as the experimentally highly relevant case of a plate and a sphere, both in the idealized Dirichlet limit. Here we find clear signals of curvature effects if the distance between the bodies is roughly a few percent of the cylinder/sphere radius or larger. This scale characterizes the limit of quantitative accuracy of the proximity force approximation.

We developed the technique of worldline numerics in [14], and it has successfully been applied to the computation of quantum energies or actions induced by scalar or fermion fluctuations in electromagnetic backgrounds [14, 15, 16]. As for any numerical method, possible finite-size or discretization errors have to be analyzed carefully. In this respect, the idealized Casimir problem turns out to be most challenging, because the background potentials with their \(\delta\)-like support affect the quantum fields on all scales. Therefore, we have to make sure that our worldline numerics operates sufficiently close to the "continuum limit" (propertime continuum in our case). We dedicate a whole section (Sect. 3) to this question, also relevant for further applications of worldline numerics, and present a number of new and efficient algorithms for the generation of Gaussian distributed closed-loop ensembles.

Though the heart of our method is intrinsically numerical, we would like to emphasize that the worldline technique offers an intuitive approach to quantum phenomena. Particularly for Casimir forces between rigid bodies, many features such as the sign of the interaction or curvature effects can easily be understood when thinking in terms of worldline ensembles (_loop clouds_).

The paper is organized as follows: the next section provides a brief introduction to the worldline approach to the Casimir effect. Section 3 describes efficient methods for the generation of loop ensembles. The reader who is mainly interested in Casimir phenomenology may skip this section.
Section 4 provides an intuitive understanding of rigid-body Casimir forces in the light of the worldline language. Our numerical findings for the rigid-body Casimir force for several geometries (plate-plate, plate-sphere, plate-cylinder) are presented in section 5.

## 2 Worldline techniques for Casimir configurations

### Framework

Let us discuss the formalism for the simplest case of a real scalar field \(\phi\) coupled to a background potential \(V(x)\) by which we describe the Casimir configuration. The field theoretic Lagrangian is \[\mathcal{L}=\frac{1}{2}\partial_{\mu}\phi\partial_{\mu}\phi+\frac{1}{2}m^{2}\phi^{2}+\frac{1}{2}V(x)\,\phi^{2}. \tag{1}\] The potential \(V(x)\) can be considered as a spacetime dependent mass squared, implying that it has mass dimension 2. In the absence of any further fields and couplings, the complete unrenormalized quantum effective action for \(V\) is \[\Gamma[V] = \frac{1}{2}\mathrm{Tr}\ln\frac{-\partial^{2}+m^{2}+V(x)}{-\partial^{2}+m^{2}} \tag{2}\] \[= -\frac{1}{2}\int_{1/\Lambda^{2}}^{\infty}\frac{dT}{T}\int d^{D}x\,\left[\langle x|e^{-T(-\partial^{2}+m^{2}+V(x))}|x\rangle-\frac{1}{(4\pi T)^{D/2}}e^{-m^{2}T}\right]. \tag{3}\] Here we work in \(D=d+1\) Euclidean spacetime dimensions, i.e., \(d\) space dimensions. In Eq. (3), we have introduced the propertime representation of the \(\mathrm{Tr}\ln\) with UV cutoff \(\Lambda\) at the lower bound of the \(T\) integral.1 Interpreting the matrix element as a quantum mechanical transition amplitude in propertime \(T\), we can introduce the Feynman path integral, or worldline, representation,

Footnote 1: Other regularization techniques are possible as well, e.g., dimensional regularization, \((dT/T)\to\mu^{2\epsilon}(dT/T^{1-\epsilon})\); the propertime cutoff is used only for the sake of definiteness. For a pedagogical review of various regularization techniques in the Casimir context, see [17].

\[\int d^{D}x\,\langle x|e^{-T(-\partial^{2}+V(x))}|x\rangle=\int d^{D}x_{\rm CM}\,\mathcal{N}\int\limits_{x(0)=x(T)}\mathcal{D}x\;e^{-\int_{0}^{T}d\tau\,\left[\frac{1}{4}\dot{x}^{2}(\tau)+V(x(\tau))\right]}, \tag{4}\]

where the path integral extends over closed worldlines \(x(\tau)\) with common center of mass \(x_{\rm CM}\), and \(\mathcal{N}\) denotes the potential-independent normalization of the Gaussian velocity distribution. The last step of our construction that is crucial for numerical efficiency consists of introducing _unit loops_ \(y(t)\) which are dimensionless closed worldlines parameterized by a unit propertime \(t\in[0,1]\), \[y_{\mu}(t)=\frac{1}{\sqrt{T}}x_{\mu}(Tt)\quad\Longrightarrow\quad\int_{0}^{T}d\tau\,\dot{x}^{2}(\tau)=\int_{0}^{1}dt\,\dot{y}^{2}(t), \tag{7}\] where the dot always denotes differentiation with respect to the argument. Inserting the path integral representation (4) into the effective action (3) and using the unit loops \(y(t)\), we end up with the desired formula which is suitable for a numerical realization, \[\Gamma[V]=-\frac{1}{2}\frac{1}{(4\pi)^{D/2}}\int_{1/\Lambda^{2}}^{\infty}\frac{dT}{T^{1+D/2}}\,e^{-m^{2}T}\int d^{D}x\left[\left\langle\,W_{V}[y(t);x]\,\right\rangle_{y}-1\right].
\\tag{8}\\] Here and in the following we have dropped the subscript \"CM\" of the center-of-mass coordinate \\(x_{\\mu}\\) and introduced the \"Wilson loop\" \\[W_{V}[y(t);x]=\\exp\\left[-T\\int_{0}^{1}dt\\,V(x+\\sqrt{T}y(t))\\right], \\tag{9}\\] and \\[\\left\\langle W_{V}[y(t);x]\\right\\rangle_{y}=\\frac{\\int\\limits_{y(0)=y(1)} \\mathcal{D}y\\ W_{V}[y(t);x]\\,e^{-\\int_{0}^{1}dt\\,\\dot{y}^{2}/4}}{\\int\\limits_{ y(0)=y(1)}\\mathcal{D}y\\ e^{-\\int_{0}^{1}dt\\,\\dot{y}^{2}/4}} \\tag{10}\\] denotes the expectation value of an operator with respect to the path integral over unit loops \\(y(t)\\). This construction of Eq. (8) is exact and completely analogous to the one proposed in [14] for electromagnetic backgrounds; further details can be found therein. For time-independent Casimir configurations, we can carry out the time integration trivially, \\(\\int dx_{0}=L_{x_{0}}\\), where \\(L_{x_{0}}\\) denotes the \"volume\" in time direction, and define the (unrenormalized) Casimir energy as \\[\\mathcal{E}=\\Gamma/L_{x_{0}}. \\tag{11}\\] ### Renormalization The analysis of divergencies in Casimir calculations is by no means trivial, as the ongoing debate in the literature demonstrates [10, 11]. The reason is that divergencies in these problems can have different sources with different physical meaning. On the one hand, there are the standard field theoretic UV divergencies that can be mapped onto divergencies in a finite number of Feynman diagrams at a given loop order; only these divergencies can be removed by field theoretic renormalization, which is the subject of the present section. On the other hand, divergencies can arise from the modeling of the Casimir boundary conditions. In particular, idealized conditions such as perfectly conducting surfaces affect quantum fluctuations of arbitrarily high frequency; therefore, an infinite amount of energy may be required to constrain a fluctuating field on all scales. These divergencies are real and imply that idealized conditions can be ill-defined in a strict sense. The physically important question is whether these divergencies affect the physical observable under consideration (such as Casimir forces) or not. If not, the idealized boundary conditions represent a simplifying and valid assumption, and the removal of these divergencies can be justified. But if the observable is affected, the idealized conditions have to be dropped, signaling the strong dependence of the result on the physical details of the boundary conditions (e.g., material properties). Even though the worldline is an appropriate tool for analyzing both types of divergencies, we concentrate on the first type in this paper, leaving a discussion of the second for future work. In order to isolate the field theoretic UV divergencies, we can expand the proper-time integrand for small propertimes (high momentum scales). Since this is equivalent to a local gradient expansion in terms of the potential \\(V(x)\\) (heat-kernel expansion), each term \\(\\sim V(x)^{n}\\) corresponds to a scalar one-loop Feynman diagram with \\(n\\) external legs coupling to the potential \\(V(x)\\) and its derivatives, and with the momentum integration already performed (thanks to the worldline method). 
Using \\(\\int_{0}^{1}dt\\,y_{\\mu}(t)=0\\) and \\(\\int_{0}^{1}dt\\,\\langle y_{\\mu}(t)y_{\ u}(t)\\rangle_{y}=(1/6)\\delta_{\\mu\ u}\\), we find up to order \\(T^{2}\\), \\[\\int_{x}\\langle W_{V}-1\\rangle_{y} = -T\\int d^{D}x\\,V(x)-\\frac{T^{2}}{6}\\int d^{D}x\\,\\partial^{2}V(x) \\tag{12}\\] \\[+\\frac{T^{2}}{2}\\int d^{D}x\\,V(x)^{2}+{\\cal O}(T^{3}),\\] which should be read together with the propertime factor \\(1/T^{1+D/2}\\) in Eq. (8). The term \\(\\sim V(x)\\) corresponds to the tadpole graph. In the conventional \"no-tadpole\" renormalization scheme, the renormalization counter term \\(\\sim V(x)\\) is chosen such that it cancels the tadpole contribution completely. Of course, any other renormalization scheme can be used as well. The corresponding counter term can be fixed unambiguously by an analysis of the tadpole Feynman diagram in the regularization at hand. In \\(D<4\\) spacetime dimensions, there is no further counter term, since \\(V(x)\\) has mass dimension 2. The remaining terms of \\({\\cal O}(T^{2})\\) are UV finite in the limit \\(T\\to 1/\\Lambda^{2}\\to 0\\). In \\(4\\leq D<6\\), we need further subtractions. Here, it is useful to note that the last term on the first line of Eq. (12) vanishes anyway, provided that the potential is localized or drops off sufficiently fast at infinity. This is, of course, always the case for physical Casimir configurations.2 Renormalization provides us with a further counter term \\(\\sim\\int_{x}V^{2}\\) subject to a physically chosen renormalization condition such that the divergence arising from the last \\(T^{2}\\) term is canceled. With this renormalization condition, the physical value of the renormalized operator \\(\\sim V^{2}\\) is fixed.3 For even higher dimensions, similar subtractions are required that involve higher-order terms not displayed in Eq. (12). Footnote 3: Since we used a gradient expansion, the renormalized operator is fixed in the small-momentum limit; if the renormalization condition operates at finite momentum, e.g., using the polarization operator, possible finite renormalization shifts can be obtained from an analysis of the corresponding Feynman diagram. However, in the present case of static Casimir problems, it is natural to impose a renormalization condition in the small-momentum limit anyway. As far as controlling divergencies by renormalization is concerned, this is all there is and no further _ad hoc_ subtractions are permitted. However, having removed these UV divergencies with the appropriate counter terms does not guarantee that the resulting Casimir energy is finite. Further divergencies may arise from the form of the potential as is the case for the idealized Casimir energies mentioned above. In the present work, we take up a more practical position and are merely interested in the Casimir forces between disconnected rigid bodies which are represented by the potential \\(V(x)=V_{1}(x)+V_{2}(x)+\\dots\\). We assume the rigid bodies as given, disregarding the problem of whether the Casimir energy of every single body is well defined by itself. For this, it suffices to study the _interaction_ Casimir energy defined as the Casimir energy of the whole system minus the separate energies of the single components, \\[E:={\\cal E}_{V=V_{1}+V_{2}+\\dots}-{\\cal E}_{V_{1}}-{\\cal E}_{V_{2}}-\\dots\\ . 
\\tag{13}\\] Note that the subtractions do not contribute to the Casimir force which is obtained by differentiating the interaction energy with respect to parameters that characterize the separation and orientation of the bodies. By this differentiation, the subtractions drop out. Furthermore, these terms remove the field theoretic UV divergencies of Eq. (12): this is obvious for the terms linear in \\(V(x)\\); for the quadratic one, this follows from \\(\\int_{x}V^{2}=\\int_{x}(V_{1}+V_{2}+\\dots)^{2}=\\int_{x}(V_{1}^{2}+V_{2}^{2}+\\dots)\\). The last equation holds because of the local support of the disconnected bodies. By the same argument, the subtractions remove every term of a local expansion of \\({\\cal E}_{V_{1}+V_{2}+\\dots}\\) to any finite order. In this way, any divergence induced locally by the potentials is canceled. But, of course, the Casimir force is not removed - it is inherently nonlocal. The interaction energy in Eq. (13) is also numerically favorable, since the subtractions can be carried out already on the level of the propertime integrands, avoiding manipulations with large numbers. We would like to stress that the definition of the interaction energy in Eq. (13) should not be confused with renormalization. It is a procedure for extracting exact information about the Casimir force between rigid bodies, circumventing the tedious question as to whether Casimir energy densities are locally well defined. This procedure also removes the field theoretic UV divergencies. In this case, renormalization conditions which fix the counter terms do not have to be specified. These local counter terms cannot exert an influence on the Casimir force for disconnected rigid bodies anyway, because the latter is a nonlocal phenomenon. Expressed in physical terms of the QED Casimir effect: the renormalized strength of the coupling between the electromagnetic field and the electrons in the metal is, of course, important for a computation of the local energy density near a plate, but the Casimir force between two plates is independent of the electromagnetic coupling constant. We would like to point out that the concept of the interaction energy is meaningless for the computation of Casimir stresses of single bodies, e.g., a sphere. Here, the renormalization procedure has to be carried out as described above, and the result may depend on the renormalization conditions and strongly on the details of the potential. ## 3 Worldline numerics In this section, we discuss possible numerical realizations of the worldline integral Eqs. (8)-(10) (the more phenomenologically interested reader may proceed directly to Sect. 4). As proposed in [14], we estimate the analytical integral over infinitely many closed worldlines by an ensemble average over finitely many closed loops obeying a Gaussian velocity distribution \\(P[\\{y(t)\\}]\\), \\[P[\\{y(t)\\}]=\\delta\\left(\\int_{0}^{1}dt\\,y(t)\\right)\\,\\exp\\left(-\\frac{1}{4} \\int_{0}^{1}dt\\,\\dot{y}^{2}\\right),\\quad\\mbox{with $y(0)=y(1)$}, \\tag{14}\\] where the \\(\\delta\\) constraint ensures that the loops are centered upon a common center of mass (here and in the following, we drop all normalizations of the distributions, because they are irrelevant when taking expectation values). Here, we have chosen to work with rescaled _unit loops_\\(y(t)\\) as introduced in Eqs. (8)-(10). 
Numerical arithmetics requires discretization; however, we generally _do not_ discretize spacetime on a lattice, but only the loop propertime parameter \(t\): \[\{y(t)\}\quad\rightarrow\quad\{y_{k}\}\in\mathbb{R}^{D},\quad k=1,2,\ldots,N, \tag{15}\] where \(N\) denotes the number of points per loop (ppl). Whereas Gaussian distributed numbers can easily be generated, the numerical difficulty is to impose the \(\delta\) constraint, \(y_{1}+y_{2}+\cdots+y_{N}=0\), and the requirement of closeness. In the following, we discuss four possible algorithms, and recommend the last two of them, based on Fourier decomposition ("f loops") or explicit diagonalization ("v loops").

### Heat-bath algorithm

A standard approach for the generation of field (or path) distributions that obey a certain action is the heat-bath algorithm, which has been employed for worldline numerics in [14, 15, 16]. Discretizing the derivative in the exponent of Eq. (14), e.g., by \(\dot{y}\to N(y_{k}-y_{k-1})\), each point on a loop can be regarded as exposed to a "heat bath" of all neighboring points. The discretized probability distribution then reads \[P\big{[}\{y_{k}\}\big{]}\ =\ \delta\Big{(}y_{1}+\ldots+y_{N}\Big{)}\ \exp\Big{\{}-\frac{N}{4}\sum_{k=1}^{N}(y_{k}-y_{k-1})^{2}\Big{\}}\;, \tag{16}\] where \(y_{0}\equiv y_{N}\). The heat-bath procedure now consists in the following steps: (i) choose a site \(i\in[1,N]\), consider all variables \(y_{k}\), \(k\neq i\), as constant, and generate the \(y_{i}\) according to its probability; (ii) visit all variables of the loops (e.g., in a serial fashion or using the checkerboard algorithm). Thereby, the closeness requirement is easily realized with, e.g., \(y_{N}\) being in the heat bath of \(y_{N-1}\) and \(y_{1}\), etc. The center-of-mass constraint can be accommodated by shifting the whole loop correspondingly after one thermalization sweep (update of all points per loop).

Whereas this procedure has been sufficient for the applications discussed in [14, 15, 16], it turns out that this algorithm suffers in practice from a thermalization problem for large values of \(N\). To demonstrate this, let us define the extension \(e\) of the loop ensemble by the loop mean square \[e^{2}\;=\;\frac{1}{\mathcal{N}}\int dy_{1}\ldots dy_{N}\;y_{k}^{2}\;P\big{[}\{y_{k}\}\big{]}\,,\qquad\mathcal{N}\;=\;\int dy_{1}\ldots dy_{N}\;P\big{[}\{y_{k}\}\big{]}. \tag{17}\] This quantity can be calculated analytically, straightforwardly yielding \[e\;=\;\sqrt{\frac{1}{6}\bigg{(}1-\frac{1}{N^{2}}\bigg{)}}. \tag{18}\] In order to generate a "thermalized" loop, one starts with a random ensemble \(\{y_{k}\}\) and performs \(n_{t}\) heat-bath sweeps. For each loop, we calculate its extension \(e\). After averaging over 1000 loops, we compare the estimator of \(e\) as a function of \(n_{t}\) with the analytic result (18) corresponding to the limit \(n_{t}\to\infty\). The result is shown in Fig. 1. One clearly observes that the thermalization of loop ensembles is expensive for \(N>500\). In fact, roughly \(n_{t}=45000\) is needed for an acceptable loop ensemble consisting of \(N=1000\) points. Since a computation of Casimir energies requires loop ensembles of \(N\geq 1000\), the heat-bath algorithm is too inefficient and cannot be recommended.
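For concreteness, a minimal sketch of one such sweep (our notation, not the code of [14]): completing the square in the exponent of Eq. (16) for a single site \(y_{i}\) with fixed neighbors yields a conditional Gaussian with mean \((y_{i-1}+y_{i+1})/2\) and variance \(1/N\) per component, which is all the update needs.

```python
import numpy as np

def heat_bath_sweep(y, rng):
    """One serial heat-bath sweep over a discretized loop, Eq. (16).

    y : array (N, D) of loop points, with periodic neighbors y_0 = y_N.
    Each y_i is redrawn from its conditional distribution given its
    neighbors: Gaussian, mean (y_{i-1} + y_{i+1})/2, variance 1/N.
    """
    N, D = y.shape
    sigma = 1.0 / np.sqrt(N)
    for i in range(N):
        mean = 0.5 * (y[i - 1] + y[(i + 1) % N])   # y[-1] wraps around
        y[i] = mean + sigma * rng.standard_normal(D)
    y -= y.mean(axis=0)   # re-impose the center-of-mass constraint
    return y
```

The cost lies in the number \(n_{t}\) of such sweeps needed for thermalization, which, as Fig. 1 shows, grows prohibitively with \(N\).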
### Random Walk

In order to circumvent the thermalization problem, one may exploit the connection between loops with Gaussian velocity distribution and random walks [18, 19]. This has been adapted to worldline numerics with latticized spacetime in [21]; here, however, we keep spacetime continuous. For this purpose, let us give up the concept of unit loops for a moment, and reinstate the naturally emerging coordinate space loops \(x(\tau)\), \[x(\tau)=\frac{1}{\sqrt{T}}\,y(\tau/T),\quad x(0)=x(T). \tag{19}\] Probability theory tells us that random walks automatically implement the Gaussian velocity distribution \[\prod_{i=1}^{N-1}\exp\biggl{\{}-\frac{1}{4\Delta\tau}(x_{i+1}-x_{i})^{2}\biggr{\}}. \tag{20}\] The crucial point is to establish the relation between a loop that a random walker with step length \(s\) would generate for us and a thermalized loop at a given propertime \(T\). This relation results from a coarse-graining procedure, which we present here briefly. Given that the random walker starts at the point \(x_{i}\), the probability density for reaching the point \(x_{f}\) after \(n\) steps is given by \[p(x_{f}\mid x_{i},n,s)=\int d^{D}x_{2}\ldots d^{D}x_{n-1}\ \prod_{k=1}^{n-1}\frac{1}{\Omega(D)s^{D-1}}\delta(\mid x_{k+1}-x_{k}\mid-s),\] with \(\Omega(D)\) being the solid angle in \(D\) dimensions, \(x_{1}=x_{i}\) and \(x_{n}=x_{f}\). For \(n\gg 1\), but \(ns^{2}\) fixed, the central-limit theorem can be applied [19]: \[\lim_{n\rightarrow\infty}p(x_{f}\mid x_{i},n,s)\ =\ \left(\frac{D}{2\pi ns^{2}}\right)^{\frac{D}{2}}\exp\biggl{\{}-\frac{D}{2ns^{2}}\left(x_{f}-x_{i}\right)^{2}\biggr{\}},\ \ \ ns^{2}=\mbox{fixed}. \tag{21}\] Comparing (21) with (20), one identifies \[\Delta\tau=\frac{ns^{2}}{2D}. \tag{22}\] The dimension of the propertime as well as its relation to the loop length \(L\) appears here in an obvious way, \[T=\frac{N_{w}s^{2}}{2D}=\frac{Ls}{2D}, \tag{23}\] where \(N_{w}\) now is the total number of walker steps.

Figure 1: The average extension \(e\) (multiplied by \(\sqrt{N}\) for better visualization) of the loops as a function of the number of thermalizations \(n_{t}\).

It is important to point out that the propertime can be tuned in two ways: we can adjust either the walker step \(s\) or the number of points \(N_{w}\). The corresponding two methods to generate a loop ensemble at given propertime \(T\) work as follows:

Method 1: \(s\) is fixed.

1. choose the walker step \(s\);
2. read off from Eq. (23) the number of points \(N_{w}\) corresponding to \(T\);
3. (a) generate \(N_{w}\) points by letting a random walker go \(N_{w}\) steps, and accept the configuration if the last step leads him into a small sphere (radius \(\varepsilon\)) centered upon the starting point; (b) close the loop 'by hand' by shifting the last point to the starting point;
4. shift the center of mass to zero;
5. repeat steps (3) and (4) \(n_{\rm L}\) times for an ensemble of \(n_{\rm L}\) loops.

We point out that the value of \(s\) must be much smaller than the characteristic length scale provided by the background potential. A second constraint on \(s\) arises from the applicability of the central limit theorem, i.e., \(n\gg 1\) in (21). A third systematic numerical uncertainty follows from the shift in step (3b).
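A minimal sketch of this walker (Method 1), with hypothetical names of our choosing; the rejection loop makes the role of the acceptance radius \(\varepsilon\) explicit:

```python
import numpy as np

def random_walk_loop(N_w, s, eps, D, rng):
    """Method 1: one closed coordinate-space loop from an N_w-step random
    walk with fixed step length s; the loop corresponds to T = N_w s^2 / (2D)."""
    while True:
        # isotropic steps of length s (normalized Gaussian directions)
        steps = rng.standard_normal((N_w, D))
        steps *= s / np.linalg.norm(steps, axis=1, keepdims=True)
        x = np.cumsum(steps, axis=0)                 # positions x_1 ... x_{N_w}; x_0 = 0
        if np.linalg.norm(x[-1]) < eps:              # step 3(a): walker returned?
            loop = np.vstack([np.zeros(D), x[:-1]])  # step 3(b): close 'by hand'
            loop -= loop.mean(axis=0)                # step 4: center of mass to zero
            return loop
```

For fixed \(\varepsilon\), the expected number of attempts per accepted loop grows with \(N_{w}\), which is the redundancy problem quantified at the end of this subsection.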
Unfortunately, small values for \(\varepsilon\) result in a low acceptance rate for loops and, therefore, increase the numerical effort to generate the loop ensemble. A good compromise is to set the radius \(\varepsilon\) to some percentage of the step length \(s\). For illustration only, we leave the Casimir effect for a second and consider the average Wilson Loop \(\langle W_{V}\rangle\) (see Eq. (10)) for the case of a constant magnetic background field \(\vec{B}=B\,\vec{e}_{z}\) at \(T=1\) and \(D=2\), \[V(x)\;=\;A_{k}(x)\,\dot{x}_{k}\;,\;\;\;\;\;\vec{A}=B/2\,(y,-x,0)\;.\] For \(T=1\) the walker step length is given by \(s=\frac{2}{\sqrt{N_{w}}}\). Figure 2 shows our numerical result as a function of \(N_{w}\) in comparison with the exact value. Circles with error bars correspond to loop ensembles generated with \(\varepsilon=0.05\,s\). The limit (21) seems to be attained for \(50<N_{w}<100\) (\(s<0.3\)). For a further improvement of the numerical accuracy, large values of \(N_{w}\) and a decrease of \(\varepsilon\) at the same time are required. Finally, we point out that the deviation from the exact result in the case of the heat-bath-generated loop ensemble (blue square) is probably due to thermalization effects.

Note that we have to generate a loop ensemble for each value of \(T\) (\(\sim N_{w}s^{2}\)), which makes this procedure far more memory consuming than the heat-bath approach. If we decide to generate the loop ensembles once and for all and save them to disk, we have to handle huge amounts of data. On the other hand, if we create our loops 'on demand' (while performing the \(T\) or \(\vec{x}\) integrations), we are confronted with a serious waste of computing time.

Method 2: \(N_{w}\) is fixed.

1. choose the number of points \(N_{w}\);
The choice \\(a_{0}=0\\) guarantees that the loop center of Figure 2: Average Wilson Loop \\(\\langle W_{V}\\rangle\\) (cf. Eq. (10)) for the case of a constant magnetic background field \\(B\\) for \\(B=1\\), \\(T=1\\) and \\(D=2\\) as a function of the number of points per loop. mass is located at the origin. Inserting Eq. (24) into Eq. (14), the probability distribution for the coefficients is given by \\[P\\big{[}a,b\\big{]}\\ =\\ \\exp\\Bigl{\\{}-\\frac{\\pi^{2}}{2}\\sum\\limits_{\ u=1}^{N}a_{ \ u}^{2}\\ -\\ \\frac{\\pi^{2}}{2}\\sum\\limits_{\ u=0}^{N}b_{\ u}^{2}\\Bigr{\\}}\\;. \\tag{25}\\] We can then take advantage of the fact that the Fourier components \\(\\{a,b\\}\\) are not correlated, in order to generate our loops in momentum space. The reconstruction of the unit loop \\(y(t)\\) in Eq. (24) is most efficiently performed by using the fast Fourier transformation (FFT). For these purposes, we define complex coefficients \\(c_{\ u}:=a_{\ u}+i\\,b_{\ u}\\), and obtain \\[y(t)\\ =\\ \\Re\\sum\\limits_{\ u}c_{\ u}\\ \\exp\\Bigl{\\{}-i\\,2\\pi\\,t\\,\ u\\Bigr{\\}}\\;. \\tag{26}\\] The FFT procedure generates a series of points \\(y_{i}\\), \\(i=0\\ldots N-1\\) which discretize the continuous curve \\(y(t)\\) and thereby constitute the unit loops. ### Explicit diagonalization: \"v loops\" Finally, we propose an algorithm that is based on a linear variable transformation \\(\\{y_{k}\\}\\to\\{\\bar{v}_{k}\\}\\), such that the discretized distribution (16) becomes purely Gaussian. These new variables are velocity-like and diagonalize the quadratic form in the exponent. Because of the \\(\\delta\\) function in Eq. (16), only \\(N-1\\) coordinates per loop are independent. Defining \\(\\int{\\cal D}y=\\int_{-\\infty}^{\\infty}\\prod\\limits_{i=1}^{N}dy_{i}\\), we may perform, e.g., the \\(y_{N}\\) integration using the \\(\\delta\\) function, \\[\\int{\\cal D}y\\,P\\big{[}\\{y_{k}\\}\\big{]}\\ldots = \\int\\prod\\limits_{i=1}^{N-1}dy_{i}\\ e^{\\left[-\\frac{N}{4}\\left( \\sum\ olimits_{i=2}^{N-1}(y_{i}-y_{i-1})^{2}+(2y_{1}+y_{2}+\\cdots+y_{N-1})^{2} +(y_{1}+y_{2}+\\cdots+2y_{N-1})^{2})\\right]}\\ldots \\tag{27}\\] \\[=: \\int\\prod\\limits_{i=1}^{N-1}dy_{i}\\ e^{\\left[-\\frac{N}{4}Y\\right]}\\ldots,\\] where the dots represent an arbitrary \\(y\\)-dependent operator, and we introduced the abbreviation \\(Y\\) for the quadratic form. In order to turn the exponential into a product of simple Gaussians, we define \\(N-1\\) new velocity-like variables, \\[\\bar{v}_{1} := \\frac{3}{2}y_{1}+y_{2}+y_{3}+\\cdots+y_{N-2}+\\frac{3}{2}y_{N-1},\\] \\[v_{i} := y_{i}-y_{i-1},\\quad i=2,3,\\ldots,N-1. \\tag{28}\\] For notational simplicity, it is useful to also introduce the auxiliary variable, \\[v_{i,j}=v_{i}+v_{i-1}+\\cdots+v_{j+1}\\equiv y_{i}-y_{j},\\quad\\mbox{for $i\\geq j=1,2,\\ldots,N-1$}, \\tag{29}\\]such that the exponent \\(Y\\) can be written as \\[Y = \\sum_{i=2}^{N-1}v_{i}^{2}+\\left(\\bar{v}_{1}-\\frac{1}{2}v_{N-1,1} \\right)^{2}+\\left(\\bar{v}_{1}+\\frac{1}{2}v_{N-1,1}\\right)^{2} \\tag{30}\\] \\[= 2\\bar{v}_{1}^{2}+\\frac{1}{2}v_{N-1,1}^{2}+\\sum_{i=2}^{N-1}v_{i}^ {2}.\\] We observe that the variable \\(\\bar{v}_{1}\\) now appears quadratically in the exponent as desired. The same has still to be achieved for \\(v_{2}\\ldots v_{N-1}\\). For this, we note that \\(v_{N-1,1}=v_{N-1}+v_{N-2,1}\\) by definition (29). 
Defining \\[\\bar{v}_{N-1}:=v_{N-1}+\\frac{1}{3}v_{N-2,1}, \\tag{31}\\] we indeed obtain for the exponent \\(Y\\) \\[Y = 2\\bar{v}_{1}^{2}+v_{N-1}^{2}+\\frac{1}{2}(v_{N-1}+v_{N-2,1})^{2}+ \\sum_{i=2}^{N-2}v_{i}^{2} \\tag{32}\\] \\[= 2\\bar{v}_{1}^{2}+\\frac{3}{2}\\bar{v}_{N-1}^{2}+\\frac{1}{3}v_{N-2,1}+\\sum_{i=2}^{N-2}v_{i}^{2},\\] where \\(\\bar{v}_{N-1}^{2}\\) also appears quadratically. We can continue this construction by defining \\[\\bar{v}_{N-i}:=v_{N-i}+\\frac{1}{i+2}\\,v_{N-i-1,1},\\quad i=1,\\ldots,N-2\\;, \\tag{33}\\] which turns the exponent \\(Y\\) into a purely Gaussian form: \\[Y=2\\bar{v}_{1}^{2}+\\frac{3}{2}\\bar{v}_{N-1}^{2}+\\frac{4}{3}\\bar{v}_{N-2}^{2}+ \\cdots+\\frac{i+2}{i+1}\\bar{v}_{N-i}^{2}+\\cdots+\\frac{N}{N-1}\\bar{v}_{2}^{2}. \\tag{34}\\] The last step of this construction consists in noting that we can substitute the integration variables according to \\[\\prod_{i=1}^{N-1}dy_{i}=J\\prod_{i=2}^{N-1}dv_{i}d\\bar{v}_{1}=\\bar{J}\\prod_{i=1 }^{N-1}d\\bar{v}_{i}\\equiv{\\cal D}\\bar{v} \\tag{35}\\] with nonzero but constant Jacobians \\(J\\), \\(\\bar{J}\\), the value of which is unimportant for the calculation of expectation values. This allows us to write the path integral Eq. (27) as \\[\\int{\\cal D}y\\,P\\big{[}\\{y_{k}\\}\\big{]}\\cdots=\\bar{J}\\int{\\cal D}\\bar{v}\\ \\exp\\left[-\\frac{N}{4}\\left(2\\bar{v}_{1}^{2}+\\sum_{i=1}^{N-2}\\frac{i+2}{i+1} \\bar{v}_{N-i}^{2}\\right)\\right]\\cdots\\equiv\\bar{J}\\int{\\cal D}\\bar{v}\\,P\\big{[} \\{\\bar{v}_{k}\\}\\big{]}\\ldots, \\tag{36}\\] where \\(P\\big{[}\\{\\bar{v}_{k}\\}\\big{]}\\) can now be generated straightforwardly with the Box-Muller method [20]. For the construction of unit loops (\"v loops\"), the above steps have to be performed backwards. The recipe is the following:1. generate \\(N-1\\) numbers \\(w_{i}\\), \\(i=1,\\ldots,N-1\\) via the Box-Muller method such that they are distributed according to \\(\\exp(-w_{i}^{2})\\); 2. compute the \\(\\bar{v}_{i}\\), \\(i=1,\\ldots,N-1\\), by normalizing the \\(w_{i}\\): \\[\\bar{v}_{1} = \\sqrt{\\frac{2}{N}}\\ w_{1},\\] \\[\\bar{v}_{i} = \\frac{2}{\\sqrt{N}}\\sqrt{\\frac{N+1-i}{N+2-i}}\\ w_{i},\\quad i=2, \\ldots,N-1\\;;\\] (37) 3. compute the \\(v_{i}\\), \\(i=2,\\ldots,N-1\\), using \\[v_{i}=\\bar{v}_{i}-\\frac{1}{N+2-i}\\,v_{i-1,1},\\quad\\mbox{where $v_{i-1,1}=\\sum_{j=2}^{i-1}v_{j}$}\\;;\\] (38) 4. construct the unit loops according to \\[y_{1} = \\frac{1}{N}\\left(\\bar{v}_{1}-\\sum_{i=2}^{N-1}\\left(N-i+\\frac{1}{ 2}\\right)v_{i}\\right),\\] \\[y_{i} = y_{i-1}+v_{i},\\quad i=2,\\ldots,N-1,\\] \\[y_{N} = -\\sum_{i=1}^{N-1}y_{i}\\;;\\] (39) 5. repeat this procedure \\(n_{\\rm L}\\) times for \\(n_{\\rm L}\\) unit loops. The formulas in step (4) can be checked straightforwardly by inserting the definitions of the \\(v_{i}\\)'s and \\(\\bar{v}_{1}\\). This v-loop algorithm allows us to generate unit loops efficiently without thermalization, i.e., no redundant thermalization sweeps have to be performed, and works for an arbitrary number of points per loop \\(N\\). ### Benchmark test We test the quality of our loops with the aid of the Casimir energy for the parallel-plate configuration in the Dirichlet limit, the physics of which is described in the next section. As far as numerics is concerned, there are basically two parameters that control the quality of our loop ensemble: the number of points per loop (ppl) \\(N\\), and the number of loops \\(n_{\\rm L}\\). 
The larger these numbers, the more accurate is our numerical estimate, at the expense of CPU time and size. Whereas increasing the number of loops \(n_{\rm L}\) reduces the statistical error of the Monte-Carlo procedure, increasing the number of ppl \(N\) reduces the systematic error of loop discretization. In order to estimate this systematic error, we have to study the approach towards the continuum limit. The idea is to choose \(N\) large enough for a given \(n_{\rm L}\), such that the systematic error is smaller than the statistical one.

In Fig. 3, we plot the numerical estimates for the parallel-plate Casimir energy as a function of the number of ppl \(N\) and compare it with the classic result. The error bars represent the statistical error of the Monte-Carlo procedure. The deviation of the numerical estimates from the exact result on top of the error bars serves as a measure of the systematic error. As is visible therein, a rather small number of several thousand ppl, \(N\gtrsim{\cal O}(1000)\), is sufficient to get a numerical estimate with \(\lesssim 5\%\) error using \(n_{\rm L}=1500\) loops. For a high-precision estimate with an error \(\lesssim 0.5\%\), larger loop ensembles with \(n_{\rm L}\gtrsim 100\,000\) are required. For \(N\simeq 50\,000\) ppl, systematic and statistical errors are of the same order, and for \(N\gtrsim 100\,000\) ppl, the systematic error is no longer relevant for v loops. For f loops, however, we observe a systematic \(1\%\) error in the high-precision data of unclear origin. Nevertheless, the important conclusion of this test is that worldline numerics has proved its ability to describe quantum fluctuations with Dirichlet boundary conditions quantitatively.

Figure 3: Numerical estimate of the interaction Casimir energy of the parallel-plate configuration for various loop ensembles as a function of the number of points per loop \(N\). The error bars correspond to the Monte-Carlo statistical error; deviations from the exact result on top of the statistical error measure the systematic error due to loop discretization.

## 4 Casimir forces between rigid bodies

Casimir forces can be analytically computed for only a small number of rigid-body geometries, among which there is Casimir's classic result for the parallel-plate configuration; for perfectly conducting plates at a distance \(a\), the interaction energy per unit area is [1] \[E_{\rm PP}(a)=-\frac{1}{2}\frac{\pi^{2}}{720}\,\frac{1}{a^{3}} \tag{40}\] for a fluctuating real scalar field; for a complex scalar as well as for electromagnetic fluctuations, the factor \(1/2\) has to be dropped. The famous Casimir force is obtained by differentiating Eq. (40) by \(a\).

### Proximity force approximation

The standard approximation method for not analytically solvable Casimir problems is the proximity force approximation (PFA) [7, 8]. The basic idea is to apply the parallel-plate result to infinitesimal bits of the generally curved surfaces and integrate them up, \[E=\int_{S}E_{\rm PP}(z)\,d\sigma, \tag{41}\] where \(E_{\rm PP}\) is the interaction energy per unit area of the parallel-plate case. \(S\) represents the integration domain and denotes either one of the surfaces of the interacting bodies or a suitably chosen mean surface [8]. At this point, the proximity force approximation is ambiguous, and we will simply insert both surfaces in order to determine the variance. In Eq.
(41), \\(d\\sigma\\) denotes the invariant surface measure, and \\(z\\) represents the separation between the two surfaces associated with the surface element \\(d\\sigma\\) on \\(S\\). Obviously, the proximity force approximation neglects any nonparallelity and any curvature - the latter because each surface element on \\(S1\\) is assumed to \"see\" only one surface element on \\(S2\\) at separation \\(z\\); but curvature effects require information about a whole neighborhood of the element on \\(S2\\). The proximity force approximation is expected to give reasonable results only if (i) the typical curvature radii of the surfaces elements is large compared to the element distance and (ii) the surface elements with strong nonparallelity are further separated than the more parallel ones.4 Footnote 4: The second condition is not so well discussed in the literature; it is the reason why the proximity force approximation gives reasonable results for a convex spherical lens over a plate (convex as seen from the plate), but fails for a concave lens. For configurations that do not meet the validity criteria of the proximity force approximation, a number of further approximations or improvements exist, such as an additive summation of interatomic pairwise interactions and the inclusion of screening effects of more distant layers by closer ones [3]. Though these methods have proved useful and even quantitatively precise for a number of examples, to our knowledge, a general, unambiguous and systematically improvable recipe without _ad hoc_ assumptions is still missing. In Sect. 5, we compare our results with the proximity force approximation in the simplest version as mentioned above, in order to gain insight into the effects of curvature. ### Casimir forces on the worldline As described in Sect. 2, we represent the rigid bodies by a potential \\(V(x)\\). The functional form of the potential leaves room enough for modeling many physical properties of real Casimir configurations. Let us confine ourselves to an idealized potential well which is represented by a \\(\\delta\\) function in space (for \"soft\" boundary conditions, see, e.g., [22]), \\[V(x)=\\lambda\\int_{\\Sigma}d\\sigma\\,\\delta^{d}(x-x_{\\sigma}), \\tag{42}\\] where the geometry of the Casimir configuration is represented by \\(\\Sigma\\), denoting a \\(d-1\\) dimensional surface. \\(\\Sigma\\) is generally disconnected (e.g., two disconnected plates, \\(\\Sigma=S_{1}+S_{2}\\)) and can be degenerate, i.e., effectively lower dimensional (a point). The surface measure \\(d\\sigma\\) is assumed to be reparametrization invariant, and \\(x_{\\sigma}\\) denotes a vector pointing onto the surface. The coupling \\(\\lambda\\) has mass dimension 1 and is assumed to be positive. It can roughly be viewed as a plasma frequency of the boundary matter: for fluctuations with frequency \\(\\omega\\gg\\lambda\\), the Casimir boundaries become transparent. In the limit \\(\\lambda\\to\\infty\\), the potential imposes the _Dirichlet boundary condition_, implying that all modes of the field \\(\\phi\\) have to vanish on \\(\\Sigma\\). Figure 4: Worldline loop contributions to Casimir energies between two surfaces (S1 and S2): loop (a) does not contribute at all, it is an ordinary vacuum fluctuation. Loop (b) contributes to the local energy density near the upper plate, but does not contribute to the Casimir force. Only loop (c) contributes to the Casimir force, since it “sees” both surfaces. 
Here, the loop picks up nonlocal information about a whole neighborhood, whereas the proximity force approximation employs only information about local distances indicated by the dashed line.

Inserting this potential into the worldline formula (8), we encounter the integral \[I_{V}[y(t);T,x]:=\int_{0}^{1}dt\,V(x+\sqrt{T}y(t)) = \lambda\int_{0}^{1}dt\int_{\Sigma}d\sigma\,\delta\big{(}\sqrt{T}y(t)+(x-x_{\sigma})\big{)} \tag{43}\] \[= \frac{\lambda}{\sqrt{T}}\int_{\Sigma}d\sigma\,\sum_{\{t_{i}|\sqrt{T}y(t_{i})+x=x_{\sigma}\}}\frac{1}{|\dot{y}(t_{i})|},\] where \(\{t_{i}\}\) is the set of all points where a given scaled unit loop \(\sqrt{T}y(t)\) centered upon \(x\) pierces the Casimir surface \(\Sigma\) at \(x_{\sigma}\). If a loop does not pierce the surface (for given \(T\) and \(x\)), \(I_{V}[y(t)]=0\) for this loop. Of course, there are also loops that merely touch the Casimir surface but do not pierce it. For these loops, the inverse velocity \(1/|\dot{y}(t_{i})|\) diverges on the surface. But since this divergence occurs in the argument of an exponential function, these loops remove themselves from the ensemble average.

As an example, let \(\Sigma\) consist of two disconnected surfaces (bodies), such that \(V(x)=V_{1}(x)+V_{2}(x)\). For a given propertime \(T\), the Casimir energy density at point \(x\) receives contributions only from those loops which pierce one of the surfaces. The _interaction energy density_ defined in Eq. (13) is even more restrictive: if a certain loop \(y_{0}(t)\) does not pierce one of the surfaces, then \((W_{V_{1}+V_{2}}[y_{0}]-1)-(W_{V_{1}}[y_{0}]-1)-(W_{V_{2}}[y_{0}]-1)=0\). Therefore, only those loops which pierce _both_ surfaces contribute to the interaction energy density, as illustrated in Fig. 4. If the loop \(y_{0}(t)\) does pierce both plates, its contribution to the energy density is \[\mbox{contrib. of }\,y_{0}(t)=1-(e^{-T\,I_{V_{1}}}+e^{-T\,I_{V_{2}}}-e^{-T\,I_{V_{1}+V_{2}}})\in(0,1]. \tag{44}\] From this general consideration, together with the global minus sign in Eq. (8), we learn that Casimir forces between rigid bodies in our scalar model are always attractive. This statement holds independent of the shape of the bodies and the details of the potential (as long as \(V(x)\) is non-negative). In the Dirichlet limit, \(\lambda\to\infty\), the exponential functions in Eq. (44) vanish, and the contribution of a loop is \(=1\) if it pierces both surfaces and \(=0\) otherwise.

## 5 Numerical results

### Parallel Plates

Let us first consider the classic example of a Casimir configuration consisting of parallel plates separated by a distance \(a\) and located at \(z=-a/2\) and \(z=a/2\) orthogonal to the \(z\equiv x_{d}\) axis. For this, Eq. (42) reduces to \[V(x)\equiv V(z)=\lambda[\delta(z+a/2)+\delta(z-a/2)]\equiv V_{1}+V_{2}. \tag{45}\] In order to test the numerical worldline approach, we compare our numerical estimates with the analytically known result [23] of the interaction Casimir energy for arbitrary coupling \(\lambda\) and scalar mass \(m\) in units of the plate separation \(a\). In Fig.
5, we study a wide range of couplings and the approach to the Dirichlet limit, \(\lambda a\gg 1\); here, the energy per unit area tends to \[\lim_{\lambda a\to\infty}E_{\rm PP}(\lambda,a)=\frac{1}{2(4\pi)^{2}}\frac{\pi^{4}}{45}\,\frac{1}{a^{3}}\simeq\frac{1}{2(4\pi)^{2}}\,\times\,2.16\ldots\,\times\,\frac{1}{a^{3}}, \tag{46}\] which is the classic Casimir result for a massless scalar field.5 As is visible in Fig. 5, the agreement is satisfactory even for small ensembles with \(N=20\,000\) ppl.

Footnote 5: Here and in the following, we have explicitly displayed the common propertime prefactors \(1/[2(4\pi)^{D/2}]\) for convenience (see prefactor in Eq. (8)).

Let us finally discuss the Casimir energy as a function of the distance \(a\) of two parallel plates for finite mass \(m\) and finite \(\lambda\), in order to explore the strength of the worldline approach in various parameter ranges. The result is shown in Fig. 6. A finite value for \(\lambda\) simulates a finite plasma frequency. Hence, for \(a\ll 1/\lambda\) the plates become more transparent for those modes of the quantum field which fit between the plates. This weakens the increase of the interaction Casimir energy for decreasing plate separation, which turns from \(\sim 1/a^{3}\) into a \(\lambda^{2}/a\) law [23]. For \(a\gg 1/m\), we observe that the Casimir energy decreases exponentially with \(a\), as expected, since possible fluctuations are suppressed by the mass gap. In the intermediate distance regime, \(1/\lambda\ll a<1/m\), a reasonable approximation is given by the classic power law \(E_{\rm PP}\sim 1/a^{3}\), which is familiar from the ideal case \(\lambda\to\infty\), \(m=0\).

Figure 5: **Parallel plates**: interaction Casimir energy per unit area for the parallel-plate configuration as a function of the coupling \(\lambda\) (units are set by the plate separation \(a\)). The numerical estimate reproduces the exact result for a wide range of couplings including the Dirichlet limit (cf. Fig. 3).

### Sphere above plate

The Casimir force between a sphere or a spherical lens above a plate is of utmost importance, because a number of high-precision measurements have been performed with this experimental configuration. Let us confine ourselves to the massless case, \(m=0\), in the Dirichlet limit \(\lambda\to\infty\); generalizations to other parameter ranges are straightforward, as in the parallel-plate case. In order to gain some intuition for curvature effects, let us consider a sphere of radius \(R\) the center of which resides over a plate at distance \(a=R\) as an example. The interaction Casimir energy density along the symmetry axis is shown in Fig. 7. For comparison, the energy density of the case where the sphere is replaced by a plate is also shown. One observes that the energy density close to the sphere is well approximated by the energy density provided by the parallel-plates scenario. This is already at the heart of the nonlocal nature of the Casimir force and can easily be understood in the worldline approach.

Figure 6: **Parallel plates**: interaction Casimir energy per unit area for the parallel-plate configuration as a function of the distance \(a\) in units of the mass \(m\) for \(\lambda=100m\). The exact result (solid line) [23] is well reproduced by the numerical estimate over many orders of magnitude. For intermediate parameter values, the classic Casimir result (idealized Dirichlet limit Eq.
(46), dashed line) represents a reasonable approximation.

Recall that the dominant contribution to the interaction Casimir energy density arises from loops which intersect both surfaces. If the center of the loop is located close to the sphere, the loops which intersect both surfaces hardly experience the curvature of the sphere; this is because loops that are large enough to pierce the distant plate will also pierce the close-by sphere rather independent of its radius. By contrast, if the loop center is located close to the plate, the dominant (large) loops possess intersections with the sphere at many different points - not necessarily the closest point. In this case, the worldline loops "see" the curvature of the sphere that now enters the energy density.

Figure 7: **Sphere above Plate**: interaction Casimir energy density along the symmetry axis (\(x\) axis) for the sphere-plate configuration in comparison to the parallel-plate case. Close to the sphere, the worldline loops do not "see" the curvature; but at larger distances, curvature effects enter the energy density. For illustration, the sphere-plate geometry is also sketched (thin black lines).

Let us now consider the complete interaction Casimir energy for the sphere-plate configuration as a function of the sphere-plate distance \(a\) (we express all dimensionful quantities as a function of the sphere radius \(R\)). In Fig. 8, we plot our numerical results in the range \(a/R\simeq{\cal O}(0.001\ldots 10)\). Since the energy varies over a wide range of scales, already small loop ensembles with rather large errors suffice for a satisfactory estimate (the error bars of an ensemble of 1500 v loops with 4000 ppl cannot be resolved in Fig. 8).

Let us compare our numerical estimate with the proximity force approximation (PFA): using the plate surface as the integration domain in Eq. (41), \(S=S_{\rm plate}\), we obtain the solid line in Fig. 8 (PFA, plate-based), corresponding to a "no-curvature" approximation. As expected, the PFA approximation agrees with our numerical result for small distances (large sphere radius). Sizable deviations from the PFA approximation of the order of a few percent occur for \(a/R\simeq 0.02\) and larger. Here, the curvature-neglecting approximations are clearly no longer valid. This can be read off from Fig. 9, where the resulting interaction energies are normalized to the numerical result.

In the PFA, we have the freedom to choose alternatively the sphere surface as the integration domain, \(S=S_{\rm sphere}\). Although still no curvature-related fluctuation effects enter this approximation, one may argue that information about the curvature is accounted for by the fact that the integration domain now is a curved manifold. Indeed, Fig. 8 shows that this "sphere-based" PFA approximation deviates from the plate-based PFA in the same direction as the numerical estimate, but overshoots the latter by far. It is interesting to observe that the geometric mean, contrary to the arithmetic mean, of the two different PFA approximations lies rather close to the numerical estimate; we will comment on this in more detail in the next section.

Figure 8: **Sphere above Plate**: logarithmic plot of the interaction Casimir energy for the sphere-plate configuration. For small separations/large spheres, \(a/R\lesssim 0.02\), the proximity force approximation (PFA) approximates the numerical estimate well; but for larger \(a/R\), curvature effects are not properly taken into account.
The PFA becomes ambiguous for larger \(a/R\), owing to possible different choices of the integration domain \(S\) in Eq. (41). A geometric mean (dotted-dashed line) of \(S=S_{\rm plate}\) and \(S=S_{\rm sphere}\) shows reasonable agreement with the numerical result.

### Cylinder above plate

In order to study the relation between PFA approximations and the full numerical estimate a bit further, let us consider a second example of a cylinder above a plate. Apart from the difference in the third dimension, all parameters and conventions are as before. Again, we observe in Fig. 10 that the numerical estimate is well approximated by the PFA for \(a/R\lesssim 0.02\), but curvature effects become important for larger distance-to-curvature-radius ratios. As in the sphere-plate case, the plate-based PFA neglects, but the cylinder-based PFA over-estimates, the curvature effects for \(a/R\) of order one.

Figure 10: **Cylinder above Plate**: logarithmic plot of the interaction Casimir energy for the cylinder-plate configuration (cf. Fig. 8).

Our results seem to suggest that the various possible choices for the integration domain in the proximity force approximation may give upper and lower bounds for the correct answer. Indeed, the geometric mean between the two possible choices for the sphere-plate configuration is rather close to the numerical estimate (dotted-dashed line in Figs. 8 and 10). Similar positive results for the geometric mean have been found for the two-concentric-cylinder configuration [24] using semiclassical approximations [25] and for a "chaotic" geometry [8]. However, we believe that this "agreement" beyond the strict validity limit of the PFA is accidental. First, detailed inspection reveals that the geometric mean and the numerical estimate are not fully compatible within error bars; this is particularly visible in the cylinder-plate case in Fig. 10. Secondly, there are no fundamental arguments favoring the geometric mean; by contrast, the arithmetic mean (as well as the quadratic mean) are not good approximations. Thirdly, for even larger separations, \(a/R\to\infty\), it is known that the interaction Casimir energy in the sphere-plate case behaves as \(\sim R^{3}/a^{4}\) [26], whereas even the sphere-based PFA decreases only with \(\sim R^{2}/a^{3}\). From the viewpoint of the worldline, it is obvious anyway that true fluctuation-induced curvature effects cannot be taken into account by PFA-like arguments. Nevertheless, the geometric-mean prescription may yield a reasonable first guess for Casimir forces in a parameter range beyond the formal validity bounds of the PFA where the expansion parameter is maximally of order one.

Figure 9: **Sphere above Plate**: interaction Casimir energies normalized to the numerical result (further conventions as in Fig. 8). For \(a/R\gtrsim 0.02\), the fluctuation-induced curvature effects occur at the percent level.

## 6 Conclusions

We have proposed and developed a new method to compute Casimir energies for arbitrary geometries from first principles in a systematic manner. The approach is based on perturbative quantum field theory in the string-inspired worldline formulation which maps field theoretic problems onto one-dimensional quantum mechanical path integrals with an evolution in a "5th coordinate", the propertime. These path integrals can easily be performed with numerical Monte-Carlo techniques.

Beyond any technical and numerical advantages, we first would like to stress that the worldline formulation offers an intuitive approach to the phenomena induced by quantum fluctuations. The geometric dependence of Casimir forces between rigid bodies, curvature effects and nonlocalities can already be guessed when thinking in terms of worldline _loop clouds_. As to technical advantages, the (usually complicated) analysis of the fluctuation spectrum and the mode summation are performed at one fell swoop in the worldline approach. Above all, our algorithm is completely independent of the details of the Casimir geometry and no underlying symmetry is required. The algorithm is scalable: if higher precision is required, only the parameters of the loop ensemble (points per loop and number of loops)
The geometric dependence of Casimir forces between rigid bodies, curvature effects and nonlocalities can already be guessed when thinking in terms of worldline _loop clouds_. As to technical advantages, the (usually complicated) analysis of the fluctuation spectrum and the mode summation are performed in one fell swoop in the worldline approach. Above all, our algorithm is completely independent of the details of the Casimir geometry, and no underlying symmetry is required. The algorithm is scalable: if higher precision is required, only the parameters of the loop ensemble (points per loop and number of loops) have to be adjusted\\({}^{6}\\).

Footnote 6: The numerical computations for this work have been performed on ordinary desktop PCs. Improvement in precision can be obtained at comparatively low cost, since the computer resources required increase only linearly with our loop parameters.

Figure 10: **Cylinder above Plate**: logarithmic plot of the interaction Casimir energy for the cylinder-plate configuration (cf. Fig. 8).

In this work, we have focused on Casimir forces between rigid bodies, for which a computation of the interaction energy suffices; the latter is free of subtle problems with renormalization. Nevertheless, the worldline approach is in principle capable of isolating and classifying the divergences of general Casimir energy calculations, and the unambiguous program of quantum field theoretic renormalization can be performed. Confining ourselves to a fluctuating real scalar field, we tested our method using the parallel-plate configuration. New results have been obtained for the experimentally important sphere-plate configuration: here we studied the (usually neglected) nonlocal curvature effects, which become sizable for a distance-to-curvature-radius ratio of \\(a/R\\gtrsim 0.02\\). Even though the proximity force approximation (PFA), as the standard approximation method, cannot correctly account for fluctuation-induced curvature effects, we found (accidental) agreement between our numerical estimate and the PFA with a "geometric-mean prescription": the latter implies a geometric mean over the possible choices of surface integration in Eq. (41). This geometric-mean PFA might provide for a first guess of the Casimir force for \\(a/R\\) of order one, but has to be treated with strong reservations. In this work, we have accepted a number of simplifications, in order to illustrate our method. Many generalizations to more realistic systems are straightforward, as discussed in the remainder of this section:

1) We modeled the Casimir bodies by \\(\\delta\\) potentials, mostly taking the Dirichlet limit. In fact, this was not a real simplification, but numerically even more demanding: modeling the bodies by finite and smooth potential wells requires worldline ensembles with a much smaller number of points per loop. The \\(\\delta\\) potentials represent the "worst case" for our algorithm, which has nevertheless proved to be applicable.

2) In experimental realizations, effects of finite temperature and surface roughness have to be taken into account. Both can be implemented in our formalism from first principles. Including finite temperature with the Matsubara formalism leads to a worldline integral with periodic boundary conditions of the worldline loops in the Euclidean time direction [27, 15], which can easily be performed for Casimir configurations. The surface roughness can be accounted for by adding a characteristic random "noise" to the local support of the potential.
In both cases, the observables can directly be computed by our formalism without any kind of perturbative expansion.

3) For obtaining the Casimir force, our results for the interaction energy have to be differentiated with respect to the separation parameter. Since numerical differentiation generally leads to a loss of accuracy, it is alternatively possible to perform the differentiation first analytically; this yields a slightly more complicated worldline integrand which can nevertheless be evaluated easily without loss of precision. By a similar reasoning, we can also obtain the (expectation value of the) energy-momentum tensor, which is frequently at the center of interest in Casimir calculations. For this, we can exploit the fact that the energy-momentum tensor can be obtained from the effective action by differentiating Eq. (8) with respect to the metric analytically; the resulting worldline integrand can then be put into the standard path-integral machinery.

4) Radiative corrections to the Casimir effect can also be included in our method, employing the higher-loop techniques of the worldline approach [13]. We expect these computations to be numerically more demanding, since more integrations are necessary, but the general approach remains the same.

5) The implementation of finite-conductivity corrections is less straightforward, since this generally requires a formulation for real electromagnetic fluctuations (an extension to complex scalars is not sufficient). For this, the starting point can be a field-theoretic Lagrangian defining a model for the interaction of the electromagnetic field with the bodies, as suggested, e.g., in [28]. Although these Lagrangians are generally not renormalizable, one may expect that the dispersive properties of the bodies provide for a physical ultraviolet cutoff (although this has to be studied with great care [29]).

## Acknowledgment

We are grateful to W. Dittrich, M. Quandt, O. Schroder and H. Weigel for useful information and comments on the manuscript. We would like to thank M. Luscher for providing us with the latest double-precision version of the RANLUX random-number generator. H.G. acknowledges financial support by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-2. L.M. is supported by the Deutsche Forschungsgemeinschaft under contract GRK683.

## References

* [1] H. B. Casimir, Kon. Ned. Akad. Wetensch. Proc. **51**, 793 (1948).
* [2] S. K. Lamoreaux, Phys. Rev. Lett. **78**, 5 (1997); U. Mohideen and A. Roy, Phys. Rev. Lett. **81**, 4549 (1998) [arXiv:physics/9805038]; A. Roy, C. Y. Lin and U. Mohideen, Phys. Rev. D **60**, 111101 (1999) [arXiv:quant-ph/9906062]; T. Ederth, Phys. Rev. A **62**, 062104 (2000); G. Bressi, G. Carugno, R. Onofrio and G. Ruoso, Phys. Rev. Lett. **88**, 041804 (2002) [arXiv:quant-ph/0203002].
* [3] V. M. Mostepanenko and N. N. Trunov, "The Casimir Effect and its Applications," Clarendon Press, Oxford (1997); M. Bordag, U. Mohideen and V. M. Mostepanenko, Phys. Rept. **353**, 1 (2001) [arXiv:quant-ph/0106045]; K. A. Milton, "The Casimir Effect: Physical Manifestations Of Zero-Point Energy," World Scientific, River Edge (2001).
* [4] H. B. Chan, V. A. Aksyuk, R. N. Kleiman, D. J. Bishop and F. Capasso, Science **291**, 1941 (2001); Phys. Rev. Lett. **87**, 211801 (2001).
* [5] D. E. Krause and E. Fischbach, Lect. Notes Phys. **562**, 292 (2001) [arXiv:hep-ph/9912276]; V. M. Mostepanenko and M. Novello, Phys. Rev. D **63**, 115003 (2001) [arXiv:hep-ph/0101306]; K. A. Milton, R. Kantowski, C. Kao and Y.
Wang, Mod. Phys. Lett. A **16**, 2281 (2001) [arXiv:hep-ph/0105250].
* [6] D. E. Krause and E. Fischbach, Phys. Rev. Lett. **89**, 190406 (2002) [arXiv:quant-ph/0210045].
* [7] B. V. Derjaguin, I. I. Abrikosova and E. M. Lifshitz, Q. Rev. **10**, 295 (1956).
* [8] J. Blocki, J. Randrup, W. J. Swiatecki and C. F. Tsang, Ann. Phys. (N.Y.) **105**, 427 (1977).
* [9] D. Deutsch and P. Candelas, Phys. Rev. D **20**, 3063 (1979).
* [10] N. Graham, R. L. Jaffe, V. Khemani, M. Quandt, M. Scandurra and H. Weigel, Nucl. Phys. B **645**, 49 (2002) [arXiv:hep-th/0207120]; arXiv:hep-th/0207205.
* [11] K. A. Milton, arXiv:hep-th/0210081.
* [12] R. P. Feynman, Phys. Rev. **80**, 440 (1950); **84**, 108 (1951); A. M. Polyakov, "Gauge Fields And Strings," Harwood, Chur (1987); Z. Bern and D. A. Kosower, Nucl. Phys. **B362**, 389 (1991); **B379**, 451 (1992); M. J. Strassler, Nucl. Phys. **B385**, 145 (1992); M. G. Schmidt and C. Schubert, Phys. Lett. **B318**, 438 (1993) [arXiv:hep-th/9309055].
* [13] C. Schubert, Phys. Rept. **355**, 73 (2001) [arXiv:hep-th/0101036].
* [14] H. Gies and K. Langfeld, Nucl. Phys. B **613**, 353 (2001) [arXiv:hep-ph/0102185].
* [15] H. Gies and K. Langfeld, Int. J. Mod. Phys. A **17**, 966 (2002) [arXiv:hep-ph/0112198].
* [16] K. Langfeld, L. Moyaerts and H. Gies, Nucl. Phys. B **646**, 158 (2002) [arXiv:hep-th/0205304].
* [17] M. Reuter and W. Dittrich, Eur. J. Phys. **6**, 33 (1985).
* [18] C. Itzykson and J. M. Drouffe, "Statistical Field Theory," Cambridge Univ. Pr. (1989).
* [19] S. Samuel, Nucl. Phys. B **154**, 62 (1979).
* [20] G. E. P. Box and M. E. Muller, Ann. Math. Stat. **29**, 610 (1958).
* [21] M. G. Schmidt and I. O. Stamatescu, arXiv:hep-lat/0201002; arXiv:hep-lat/0209120.
* [22] A. A. Actor and I. Bender, Phys. Rev. D **52**, 3581 (1995).
* [23] M. Bordag, D. Hennig and D. Robaschik, J. Phys. A **25**, 4483 (1992).
* [24] F. D. Mazzitelli, M. J. Sanchez, N. N. Scoccola and J. von Stecher, arXiv:quant-ph/0209097.
* [25] M. Schaden and L. Spruch, Phys. Rev. A **58**, 935 (1998); Phys. Rev. Lett. **84**, 459 (2000).
* [26] T. Datta and L. H. Ford, Phys. Lett. A **83**, 314 (1981).
* [27] D. G. McKeon and A. Rebhan, Phys. Rev. D **47**, 5487 (1993) [arXiv:hep-th/9211076].
* [28] G. Feinberg and J. Sucher, Phys. Rev. A **2**, 2395-2415 (1970).
* [29] G. Barton, Int. J. Mod. Phys. A **17**, 767 (2002); V. Sopova and L. H. Ford, Phys. Rev. D **66**, 045026 (2002) [arXiv:quant-ph/0204125].
We develop a method to compute the Casimir effect for arbitrary geometries. The method is based on the string-inspired worldline approach to quantum field theory and its numerical realization with Monte-Carlo techniques. Concentrating on Casimir forces between rigid bodies induced by a fluctuating scalar field, we test our method with the parallel-plate configuration. For the experimentally relevant sphere-plate configuration, we study curvature effects quantitatively and perform a comparison with the "proximity force approximation", which is the standard approximation technique. Sizable curvature effects are found for a distance-to-curvature-radius ratio of \\(a/R\\gtrsim 0.02\\). Our method is embedded in renormalizable quantum field theory with a controlled treatment of the UV divergences. As a technical by-product, we develop various efficient algorithms for generating closed-loop ensembles with Gaussian distribution.
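The loop-ensemble generation mentioned in the last sentence can be illustrated compactly. The sketch below is not the paper's "v loop" algorithm itself; it is a generic Brownian-bridge construction of closed loops with Gaussian-distributed increments, followed by the piercing statistics that drive the interaction Casimir energy between two parallel plates. Ensemble sizes match those quoted in the text, but all names and details are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def closed_loops(n_loops, ppl):
    # Closed unit loops from a Brownian bridge: cumulate Gaussian increments,
    # remove the linear drift so that each loop closes, then centre each loop
    # on its mean ("centre of mass"); 1/sqrt(ppl) gives unit propertime.
    dx = rng.normal(size=(n_loops, ppl))
    y = np.cumsum(dx, axis=1)
    t = np.arange(1, ppl + 1) / ppl
    y -= y[:, -1:] * t
    y -= y.mean(axis=1, keepdims=True)
    return y / np.sqrt(ppl)

# Fraction of loops piercing both plates (z = 0 and z = a) as a function of
# the propertime scale sqrt(T): in the worldline representation, only such
# loops contribute to the interaction Casimir energy.
a = 1.0
loops = closed_loops(1500, 4000)      # ensemble sizes quoted in the text
for sqrtT in (0.5, 1.0, 2.0, 4.0):
    z = 0.5 * a + sqrtT * loops       # loop centres mid-way between plates
    frac = np.mean((z.min(axis=1) < 0.0) & (z.max(axis=1) > a))
    print(f"sqrt(T) = {sqrtT:3.1f}:  fraction piercing both plates = {frac:.3f}")
```

Rescaling and translating a single ensemble through all propertimes and positions, rather than regenerating it, is what keeps the method cheap: the cost grows only linearly with the loop parameters, as noted in the conclusions.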
Stability of Semi-Implicit and Iterative Centred-Implicit Time Discretisations for Various Equation Systems Used in NWP

P. Benard\\({}^{*}\\)

\\({}^{*}\\) Centre National de Recherches Meteorologiques, Meteo-France, Toulouse, France

27 March 2003

Corresponding address: Pierre Benard, CNRM/GMAP, 42, Avenue G. Coriolis, F-31057 TOULOUSE CEDEX, FRANCE. Telephone: +33 (0)5 61 07 84 63. Fax: +33 (0)5 61 07 84 53. e-mail: [email protected]

## 1 Introduction

The classical semi-implicit (SI) technique (Robert et al., 1972) has been widely used in NWP, since it provides efficient and simple algorithms, at least for spectral models. This classical SI method requires the definition of a constant-in-time linear "reference" operator \\(\\mathcal{L}^{*}\\), which usually consists in the linearisation of the original system \\(\\mathcal{M}\\) around a stationary reference-state, noted \\(\\mathcal{X}^{*}\\). For a given state \\(\\mathcal{X}\\) of the atmosphere, the evolution of the system, \\((\\partial\\mathcal{X}/\\partial t)=\\mathcal{M}.\\mathcal{X}\\), is then time-discretised through: \\[\\frac{\\delta\\mathcal{X}}{\\delta t}=(\\mathcal{M}-\\mathcal{L}^{*}).\\mathcal{X}+\\mathcal{L}^{*}.\\overline{[\\mathcal{X}]}^{t} \\tag{1}\\] where \\((\\delta/\\delta t)\\) is the discretised time-derivative operator, and \\(\\overline{[\\ ]}^{t}\\) is the implicit-centred temporal average operator. The terms linked to the reference operator \\(\\mathcal{L}^{*}\\) are thus treated in a centred-implicit way, whilst the residual non-linear terms are treated explicitly. For this scheme, there is no formal proof of stability in real atmospheric conditions, due to the explicit treatment of non-linear residuals. This prompted the authors of pioneering NWP applications of the SI scheme to examine its stability theoretically in idealised contexts. In a seminal study following this approach, Simmons et al., 1978 (SHB78 hereafter) analysed the stability of the SI scheme for the hydrostatic primitive equations (HPE) system with a Leap-Frog (3-TL hereafter) time-discretisation, by considering the linearised equations around a stationary state \\(\\overline{\\mathcal{X}}\\) (referred to as the "atmospheric state" hereafter) when the resulting linear "atmospheric" operator \\(\\overline{\\mathcal{L}}\\) deviates from the linear "reference" operator \\(\\mathcal{L}^{*}\\) of the SI scheme, thus generating potentially unstable explicitly-treated residuals. In the vertically-continuous context, they performed a stability analysis valid when the eigenfunctions of \\(\\overline{\\mathcal{L}}\\) and \\(\\mathcal{L}^{*}\\) are identical. They showed that when the atmospheric and reference temperature profiles (respectively \\(\\overline{T}\\) and \\(T^{*}\\)) are isothermal, the stability of the SI scheme requires: \\[0\\leq\\overline{T}\\leq 2T^{*}, \\tag{2}\\] hence \\(T^{*}\\) cannot be chosen arbitrarily for applying the 3-TL SI scheme to the HPE system. In the finite-difference vertically-discretised context, they showed that a "vertically-discretised analysis" of stability following the same principle simply results in the solution of a standard eigenvalue problem. They found empirically that a large static-stability for the reference-state is necessary to maintain the stability of the scheme for realistic thermal atmospheric profiles. As a consequence, they recommended the use of a warm isothermal state as the reference-state, a rule which was then widely adopted for NWP applications using SI schemes.
Finally, they examined the effect of applying a second-order time-filter in the temporal average of linear terms, and found that an improved stability is obtained, but at the expense of an increased misrepresentation of the wave propagation. Cote et al., 1983 (CBS83 hereafter), still in the HPE context, examined the stability of the 3-TL SI scheme for a finite-element vertical discretisation using the above vertically-discretised analysis method. They established a stability criterion for the 3-TL SI scheme in terms of the atmospheric and reference static stabilities (\\(\\overline{\\gamma}\\) and \\(\\gamma^{*}\\) respectively): \\[0\\leq\\overline{\\gamma}\\leq 2\\gamma^{*}, \\tag{3}\\] therefore generalizing (2) to thermal profiles which are not necessarily isothermal. Still in the HPE context, Simmons and Temperton, 1997 (ST97 hereafter) showed with the same method that extrapolating two-time-level (2-TL) schemes have more stringent stability constraints than their 3-TL counterpart. For instance, in the isothermal framework of the SHB78 analysis, the stability of the 2-TL SI scheme requires: \\[0\\leq\\overline{T}\\leq T^{*}. \\tag{4}\\] As a consequence, they recommended the use of a warmer reference temperature than in the 3-TL case. ST97 also showed that the 2-TL SI scheme was intrinsically damping when stable, a characteristic which was not present in the 3-TL SI scheme. However, 3-TL schemes require a time-filter to damp the unstable computational modes, and this time-filter also damps the transient physical modes. They argued that, all these aspects being considered, the effective damping of 2-TL and 3-TL schemes is of comparable overall intensity. This particular debate will be ignored in this paper, in order to focus on stability aspects only. The relevance of the SI method for solving numerically the fully compressible Euler Equations (EE) was then advocated (Tanguay et al., 1990), and some numerical models in EE using this technique were effectively developed: Caya and Laprise (1999) presented a model in EE with a 3-TL SI scheme (with a moderate time-decentering, first-order accurate in time). Semazzi et al. (1995) and Qian et al. (1998) also showed a model in EE, but with a 2-TL SI scheme (with a strong time-filter, however). Besides, the need for more robust schemes than the classical SI one for solving the EE system became recognized (e.g. Cote et al., 1998; Bubnova et al., 1995, hereafter BHBG95), probably motivated by some pathological behaviours with the classical SI scheme under some circumstances. Schemes with evolution terms treated in a more centred-implicit way are usually believed to have an increased robustness, hence fulfilling the latter emerging need, and some of them were developed for fine-scale models in the EE system. BHBG95 used a 3-TL scheme in which the leading non-linear terms of the EE system are treated in a centred-implicit way, through a partially iterative method. For a 2-TL scheme, Cote et al., 1998, used a fully iterative method, aiming to treat all the evolution terms of the EE system in a centred-implicit way. Cullen (2000) examined the benefit of using such a fully iterative scheme for the HPE system, arguing that an improved accuracy could be obtained besides the improved stability (the latter not being strictly required, however, for current HPE applications). As a formal justification, he examined the stability of this iterative scheme for the 2-TL shallow-water (SW) system.
The analysis was limited to a scheme called "predictor/corrector", which consists in a single additional iteration after the SI scheme. The salient result was that the additional iteration in the "predictor/corrector" scheme makes it possible to recover an extended range of stability, as in (2) instead of (4). In the following, these fully iterative schemes with a more centred-implicit treatment of the evolution terms will be referred to as "iterative centred-implicit" (ICI) schemes. From the theoretical point of view, the current situation is that no stability analysis has been provided for the EE system with the SI scheme, and for ICI schemes with more iterations, stability analyses are available only for the SW system. Here we present a general method to carry out space-continuous stability analyses of the various time-discretisation schemes mentioned above, for any usual meteorological system of NWP interest (SW, HPE, EE), on canonical problems similar to those examined in SHB78. Some original results concerning the EE system and iterative schemes are presented. This work may also be viewed as a first theoretical investigation into the suitability of various time-discretisation schemes for solving the EE system numerically.

## 2 General framework for analyses

The general framework for the stability analyses presented here is basically the same as in most earlier studies: the flow is assumed adiabatic, inviscid and frictionless in a non-rotating dry perfect-gas atmosphere with a Cartesian coordinate system. Moreover, the flow is assumed linear around an "atmospheric" basic-state \\(\\overline{\\mathcal{X}}\\). The actual evolution of the atmospheric flow is thus described by \\(\\overline{\\mathcal{L}}\\), the linear-tangent operator to \\(\\mathcal{M}\\) around \\(\\overline{\\mathcal{X}}\\). The atmospheric basic-state \\(\\overline{\\cal X}\\) is chosen stationary, resting, horizontally homogeneous, and hydrostatically balanced. The governing equation for the flow is then: \\[\\frac{\\partial{\\cal X}^{\\prime}}{\\partial t}=\\overline{\\cal L}.{\\cal X}^{\\prime} \\tag{5}\\] where \\({\\cal X}^{\\prime}={\\cal X}-\\overline{\\cal X}\\), and the primes are dropped henceforth for clarity. Following the usual practice in NWP, the linear operator \\({\\cal L}^{*}\\) in (1) is taken as the tangent-linear operator to \\({\\cal M}\\) around a reference-state \\({\\cal X}^{*}\\), which is also chosen stationary, resting, horizontally homogeneous, and hydrostatically balanced. Since \\(\\overline{\\cal X}\\) is a resting state, the linear Lagrangian time-derivative coincides with the Eulerian time-derivative, and the LHS operator of (5) hence holds for Eulerian models as well as for semi-Lagrangian models.

## 3 The class of ICI schemes in the linear framework

In the restricted resting and linear framework of section 2, classical SI schemes as well as the iterative schemes mentioned in the introduction can be gathered into a single class of ICI schemes, differing only by their number of iterations. These ICI schemes are first presented for a 2-TL discretisation. In this case, the fully implicit-centred (FIC) scheme writes: \\[\\frac{{\\cal X}^{+}-{\\cal X}^{0}}{\\Delta t}=\\frac{\\overline{\\cal L}.{\\cal X}^{+}+\\overline{\\cal L}.{\\cal X}^{0}}{2} \\tag{6}\\] where, according to a standard practice for time-discretised equations, the superscripts "+" and "0" stand for time levels \\((t+\\Delta t)\\) and \\(t\\) respectively.
The principle of ICI schemes is to approach the FIC solution by starting from an initial "guess" noted \\({\\cal X}^{+(0)}\\), then iterating the following algorithm: \\[\\frac{{\\cal X}^{+(n)}-{\\cal X}^{0}}{\\Delta t} = \\frac{\\overline{\\cal L}.{\\cal X}^{+(n-1)}+\\overline{\\cal L}.{\\cal X}^{0}}{2}+\\frac{{\\cal L}^{*}.{\\cal X}^{+(n)}-{\\cal L}^{*}.{\\cal X}^{+(n-1)}}{2} \\tag{7}\\] \\[\\equiv \\frac{\\left(\\overline{\\cal L}-{\\cal L}^{*}\\right).{\\cal X}^{+(n-1)}+\\left(\\overline{\\cal L}-{\\cal L}^{*}\\right).{\\cal X}^{0}}{2}+\\frac{{\\cal L}^{*}.{\\cal X}^{+(n)}+{\\cal L}^{*}.{\\cal X}^{0}}{2} \\tag{8}\\] for \\(n=1,2,\\ldots,N_{\\rm iter}\\). The \\({\\cal X}^{+}\\) state, valid at \\((t+\\Delta t)\\), is then taken as the last iterated state \\({\\cal X}^{+(N_{\\rm iter})}\\). An examination of this scheme for a model with a single prognostic variable without spatial dependency shows that it acts as a fixed-point algorithm for solving the implicit non-linear scalar equation \\(f(x)=x\\), by using an estimate \\(f^{\\prime*}\\) of the derivative \\(f^{\\prime}\\) as a preconditioner for convergence. The method converges if \\(|(f^{\\prime}-f^{\\prime*})/f^{\\prime*}|<1\\). This is a weaker condition than the one for the classical (i.e. not preconditioned) fixed-point method: \\(|f^{\\prime}|<1\\). The initial guess \\({\\cal X}^{+(0)}\\) is arbitrary in ICI schemes, but choosing an appropriate initial guess may help decrease the magnitude of the discrepancy between FIC and ICI schemes after a fixed number of iterations. For 3-TL schemes, the ICI scheme can be defined by: \\[\\frac{{\\cal X}^{+(n)}-{\\cal X}^{-}}{2\\Delta t} = \\frac{\\overline{\\cal L}.{\\cal X}^{+(n-1)}+\\overline{\\cal L}.{\\cal X}^{-}}{2}+\\frac{{\\cal L}^{*}.{\\cal X}^{+(n)}-{\\cal L}^{*}.{\\cal X}^{+(n-1)}}{2} \\tag{9}\\] \\[\\equiv \\frac{\\left(\\overline{\\cal L}-{\\cal L}^{*}\\right).{\\cal X}^{+(n-1)}+\\left(\\overline{\\cal L}-{\\cal L}^{*}\\right).{\\cal X}^{-}}{2}+\\frac{{\\cal L}^{*}.{\\cal X}^{+(n)}+{\\cal L}^{*}.{\\cal X}^{-}}{2} \\tag{10}\\] where the superscript "-" denotes a variable taken at the time-level \\((t-\\Delta t)\\). Here follows a list of some schemes proposed in the literature, with the corresponding characteristics (\\({\\cal X}^{+(0)}\\), \\(N_{\\rm iter}\\)), in the restricted framework of section 2:

* Classical 2-TL SI extrapolating scheme: \\(N_{\\rm iter}=1\\) and \\({\\cal X}^{+(0)}=(2{\\cal X}^{0}-{\\cal X}^{-})\\).
* 2-TL non-extrapolating SI scheme: \\(N_{\\rm iter}=1\\) and \\({\\cal X}^{+(0)}={\\cal X}^{0}\\). However, this scheme is not used in practice since it is only first-order accurate in time, as mentioned in Cullen (2000).
* "Predictor/corrector" scheme of Cullen (2000): \\(N_{\\rm iter}=2\\) and \\({\\cal X}^{+(0)}={\\cal X}^{0}\\).
* Iterative scheme of Cote et al., 1998: general iterative ICI scheme, but used with \\(N_{\\rm iter}=2\\) in practice. The choice of \\({\\cal X}^{+(0)}\\) is not explicitly indicated.
* FIC scheme: \\(N_{\\rm iter}=\\infty\\) (does not depend on the choice of \\({\\cal X}^{+(0)}\\)). This scheme cannot be achieved in practice for numerical models, but it may be useful for theoretical examination of the asymptotic behaviour of the ICI schemes.

For 3-TL discretisations, the SI scheme, which corresponds to an ICI scheme with \\(N_{\\rm iter}=1\\) and \\({\\cal X}^{+(0)}=(2{\\cal X}^{0}-{\\cal X}^{-})\\), is the only one to be used in practice.
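Before listing concrete systems, the fixed-point mechanism can be seen at work on the simplest possible caricature. The sketch below is our own scalar illustration (one oscillation equation \\(dX/dt=i\\overline{\\omega}X\\) standing in for a single normal mode, with reference frequency \\(\\omega^{*}\\) and \\(\\alpha=\\overline{\\omega}/\\omega^{*}-1\\)); it iterates the 2-TL non-extrapolating scheme (7) and prints the modulus of the resulting amplification factor at a very large time-step:

```python
import numpy as np

# Scalar caricature of the 2-TL non-extrapolating ICI scheme (7):
#   (X(n) - X0)/dt = i*w_bar*(X(n-1) + X0)/2 + i*w_ref*(X(n) - X(n-1))/2
def ici_lambda(alpha, dt, n_iter, w_ref=1.0):
    w_bar = (1.0 + alpha) * w_ref
    x0 = 1.0 + 0.0j   # with X0 = 1, the final iterate is lambda itself
    xg = x0           # initial guess X+(0) = X0 (non-extrapolating)
    for _ in range(n_iter):
        xg = (x0 + 0.5j * dt * (w_bar * (xg + x0) - w_ref * xg)) \
             / (1.0 - 0.5j * dt * w_ref)
    return xg

for n_iter in (1, 2, 3, 50):   # SI, predictor/corrector, ..., near-FIC
    mods = [abs(ici_lambda(a, dt=1e6, n_iter=n_iter)) for a in (-0.5, 0.2, 0.5)]
    print(f"N_iter={n_iter:2d}  |lambda| at alpha=-0.5, 0.2, 0.5: "
          + "  ".join(f"{m:5.3f}" for m in mods))
```

In the large-\\(\\Delta t\\) limit, the modulus tends to \\(|1+2\\alpha|\\) for \\(N_{\\rm iter}=1\\) and to \\(|2\\alpha^{2}-1|\\) for \\(N_{\\rm iter}=2\\), while for \\(N_{\\rm iter}\\to\\infty\\) the preconditioned fixed point converges (here for \\(|\\alpha|<1\\)) to the neutral FIC amplification; the same pattern is quantified for the shallow-water system in section 6.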
However, iterated 3-TL ICI schemes could be used as well, and the 3-TL FIC scheme is equivalent to a 2-TL FIC scheme with a time-step twice as long. In the general framework, when \\({\\cal M}\\) is not linear, these various schemes cannot be gathered into the unique formalism (7) or (9). The addition of a second-order time-filter to the above definitions for 2-TL schemes is straightforward. Two main variants are usually considered, depending on whether the filtering is applied only to the time-averages of linear terms (as in SHB78), or also to the time-averages of non-linear terms, in (8). For instance, in a 2-TL SI scheme, the first variant of the filter consists in replacing \\({\\cal L}^{*}({\\cal X}^{+(n)}+{\\cal X}^{0})\\) by \\({\\cal L}^{*}[(1+\\kappa){\\cal X}^{+(n)}+(1-2\\kappa){\\cal X}^{0}+\\kappa{\\cal X}^{-}]\\) in (8), where \\(\\kappa\\) is a (small) positive parameter; for the second variant, the same modification is also applied to the first RHS term of (8). The scheme then becomes essentially a 3-TL scheme, since information at level \\({\\cal X}^{-}\\) is always used. However, the use of large values of \\(\\kappa\\) (e.g. \\(\\kappa=0.5\\), which eliminates the \\({\\cal X}^{0}\\) contribution) is known to deteriorate the solution through a spurious damping of transient perturbations (e.g. Hereil and Laprise, 1996). For 3-TL schemes, second-order time-filters are ineffective, and a first-order accurate time-decentering must be used. This consists in replacing \\({\\cal L}^{*}({\\cal X}^{+(n)}+{\\cal X}^{-})\\) by \\({\\cal L}^{*}[(1+\\epsilon){\\cal X}^{+(n)}+(1-\\epsilon){\\cal X}^{-}]\\) in (10), where \\(\\epsilon\\) is a (small) positive parameter. The scheme then ceases to be second-order accurate in time, since the time-average is no longer centred in time. This results in a spurious damping of transient perturbations even for moderate values of \\(\\epsilon\\) (due to the weaker time-selectivity of the filter \\(\\epsilon\\) compared to \\(\\kappa\\)).

## 4 Conditions for space-continuous analyses

### Conditions on the upper and lower boundaries

Space-continuous analyses are much easier to carry out when the equation system is defined in the whole unbounded atmosphere, because the expression of the normal modes of the system is more general. The following space-continuous analyses will be restricted to this case (although this is not strictly required). For systems in which the vertical direction is represented (e.g. HPE and EE systems), this means that the upper and lower boundary conditions must not appear explicitly in the set of governing equations. However, for systems cast in mass-based coordinates [such as the HPE system in a pressure-based coordinate (e.g. SHB78), and the EE system in a hydrostatic pressure-based coordinate (Laprise, 1992, L92 hereafter)], the upper and lower boundary conditions actually appear inside the set of equations through vertical integral operators with definite bounds at the boundaries of the vertical domain. When they are present, it is assumed that these integral operators can be eliminated, i.e. that \\(\\overline{\\mathcal{L}}\\), \\(\\mathcal{L}^{*}\\) can be transformed to "unbounded" operators by application of appropriate vertical linear differential operators to the prognostic equations which originally involve integral operators, in order that, for instance,
(5) rewrites as: \\[\\frac{\\partial}{\\partial t}\\left(\\begin{array}{c}l_{1}\\mathcal{X}_{1}\\\\ \\vdots\\\\ l_{P}\\mathcal{X}_{P}\\end{array}\\right)=\\left(\\begin{array}{ccc}l_{1}\\overline{\\mathcal{L}}_{11}&\\cdots&l_{1}\\overline{\\mathcal{L}}_{1P}\\\\ \\vdots&\\ddots&\\vdots\\\\ l_{P}\\overline{\\mathcal{L}}_{P1}&\\cdots&l_{P}\\overline{\\mathcal{L}}_{PP}\\end{array}\\right).\\left(\\begin{array}{c}\\mathcal{X}_{1}\\\\ \\vdots\\\\ \\mathcal{X}_{P}\\end{array}\\right) \\tag{11}\\] where \\(P\\) is the number of prognostic variables of the unbounded system, \\((l_{1},\\ldots,l_{P})\\) are linear vertical operators, and the \\(l_{i}\\overline{\\mathcal{L}}_{ij}\\) are linear spatial operators which no longer contain any reference to the upper and lower boundaries. The transformed system obtained for \\(\\overline{\\cal L}\\) can then be written as: \\[\\frac{\\partial l.{\\cal X}}{\\partial t}=l.\\overline{\\cal L}.{\\cal X} \\tag{12}\\] where \\(l\\) is the diagonal matrix \\((l_{1},\\ldots,l_{P})\\). A similar condition must hold for \\({\\cal L}^{*}\\) as well: it is assumed that applying the same operator \\(l\\) to \\({\\cal L}^{*}\\) leads to an operator \\(l{\\cal L}^{*}\\) for which \\(l_{i}{\\cal L}^{*}_{ij}\\) does not contain any reference to the upper and lower boundaries for \\((i,\\;j)\\;\\in(1,\\ldots,P)\\). The first condition for the following analyses is:

[C1]: There exists a linear operator \\(l\\) such that \\(l\\overline{\\cal L}\\) and \\(l{\\cal L}^{*}\\) have no reference to the upper and lower boundaries.

The system (12) is henceforth referred to as the "unbounded" system.

### Conditions on the stability of the \\(\\overline{\\cal X}\\) state

The aim of these analyses is to determine under which conditions a stationary state \\(\\overline{\\cal X}\\) for \\(l\\overline{\\cal L}\\) will remain a stable equilibrium-state in the time-discretised context, provided it is a stable equilibrium-state in the time-continuous context. Hence the analyses will be restricted to stationary states \\(\\overline{\\cal X}\\) which are in stable equilibrium. Given the linear context used here, a physical transposition of this condition is:

[C2]: For any perturbation \\({\\cal X}(t=0)\\) around \\(\\overline{\\cal X}\\) with a bounded energy-density, the time-evolution \\({\\cal X}(t)\\) resulting from (12) must have a bounded energy-density.

The condition is formulated with energy-density instead of total energy because the domain is unbounded in space. The complex eigenmodes of the unbounded system (12) are the complex functions of space \\({\\cal X}({\\bf r})\\) which satisfy: \\[l\\overline{\\cal L}{\\cal X}({\\bf r})=\\overline{\\lambda}l{\\cal X}({\\bf r}) \\tag{13}\\] In terms of these eigenmodes, [C2] translates into the equivalent requirement:

[C2']: Any eigenmode of (13) with a bounded energy-density must be neutral, i.e. its eigenvalue must be purely imaginary: \\[\\overline{\\lambda}\\in i{\\rm I\\!R}. \\tag{14}\\]

Two further conditions require a structural similarity between the \\(l\\), \\(\\overline{\\cal L}\\) and \\({\\cal L}^{*}\\) operators:

[C3]: For any normal mode \\({\\cal X}\\) of the unbounded linear atmospheric system with a structure \\(f({\\bf r})\\), each \\(f_{i}({\\bf r})\\) must be an eigenfunction of \\(l_{i}\\): \\[\\forall i\\in(1,\\ldots,P)\\;,\\;\\;l_{i}f_{i}({\\bf r})=\\xi_{i}f_{i}({\\bf r})\\;\\;\\mbox{with }\\xi_{i}\\in{\\rm C}^{*}. \\tag{15}\\]

and:

[C4]: For any normal mode \\({\\cal X}\\) of the unbounded linear atmospheric system with a structure \\(f({\\bf r})\\), \\(\\overline{{\\cal L}}_{ij}.f_{j}({\\bf r})\\) [resp. \\({\\cal L}^{*}_{ij}.f_{j}({\\bf r})\\)] must be proportional to \\(f_{i}({\\bf r})\\): \\[\\forall(i,j),\\;\\;l_{i}\\overline{{\\cal L}}_{ij}.f_{j}({\\bf r})=\\overline{\\mu}_{ij}f_{i}({\\bf r})\\;\\;\\mbox{and}\\;\\;l_{i}{\\cal L}^{*}_{ij}.f_{j}({\\bf r})=\\mu^{*}_{ij}f_{i}({\\bf r}),\\;\\mbox{with}\\;\\;(\\overline{\\mu}_{ij},\\,\\mu^{*}_{ij})\\;\\in\\;{\\rm C}. \\tag{16}\\]
As will be seen below, these latter two conditions have the important consequence that for any normal mode of the unbounded system, each individual time-discretised prognostic equation for \\({\\cal X}_{i}({\\bf r})\\) becomes a scalar equation. This key ingredient makes the analysis straightforward for every member of the ICI class.

### Comments

Since the set of normal modes of the unbounded system encompasses the set of normal modes of the bounded system, the transformation from the bounded system to the unbounded system is not likely to "mask" some instabilities of the original system, unless the causes of the instability lie in the boundary conditions themselves. However, discretised analyses make it possible to clarify this point by showing that the stability of the bounded and unbounded systems is actually found to be similar in practice. Besides, discretised analyses of the unbounded system are by nature impossible to perform, hence the continuous analysis is the only way to estimate the intrinsic stability of the unbounded system and to demonstrate that instabilities, when they occur in a practical application, are not due to a weakness in the spatial discretisation or even to the boundary conditions, but actually to the time-discretised propagation of free modes inside the atmosphere. In spite of their apparently abstract and constraining form, conditions [C1]-[C4] are easy to verify with routine normal-mode analysis techniques when examining a particular concrete meteorological system. The condition [C2'] restricts the set of stationary states \\(\\overline{\\mathcal{X}}\\) around which the analysis is meaningful, and conditions [C1], [C3], [C4] restrict the spectrum of meteorological contexts accessible to the analysis, since they require a qualitative similarity between the \\(l\\), \\(\\overline{\\mathcal{L}}\\) and \\(\\mathcal{L}^{*}\\) operators. As stated in SHB78, analyses performed under this type of conditions "grossly exaggerate the stability of the scheme", since in more realistic meteorological contexts, the atmospheric and reference operators can be qualitatively much more different than imposed by this condition.

## 5 Time-Discretised Space-Continuous Analysis

The analysis examines the stability of the time-discretised system for perturbations which have a time-continuous normal-mode structure. Hence we consider a given function \\(f=(f_{1},\\ldots,f_{P})\\) which is a normal-mode structure for the time-continuous system, and we determine the normal modes of the time-discretised system which have the same structure \\(f\\), by solving the equation: \\[\\widehat{\\mathcal{X}}_{(t=\\Delta t)}f(\\mathbf{r})=\\lambda\\widehat{\\mathcal{X}}_{(t=0)}f(\\mathbf{r}) \\tag{17}\\] where \\(\\widehat{\\mathcal{X}}_{(t=0)}\\) and \\(\\lambda\\) are the unknowns, and \\(\\widehat{\\mathcal{X}}_{(t=\\Delta t)}\\) is determined using the time-discretisation scheme (7) or (9). For schemes using three time levels (such as Leap-Frog or extrapolating 2-TL schemes), a similar relationship \\(\\widehat{\\mathcal{X}}_{(t=-\\Delta t)}f(\\mathbf{r})=\\lambda^{-1}\\widehat{\\mathcal{X}}_{(t=0)}f(\\mathbf{r})\\) must be added. If for some solution \\(|\\lambda|>1\\) (resp. \\(<1\\)), the scheme is unstable (resp. damping) for this particular mode. Writing \\(\\overline{\\lambda}=i\\overline{\\omega}\\) for the time-continuous eigenvalue of the mode, the ratio \\(\\mathrm{Arg}(\\lambda)/(\\overline{\\omega}\\Delta t)\\) gives the relative phase-speed error of the scheme for this mode.
The analysis is described here for a 2-TL discretisation (7), but the transformation to a 3-TL scheme as well as the addition of time-filters such as \\(\\kappa\\) or \\(\\epsilon\\) are straightforward. In the remainder of this section, the notation \\(\\mathcal{X}(\\mathbf{r},t)\\) is replaced by the usual superscript notation for time-discretised variables \\(\\mathcal{X}^{t}(\\mathbf{r})\\), as in section 3. As a consequence of the discussion in section 3 and applying (17), \\({\\cal X}^{+(0)}\\) can be written as \\({\\cal X}^{+(0)}({\\bf r})=\\mu(\\lambda){\\cal X}^{0}({\\bf r})\\), where \\(\\mu(\\lambda)\\) depends on the choice of the initial guess \\({\\cal X}^{+(0)}\\) (e.g. \\(\\mu=1\\) for a 2-TL non-extrapolating scheme, \\(\\mu=2-1/\\lambda\\) for a 2-TL extrapolating scheme, etc.). The original unbounded system (12) is thus time-discretised following (7): \\[l{\\cal X}^{+(0)}({\\bf r}) = \\mu(\\lambda)\\,l{\\cal X}^{0}({\\bf r}) \\tag{18}\\] \\[\\frac{l{\\cal X}^{+(n)}({\\bf r})-l{\\cal X}^{0}({\\bf r})}{\\Delta t} = \\frac{l\\overline{\\cal L}.{\\cal X}^{+(n-1)}({\\bf r})+l\\overline{\\cal L}.{\\cal X}^{0}({\\bf r})}{2}+\\frac{l{\\cal L}^{*}.{\\cal X}^{+(n)}({\\bf r})-l{\\cal L}^{*}.{\\cal X}^{+(n-1)}({\\bf r})}{2} \\tag{19}\\] for \\(n=1,\\ldots,N_{\\rm iter}\\). Owing to [C3]-[C4], each prognostic equation becomes a scalar equation for the considered structure \\(f({\\bf r})\\); gathering the equations for the successive iterates and applying (17) then leads to a homogeneous linear system \\({\\bf M}.\\widehat{\\cal X}=0\\), whose blocks are built from: \\[\\left(M_{1}\\right)_{ij} = -\\delta_{ij}-\\frac{\\Delta t}{2}\\frac{\\overline{\\mu}_{ij}}{\\xi_{i}} \\tag{20}\\] \\[\\left(M_{2}\\right)_{ij} = -\\frac{\\Delta t}{2}\\frac{1}{\\xi_{i}}\\left(\\overline{\\mu}_{ij}-\\mu_{ij}^{*}\\right) \\tag{21}\\] \\[\\left(M_{3}\\right)_{ij} = +\\delta_{ij}-\\frac{\\Delta t}{2}\\frac{\\mu_{ij}^{*}}{\\xi_{i}} \\tag{22}\\] where \\(\\delta_{ij}\\) is the \\((i,j)\\) Kronecker symbol. The possible values of \\(\\lambda\\) for the normal-mode structure \\(f({\\bf r})\\) that we examine are thus given by the roots of the following polynomial equation in \\(\\lambda\\): \\[{\\rm Det}({\\bf M})=0 \\tag{23}\\] The dependencies on \\(\\lambda\\) are limited to the top- and bottom-left blocks. For a non-extrapolating 2-TL scheme the degree of the polynomial is \\(P\\), and there are \\(P\\) physical modes associated with this structure \\(f({\\bf r})\\). For time-schemes making use of three time levels (i.e. 3-TL schemes, extrapolating 2-TL schemes, or 2-TL schemes with a time-filter), the degree becomes \\(2P\\), and there are \\(P\\) additional computational modes. The growth-rate for any of these modes is given by the modulus of the corresponding complex root of (23). The growth-rate of the time scheme for the considered structure \\(f\\) is then defined by the maximum value of the modulus of these \\(P\\) or \\(2P\\) roots: \\[\\Gamma(f)={\\rm Max}\\left(|\\lambda_{i}(f)|\\right),\\,\\,\\,i\\in(1,\\ldots,P)\\,\\,\\,\\left[{\\rm or}\\,\\,\\,(1,\\ldots,2P)\\,\\right] \\tag{24}\\] An analytical solution of (23) is not possible for large values of \\(P\\), and a numerical solution is often needed. In this paper we will call "asymptotic growth-rate" the growth-rate for the matrix \\({\\bf M}\\) in the limit of large time-steps, \\(\\Delta t\\rightarrow\\infty\\). The analysis of the asymptotic growth-rate is easier than for finite time-steps, since the matrix \\({\\bf M}\\) of (23) degenerates to a matrix \\({\\bf M}^{\\prime}\\) in which the \\(\\delta_{ij}\\) terms vanish and the factor \\(\\Delta t/2\\) cancels out. Another advantage of asymptotic growth-rates is that they appear to be independent of the structure \\(f\\) in most of the cases examined below, thus simplifying considerably the interpretation of the results. When the asymptotic growth-rate is independent of the structure \\(f\\), the growth-rate of the scheme can be defined by the growth-rate obtained for this scheme with any structure \\(f\\).
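For non-extrapolating 2-TL schemes, the roots of (23) can equivalently be obtained by composing the iteration map directly: for a fixed structure \\(f\\), (7) reduces to \\({\\cal X}^{+(n)}=C.{\\cal X}^{+(n-1)}+D.{\\cal X}^{0}\\) with constant matrices \\(C\\) and \\(D\\), and \\(\\Gamma\\) is the spectral radius of the composed matrix. The sketch below is our own formulation of this procedure, illustrated on one mode of the 1D shallow-water system analysed in the next section (parameter values are indicative):

```python
import numpy as np

# Growth-rate of a 2-TL non-extrapolating ICI scheme by composing the
# iteration map X+(n) = C.X+(n-1) + D.X0 implied by (7); Abar and Aref
# hold the coefficients mu_bar_ij/xi_i and mu*_ij/xi_i of one mode f.
def growth_rate(Abar, Aref, dt, n_iter):
    P = Abar.shape[0]
    I = np.eye(P, dtype=complex)
    Linv = np.linalg.inv(I - 0.5 * dt * Aref)
    C = Linv @ (0.5 * dt * (Abar - Aref))
    D = Linv @ (I + 0.5 * dt * Abar)
    G = I.copy()                 # X+(0) = X0
    for _ in range(n_iter):
        G = C @ G + D            # after the loop, X+(N) = G.X0
    return np.abs(np.linalg.eigvals(G)).max()

# illustration: one 1D shallow-water mode (u, phi), cf. section 6
k, Rd, Cp, Tref, alpha = 1e-3, 287.0, 1004.5, 300.0, 0.4
phi_ref = 4.0 * (Rd**2 / Cp) * Tref
Abar = np.array([[0.0, -1j * k],
                 [-1j * k * (1.0 + alpha) * phi_ref, 0.0]])
Aref = np.array([[0.0, -1j * k],
                 [-1j * k * phi_ref, 0.0]])
for n in (1, 2):
    print(f"N_iter={n}:  Gamma = {growth_rate(Abar, Aref, dt=1e5, n_iter=n):.3f}")
```

At such a large time-step this should return \\(\\Gamma\\simeq|1+2\\alpha|\\) for the SI scheme and \\(\\Gamma\\simeq 1\\) for \\(N_{\\rm iter}=2\\), anticipating the shallow-water criteria derived next.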
When the growth-rate for a given scheme is one (or less) for any mode of any normal-mode structure \\(f\\), the scheme is then said to be "unconditionally stable" (understood as "in \\(\\Delta t\\)"). The criterion for unconditional stability obtained through \\({\\rm Det}({\\bf M}^{\\prime})=0\\) is not only of academic interest, since the considered time-schemes are actually used with large time-steps in NWP: practice shows that a scheme which is not unconditionally stable in the simplified context of these analyses has little chance of being robust enough for use in real conditions.

## 6 Simple examples: 1D systems

### 1D Shallow-water system

The linearised 1D shallow-water system in a horizontal direction \\(x\\) can be classically written in terms of the wind \\(u\\) along \\(x\\), and the geopotential \\(\\phi\\): \\[\\frac{\\partial u}{\\partial t} = -\\frac{\\partial\\phi}{\\partial x} \\tag{25}\\] \\[\\frac{\\partial\\phi}{\\partial t} = -\\overline{\\phi}\\frac{\\partial u}{\\partial x} \\tag{26}\\] This system is also valid for the external mode of an isothermal atmosphere in the HPE and EE systems, replacing \\(\\overline{\\phi}\\) by \\(4(R^{2}/C_{p})\\overline{T}\\) (which is done in the following). The reference system is obtained by replacement of \\(\\overline{T}\\) by \\(T^{*}\\), and a "non-linearity" factor is defined through: \\(\\alpha=(\\overline{T}-T^{*})/T^{*}\\). Solution of (13) implies that if \\(\\overline{T}<0\\), \\(\\overline{\\lambda}\\in i{\\rm I\\!R}\\implies u=\\widehat{u}\\exp(rx)\\), which does not have a bounded energy-density for \\(rx\\longrightarrow+\\infty\\). Hence [C2'] requires \\(\\overline{T}\\geq 0\\) (i.e. \\(\\alpha\\geq-1\\)). The boundary conditions do not appear explicitly in the system, hence \\(l\\) can be taken as the identity operator to satisfy [C1] and [C3]. In the notations of sections 4 and 5 we have \\(P=2\\), \\({\\cal X}_{1}=u\\) and \\({\\cal X}_{2}=\\phi\\). The normal modes of the system write \\(\\psi(x)=\\widehat{\\psi}\\exp(ikx)\\) with \\(k\\in{\\rm I\\!R}\\) and \\(\\psi=(u,\\phi)\\). Conditions [C1]-[C4] are easily checked to be satisfied. For a 3-TL SI scheme, (23) writes: \\[\\left(\\frac{\\lambda^{2}-1}{2\\Delta t}\\right)^{2}=-\\frac{k^{2}c^{*2}}{4}\\left(\\lambda^{2}+1\\right)\\left(\\lambda^{2}+1+2\\alpha\\lambda\\right), \\tag{27}\\] where \\(c^{*}=2\\sqrt{(R/C_{p})RT^{*}}\\). In the limit of long time-steps, the LHS term disappears, and the four roots of the RHS give the "asymptotic" numerical growth-rate for the two physical and two computational modes of the system. The two roots of the first factor have a neutral stability, while those of the second factor have a modulus equal to 1 if \\(-1\\leq\\alpha\\leq 1\\). The criterion (on \\(\\overline{\\cal X}\\), \\({\\cal X}^{*}\\)) for unconditional stability (in \\(\\Delta t\\)) of the 3-TL SI scheme is thus: \\(0\\leq\\overline{T}\\leq 2T^{*}\\). Some further algebraic manipulations from (23) with \\(\\Delta t=\\infty\\) show that this criterion remains unchanged when increasing \\(N_{\\rm iter}\\). For a 2-TL SI non-extrapolating (\\(\\mu=1\\)) scheme, (23) becomes: \\[\\left(\\frac{\\lambda-1}{\\Delta t}\\right)^{2}=-\\frac{k^{2}c^{*2}}{4}\\left(\\lambda+1\\right)\\left(\\lambda+1+2\\alpha\\right), \\tag{28}\\] and the criterion for unconditional stability becomes more constraining than for the 3-TL scheme: \\(-1\\leq\\alpha\\leq 0\\) (i.e. \\(0\\leq\\overline{T}\\leq T^{*}\\)).
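Both criteria can be checked mechanically: in the limit \\(\\Delta t\\to\\infty\\), only the RHS factors of (27) and (28) survive, and their roots are obtained in a few lines (our naming):

```python
import numpy as np

def gamma_3tl(alpha):   # roots of (l^2+1)(l^2 + 2*alpha*l + 1), from (27)
    return np.abs(np.roots(np.polymul([1, 0, 1], [1, 2 * alpha, 1]))).max()

def gamma_2tl(alpha):   # roots of (l+1)(l + 1 + 2*alpha), from (28)
    return np.abs(np.roots(np.polymul([1, 1], [1, 1 + 2 * alpha]))).max()

for alpha in np.arange(-1.0, 1.51, 0.25):
    print(f"alpha={alpha:+.2f}  Gamma(3-TL)={gamma_3tl(alpha):.3f}"
          f"  Gamma(2-TL)={gamma_2tl(alpha):.3f}")
# neutral for -1 <= alpha <= 1 (3-TL) but only for -1 <= alpha <= 0 (2-TL)
```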
The asymptotic growth-rate of a 2-TL non-extrapolating ICI scheme with \\(N_{\\rm iter}\\) iterations is given by: \\[\\Gamma={\\rm Max}\\left(1,\\left|2(-\\alpha)^{N_{\\rm iter}}-1\\right|\\right) \\tag{29}\\] The domain for unconditional stability is thus \\(-1\\leq\\alpha\\leq 0\\) for odd values of \\(N_{\\rm iter}\\), and \\(-1\\leq\\alpha\\leq 1\\) for even values.

### 1D vertical acoustic system in mass-based coordinate

We consider a vertical 1D compressible atmospheric column satisfying the conditions listed in section 2. A regular mass-based coordinate \\(\\sigma=(\\pi/\\pi_{s})\\) is chosen following L92, by making \\(\\eta=\\sigma\\), \\(A(\\sigma)=0\\) and \\(B(\\sigma)=\\sigma\\) in equation (31) of L92. The variable \\(\\pi\\) denotes the hydrostatic pressure, and \\(\\pi_{s}\\) the surface hydrostatic pressure. The system is readily obtained from equations (36)-(45) of L92 by removing all horizontal dependencies. The surface hydrostatic-pressure does not evolve in time (see equation (45) in L92). The equations are linearised around a resting atmospheric-state \\(\\overline{\\cal X}\\) and a resting reference-state \\({\\cal X}^{*}\\), both satisfying the conditions of section 2. The temperatures \\(\\overline{T}\\) and \\(T^{*}\\) are taken uniform, and we still define the "non-linearity" factor by: \\(\\alpha=(\\overline{T}-T^{*})/T^{*}\\). The pressure values \\(\\overline{p}\\) and \\(p^{*}\\) are assumed to be equal to a common value \\(\\pi_{0}\\) at the origin (\\(\\sigma=1\\)). Since \\(\\overline{\\cal X}\\) and \\({\\cal X}^{*}\\) are hydrostatically balanced, \\(\\overline{p}=\\overline{\\pi}=\\sigma\\pi_{0}\\) and \\(p^{*}=\\pi^{*}=\\sigma\\pi_{0}\\) at any level. The thermodynamics equation decouples, and the linear system around \\(\\overline{\\cal X}\\) for the vertical velocity \\(w\\) and the pressure deviation \\(p^{\\prime}=p-\\overline{p}\\) writes in standard notations: \\[\\frac{\\partial w}{\\partial t} = \\frac{g}{\\pi_{0}}\\frac{\\partial p^{\\prime}}{\\partial\\sigma} \\tag{30}\\] \\[\\frac{\\partial p^{\\prime}}{\\partial t} = \\frac{C_{p}}{C_{v}}\\frac{g\\pi_{0}}{R\\overline{T}}\\sigma^{2}\\frac{\\partial w}{\\partial\\sigma} \\tag{31}\\] The same derivation holds for \\({\\cal L}^{*}\\) and leads to an operator formally identical to the RHS of (30)-(31), still acting on \\((w,p^{\\prime})\\), but with \\(\\overline{T}\\) replaced by \\(T^{*}\\). The solution of (13) implies that if \\(\\overline{T}<0\\), \\(\\overline{\\lambda}\\in i{\\rm I\\!R}\\implies w=\\widehat{w}\\,\\sigma^{r}\\) with \\(r<-1\\) or \\(r>0\\). For the mode with \\(r<-1\\), the energy-density is not bounded when \\(\\sigma\\to 0\\). If \\(\\overline{T}\\geq 0\\), the structure of the normal modes of (30)-(31) is given by: \\[w(\\sigma) = \\widehat{w}\\,\\sigma^{(i\\nu-1/2)}=\\widehat{w}\\,f_{1}(\\sigma) \\tag{32}\\] \\[p^{\\prime}(\\sigma) = \\widehat{p^{\\prime}}\\,\\sigma^{(i\\nu+1/2)}=\\widehat{p^{\\prime}}\\,f_{2}(\\sigma) \\tag{33}\\] where \\(\\nu\\) is a real number, and they have a bounded energy-density. The condition [C2'] therefore requires \\(\\overline{T}\\geq 0\\) (i.e. \\(\\alpha\\geq-1\\)). Finally, [C4] is trivially checked to be satisfied.
For a 3-TL SI scheme, (23) writes: \\[\\left(\\frac{\\lambda^{2}-1}{2\\Delta t}\\right)^{2}=-\\frac{(\\nu^{2}+1/4)c^{*2}}{4H^{*2}}\\left(\\lambda^{2}+1\\right)\\left(\\lambda^{2}+1-\\frac{2\\alpha\\lambda}{1+\\alpha}\\right), \\tag{34}\\] where \\(c^{*}=\\sqrt{(C_{p}/C_{v})RT^{*}}\\) and \\(H^{*}=RT^{*}/g\\). Comparison of (34) and (27) shows that the stability of the 1D vertical system for \\(\\alpha\\) is the same as that of the previous shallow-water system for \\(\\alpha^{\\prime}=-\\alpha/(1+\\alpha)\\). Hence the criteria for unconditional stability directly follow from those of the previous case, by similarity arguments. The criterion for unconditional stability of the 3-TL SI scheme is \\(\\alpha\\geq(-1/2)\\), i.e. \\(\\overline{T}\\geq(1/2)T^{*}\\), and this criterion remains unchanged when increasing \\(N_{\\rm iter}\\). For a 2-TL SI non-extrapolating (\\(\\mu=1\\)) scheme, (23) becomes: \\[\\left(\\frac{\\lambda-1}{\\Delta t}\\right)^{2}=-\\frac{(\\nu^{2}+1/4)c^{*2}}{4H^{*2}}\\left(\\lambda+1\\right)\\left(\\lambda+1-\\frac{2\\alpha}{1+\\alpha}\\right), \\tag{35}\\] and the criterion for unconditional stability becomes \\(\\alpha\\geq 0\\) (i.e. \\(\\overline{T}\\geq T^{*}\\)), which is more constraining than for the 3-TL SI scheme. For iterated 2-TL schemes, the criterion for unconditional stability is \\(\\alpha\\geq(-1/2)\\) for even values of \\(N_{\\rm iter}\\), and \\(\\alpha\\geq 0\\) for odd values.

### 1D vertical acoustic system in height-based coordinate

In this example we show that the stability properties of the 1D vertical system may depend on the coordinate. The framework is taken as in the previous example, except that the vertical coordinate is the height \\(z\\). The linearised system \\(\\overline{\\cal L}\\) (cf. e.g. Caya and Laprise, 1999) writes: \\[\\frac{\\partial w}{\\partial t} = -R\\overline{T}\\frac{\\partial q^{\\prime}}{\\partial z}+g\\frac{T^{\\prime}}{\\overline{T}} \\tag{36}\\] \\[\\frac{\\partial T^{\\prime}}{\\partial t} = -\\frac{R\\overline{T}}{C_{v}}\\frac{\\partial}{\\partial z}w \\tag{37}\\] \\[\\frac{\\partial q^{\\prime}}{\\partial t} = \\left(\\frac{g}{R\\overline{T}}-\\frac{C_{p}}{C_{v}}\\frac{\\partial}{\\partial z}\\right)w, \\tag{38}\\] where \\(q^{\\prime}=q-\\overline{q}\\), \\(q=\\ln(p/p_{0})\\), \\(\\overline{q}=-gz/R\\overline{T}\\), \\(p_{0}\\) is a reference pressure, and \\(p\\) is the true pressure. The normal modes of \\(\\overline{\\cal L}\\) have the following form, for \\(\\psi=(w,T^{\\prime},q^{\\prime})\\): \\[\\psi=\\widehat{\\psi}\\,\\exp\\left[\\left(i\\nu+\\frac{1}{2\\overline{H}}\\right)z\\right] \\tag{39}\\] where \\(\\overline{H}=(R\\overline{T}/g)\\). The reference system \\({\\cal L}^{*}\\) is defined in a similar way, replacing \\(\\overline{T}\\) by \\(T^{*}\\) and \\(\\overline{q}\\) by \\(q^{*}=-gz/RT^{*}\\). It should be noted that the structure of the normal modes of \\({\\cal L}^{*}\\) is not the same as for \\(\\overline{\\cal L}\\), since the characteristic height for \\({\\cal L}^{*}\\) is \\(H^{*}=(RT^{*}/g)\\). For a 2-TL SI non-extrapolating (\\(\\mu=1\\)) scheme, (23) becomes: \\[\\left(\\frac{\\lambda-1}{\\Delta t}\\right)^{2}=\\frac{c^{*2}}{4}\\left(i\\nu+\\frac{1}{2\\overline{H}}\\right)(\\lambda+1+2\\alpha)\\left[i\\nu\\left(\\lambda+1\\right)-\\frac{1}{H^{*}}\\left(\\frac{1+2\\alpha}{1+\\alpha}\\right)\\left(\\lambda+1-\\frac{4\\alpha}{1+2\\alpha}\\right)\\right] \\tag{40}\\] where \\(c^{*}=\\sqrt{(C_{p}/C_{v})RT^{*}}\\).
In the height-coordinate framework, the asymptotic growth-rate depends on the structure, since \\(\\nu\\) appears in one of the factors which become dominant at large time-steps. The interpretation of the results is thus slightly more complicated than in the case of a mass-based coordinate. For the most external structure (\\(\\nu=0\\)), the asymptotic growth-rate is given by the roots of: \\[(\\lambda+1+2\\alpha)\\left(\\lambda+1-\\frac{4\\alpha}{1+2\\alpha}\\right)=0. \\tag{41}\\] This polynomial consists in a combination of two factors similar to those obtained in the two previous examples (through a formal replacement of \\(2\\alpha\\) by \\(\\alpha\\) for the second factor). As a consequence, the unconditional stability domains can be readily deduced from these previous examples: the external structure \\(\\nu=0\\) is unstable for any value \\(\\alpha\\neq 0\\) when \\(\\Delta t\\longrightarrow\\infty\\). The instability is thus much more severe than in the case of a mass-based vertical coordinate, for which \\(\\alpha\\geq 0\\) was sufficient to ensure unconditional stability. Moreover, slightly shorter structures with vertical wavenumbers of the order of \\((1/\\overline{H})\\) are found to be more unstable than the external one, and for these structures, the unconditional stability criterion (\\(\\alpha=0\\)) remains unchanged when \\(N_{\\rm iter}\\) is increased. Fig. 1 shows the asymptotic growth-rates for the 2-TL SI (\\(N_{\\rm iter}=1\\)) scheme and the 2-TL ICI scheme with \\(N_{\\rm iter}=2\\) for \\(\\nu=0.0001\\) m\\({}^{-1}\\), a structure for which the instability is close to its maximum. The severe instability of the 2-TL SI scheme is only alleviated, but not suppressed, by choosing \\(N_{\\rm iter}=2\\). For the 3-TL SI scheme, the external structure \\(\\nu=0\\) is unconditionally stable for \\(-0.25\\leq\\alpha\\leq 1\\), but slightly shorter structures as above are found unstable at large time-steps as soon as \\(\\alpha\\neq 0\\) (very short modes are stable, however). Fig. 1 depicts the asymptotic growth-rates for two structures: the external structure \\(\\nu=0\\), and a long structure \\(\\nu=0.0001\\) m\\({}^{-1}\\). The growth-rate of the long structure for a moderate time-step \\(\\Delta t=30\\) s with a time-decentering \\(\\epsilon=0.1\\) (as in Caya and Laprise, 1999) is also depicted: the practical instability becomes small in these conditions, and the 3-TL scheme cannot be positively rejected, especially considering the fact that dissipative processes could act in a way to stabilize the scheme. The practical impact of this predicted weak instability for NWP applications could easily be evaluated with a \\(z\\)-coordinate model, using an experimental set-up similar to the one used here, and then progressively extending the set-up to approach real-case experimental conditions.

### Comments

In the three simple examples examined above, the criterion for unconditional stability is seen to be more constraining for the 2-TL non-extrapolating SI scheme than for the 3-TL SI scheme. The 2-TL extrapolating SI scheme is found to have similar domains of unconditional stability to its non-extrapolating counterpart (not shown). For mass-based coordinates, if both vertically propagating acoustic waves and external gravity waves are simultaneously allowed by a given equation system, the above analyses suggest that 2-TL SI schemes are so constraining that there is no domain for unconditional stability.
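The absence of a common stability domain can be read off directly from the two asymptotic factors derived above for mass-based coordinates: the external-gravity factor of (28) gives \\(\\lambda=-1-2\\alpha\\), and the vertical-acoustic factor of (35) gives \\(\\lambda=-1+2\\alpha/(1+\\alpha)\\). A short check (our script):

```python
import numpy as np

# 2-TL SI asymptotic roots in mass-based coordinates: both the external
# gravity-wave root and the vertical acoustic root must have modulus <= 1.
for alpha in np.arange(-0.8, 1.21, 0.2):
    g_ext = abs(-1.0 - 2.0 * alpha)                  # factor of (28)
    g_ac = abs(-1.0 + 2.0 * alpha / (1.0 + alpha))   # factor of (35)
    tag = "stable" if max(g_ext, g_ac) <= 1.0 + 1e-9 else "unstable"
    print(f"alpha={alpha:+.1f}  |ext|={g_ext:.2f}  |acoustic|={g_ac:.2f}  {tag}")
# the domains {-1 <= alpha <= 0} and {alpha >= 0} intersect only at alpha = 0
```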
For height-based coordinates, the long vertically propagating acoustic waves are always unstable in the 2-TL SI scheme. This leads one to suspect that, in opposition to 3-TL SI schemes, classical 2-TL SI schemes are not suitable for the EE system with any vertical coordinate. The HPE system with a 2-TL SI scheme did not suffer from this problem, since vertically propagating waves are not allowed in the HPE system (i.e. the 1D column atmosphere is stationary). The intrinsic instability of the 2-TL SI scheme for the EE system is confirmed in section 7 for mass-based coordinates. When a second-order time-filter with parameter \\(\\kappa\\) is applied to the system examined in the first example for a 2-TL SI non-extrapolating scheme, (28) becomes: \\[\\left(\\frac{\\lambda-1}{\\Delta t}\\right)^{2}=-\\frac{k^{2}c^{*2}}{4}\\left[(\\lambda+1)+\\kappa\\left(\\lambda-2+\\frac{1}{\\lambda}\\right)\\right]\\left[(\\lambda+1+2\\alpha)+\\kappa\\left(\\lambda-2+\\frac{1}{\\lambda}\\right)\\right], \\tag{42}\\] and the criterion for unconditional stability becomes \\(-1\\leq\\alpha\\leq 2\\kappa\\). For the second example, the similarity argument shows that the criterion for unconditional stability becomes: \\(\\alpha\\geq-2\\kappa/(1+2\\kappa)\\). The domains of stability of 3-TL and 2-TL time-filtered ICI schemes for the first two examples are summarized in Table 1. The application of a time-filter thus makes it possible to alleviate the stability constraints for 2-TL SI schemes, and a non-vanishing domain for unconditional stability is recovered. The width of the unconditional stability domain increases with \\(\\kappa\\). This is found to hold for the 1D vertical system in height-based coordinates as well, which is consistent with the results of Semazzi et al. (1995) and Qian et al. (1998): they succeeded in solving numerically the EE system at low resolution with a 2-TL SI scheme; however, the use of a large value \\(\\kappa=0.5\\) was required to stabilize the model. As a consequence, the forecasts suffered from a dramatic loss of energy with increasing forecast range, and ceased to be of meteorological interest after 2-3 days. Moreover, at high resolutions (and consequently steep orography) the use of a time-filter \\(\\kappa\\) is found experimentally to be an insufficient solution for eliminating the intrinsic instability of the scheme (not shown). If a high level of accuracy is desired for the EE system with a 2-TL classical SI time-discretisation and high resolution, a more robust scheme (e.g. with a larger value of \\(N_{\\rm iter}\\)) must be used. The above analyses show that the unconditional stability domain is dramatically reduced for odd values of \\(N_{\\rm iter}\\), hence ICI schemes with even values of \\(N_{\\rm iter}\\) are preferable for solving the EE system with a 2-TL scheme. The 1D vertical system in mass-based coordinates has been found to be more stable than its counterpart in height-based coordinates in a general way. For mass-based coordinates, the 3-TL SI and the 2-TL ICI schemes with even values of \\(N_{\\rm iter}\\) have an extended domain of unconditional stability, whilst for height-based coordinates, they are unstable as soon as \\(\\alpha\\neq 0\\). For 3-TL SI schemes, the necessity to have recourse to a first-order time-decentering \\(\\epsilon>0\\) to overcome this instability is a significant drawback, since it results in a spurious damping of transient phenomena, similarly to \\(\\kappa\\) but in an even less selective way, as mentioned above.
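The filtered criterion \\(-1\\leq\\alpha\\leq 2\\kappa\\) can be reproduced numerically from the RHS factors of (42): multiplying each factor by \\(\\lambda\\) turns it into a quadratic (the extra \\(\\lambda=0\\) root is harmless). A sketch, with our grid and tolerance:

```python
import numpy as np

# Asymptotic growth-rate of the kappa-filtered 2-TL SI scheme from the RHS
# factors of (42): (l + 1 + c) + kappa*(l - 2 + 1/l), multiplied by l,
# becomes (1 + kappa)*l^2 + (1 + c - 2*kappa)*l + kappa (c = 0 or 2*alpha).
def gamma_filtered(alpha, kappa):
    f1 = [1.0 + kappa, 1.0 - 2.0 * kappa, kappa]
    f2 = [1.0 + kappa, 1.0 + 2.0 * alpha - 2.0 * kappa, kappa]
    return np.abs(np.roots(np.polymul(f1, f2))).max()

for kappa in (0.0, 0.1, 0.5):
    ok = [a for a in np.arange(-1.0, 2.001, 0.001)
          if gamma_filtered(a, kappa) <= 1.0 + 1e-9]
    print(f"kappa={kappa:.1f}: unconditionally stable for alpha in "
          f"[{ok[0]:+.3f}, {ok[-1]:+.3f}]")
# recovers -1 <= alpha <= 2*kappa
```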
We think these differences give a substantial theoretical advantage to mass-based coordinates for solving the EE system with classical SI or ICI schemes.

## 7 Analysis of the EE system for isothermal atmospheres in mass-coordinate

The analysis of the isothermal HPE system for ICI schemes does not substantially modify the general conclusions drawn for the shallow-water case (not shown), hence the case of the EE system is directly examined. In this section, the EE system is cast in the pure unstretched terrain-following coordinate \\(\\sigma\\), which can be classically derived from the hydrostatic-pressure coordinate \\(\\pi\\) of L92 through \\(\\sigma=(\\pi/\\pi_{s})\\in[0,1]\\), where \\(\\pi_{s}\\) is the hydrostatic surface-pressure. The nonhydrostatic prognostic variables are the non-dimensionalised nonhydrostatic pressure departure \\({\\cal P}=(p-\\pi)/\\pi\\) (where \\(p\\) is the true pressure), and the vertical divergence \\({\\sf d}\\), which writes in \\(\\sigma\\) coordinate: \\[{\\sf d}=-\\frac{g}{RT}(1+{\\cal P})\\sigma\\frac{\\partial w}{\\partial\\sigma} \\tag{43}\\] The adiabatic system writes: \\[\\frac{d{\\bf V}}{dt} = -RT\\nabla q-\\frac{RT}{(1+{\\cal P})}\\nabla{\\cal P}-\\left(1+{\\cal P}+\\sigma\\frac{\\partial{\\cal P}}{\\partial\\sigma}\\right)\\nabla\\phi \\tag{44}\\] \\[\\frac{d{\\sf d}}{dt} = -\\frac{g^{2}(1+{\\cal P})}{RT}\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}\\right)\\left(1+\\sigma\\frac{\\partial}{\\partial\\sigma}\\right){\\cal P}+{\\sf d}(\\nabla{\\bf V}-D_{3})+\\frac{g(1+{\\cal P})}{RT}\\left[\\nabla w\\left(\\sigma\\frac{\\partial{\\bf V}}{\\partial\\sigma}\\right)\\right] \\tag{45}\\] \\[\\frac{dT}{dt} = -\\frac{RT}{C_{v}}D_{3} \\tag{46}\\] \\[\\frac{d{\\cal P}}{dt} = -\\left(1+{\\cal P}\\right)\\left(\\frac{C_{p}}{C_{v}}D_{3}+\\frac{\\dot{\\pi}}{\\pi}\\right) \\tag{47}\\] \\[\\frac{\\partial q}{\\partial t} = -\\int_{0}^{1}\\left(\\nabla{\\bf V}+{\\bf V}\\nabla q\\right)d\\sigma^{\\prime} \\tag{48}\\] where: \\[D_{3} = \\nabla{\\bf V}+{\\sf d}+\\frac{(1+{\\cal P})}{RT}\\nabla\\phi.\\left(\\sigma\\frac{\\partial{\\bf V}}{\\partial\\sigma}\\right) \\tag{49}\\] \\[\\phi = R\\int_{\\sigma}^{1}\\left(\\frac{T}{1+{\\cal P}}\\right)\\frac{d\\sigma^{\\prime}}{\\sigma^{\\prime}} \\tag{50}\\] \\[\\frac{\\dot{\\pi}}{\\pi} = {\\bf V}\\nabla q-\\frac{1}{\\sigma}\\int_{0}^{\\sigma}\\left(\\nabla{\\bf V}+{\\bf V}\\nabla q\\right)d\\sigma^{\\prime}, \\tag{51}\\] \\({\\bf V}\\) is the horizontal wind, and \\(\\nabla\\) is the horizontal derivative operator. The domain is restricted to a vertical plane along the \\((x,\\sigma)\\) directions for clarity. The system is linearized around a resting isothermal and hydrostatically-balanced state \\(\\overline{{\\cal X}}\\), the linear horizontal divergence being noted \\(D=\\nabla{\\bf V}\\): \\[\\frac{\\partial D}{\\partial t} = -R{\\cal G}\\nabla^{2}T+R\\overline{T}({\\cal G}-{\\cal I})\\nabla^{2}{\\cal P}-R\\overline{T}\\nabla^{2}q \\tag{52}\\] \\[\\frac{\\partial{\\sf d}}{\\partial t} = -\\frac{g^{2}}{R\\overline{T}}\\left(1+\\sigma\\frac{\\partial}{\\partial\\sigma}\\right)\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}\\right){\\cal P} \\tag{53}\\] \\[\\frac{\\partial T}{\\partial t} = -\\frac{R\\overline{T}}{C_{v}}\\left(D+{\\sf d}\\right) \\tag{54}\\] \\[\\frac{\\partial{\\cal P}}{\\partial t} = -\\frac{C_{p}}{C_{v}}\\left(D+{\\sf d}\\right)+{\\cal S}D \\tag{55}\\] \\[\\frac{\\partial q}{\\partial t} = -{\\cal N}D \\tag{56}\\] where the vertical integral operators \\({\\cal G}\\), \\({\\cal S}\\) and \\({\\cal N}\\) are defined by: \\[{\\cal G}X = \\int_{\\sigma}^{1}(X/\\sigma^{\\prime})d\\sigma^{\\prime} \\tag{57}\\] \\[{\\cal S}X = (1/\\sigma)\\int_{0}^{\\sigma}Xd\\sigma^{\\prime} \\tag{58}\\] \\[{\\cal N}X = \\int_{0}^{1}Xd\\sigma^{\\prime} \\tag{59}\\] The \\({\\cal L}^{*}\\) operator is similar to the RHS of this system, simply replacing \\(\\overline{T}\\) by \\(T^{*}\\).
### Verification of conditions [C1] - [C4]

The linear operator \\(l_{1}=\\sigma(\\partial/\\partial\\sigma)\\) is applied to (52), and \\(l_{4}=[{\\cal I}+\\sigma(\\partial/\\partial\\sigma)]\\) to (55). The \\(q\\) equation (56) decouples and we obtain a linear unbounded system, in which (52) and (55) are replaced by: \\[\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}\\right)\\frac{\\partial D}{\\partial t} = R\\nabla^{2}T-R\\overline{T}\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}+{\\cal I}\\right)\\nabla^{2}{\\cal P} \\tag{60}\\] \\[\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}+{\\cal I}\\right)\\frac{\\partial{\\cal P}}{\\partial t} = D-\\frac{C_{p}}{C_{v}}\\left(\\sigma\\frac{\\partial}{\\partial\\sigma}+{\\cal I}\\right)(D+{\\sf d}) \\tag{61}\\] Hence we have \\(P=4\\), \\({\\cal X}=(D,{\\sf d},T,{\\cal P})\\). Using the same operators (\\(l_{1}\\), \\(l_{4}\\)), the reference operator is also made free of any reference to the upper and lower boundary conditions, which shows that condition [C1] is satisfied. Solution of (13) shows that [C2] requires \\(\\overline{T}\\geq 0\\) (i.e. \\(\\alpha\\geq-1\\)). The normal modes of the system are then: \\[\\psi(x,\\sigma)=\\widehat{\\psi}\\;\\exp(ikx)\\,\\sigma^{(i\\nu-1/2)} \\tag{62}\\] where \\((k,\\nu)\\in\\mathbb{R}\\) and \\(\\psi\\) represents \\(D\\), \\({\\sf d}\\), \\(T\\) or \\({\\cal P}\\). In this particular case, the \\(f_{1},\\ldots,f_{4}\\) functions are all identical. The verification of [C3] and [C4] proceeds easily, as in previous sections.

### Results

As a first illustration of the results, the growth-rates of 2-TL non-extrapolating ICI schemes are shown in Fig. 2 as a function of \\(\\alpha=(\\overline{T}-T^{*})/T^{*}\\) with a moderate time step \\(\\Delta t=20\\ s\\), for three particular mode structures: (i) an external mode (\\(k=0.0005\\ \\mathrm{m}^{-1}\\), \\(\\nu=0\\)); (ii) an intermediate slantwise mode (\\(k=0.0005\\ \\mathrm{m}^{-1}\\), \\(\\nu=3\\)); (iii) a very internal vertical mode (\\(k=0\\), \\(\\nu=100\\)). The suspicions raised in the simple 1D examples are confirmed: the internal vertically-propagating mode is unstable for \\(\\alpha<0\\) while the external gravity mode is unstable for \\(\\alpha>0\\). Moreover, intermediate, slantwise-propagating modes are unstable in the whole domain, and the acoustic external mode (Lamb wave) appears to be unstable for \\(\\alpha<0\\) as well. The domain of stability vanishes, which confirms that the 2-TL SI scheme is not relevant for solving the EE system. The effect of introducing a time-filter to remedy this is discussed below. The asymptotic growth-rates resulting from the EE system for \\(\\Delta t=\\infty\\) are now examined. Similarly to most previous cases, they are independent of the geometry (\\(k\\), \\(\\nu\\)) of the mode. Fig. 3 shows the asymptotic growth-rates as a function of \\(\\alpha\\) for 2-TL non-extrapolating ICI schemes with \\(N_{\\mathrm{iter}}=(1,2,3,4)\\). As stated above, the SI scheme (\\(N_{\\mathrm{iter}}=1\\)) is unstable for any value of \\(\\alpha\\). For even values of \\(N_{\\mathrm{iter}}\\), the scheme has an "optimal" domain of unconditional stability \\(-1/2\\leq\\alpha\\leq 1\\), while for odd values, the scheme is unstable for all values of \\(\\alpha\\). For 3-TL ICI schemes the domain of unconditional stability is \\(-1/2\\leq\\alpha\\leq 1\\) independently of the values of \\(N_{\\mathrm{iter}}\\) and \\(\\kappa\\).
The curves (not shown) are similar to those obtained for even values of \\(N_{\\mathrm{iter}}\\) for 2-TL ICI schemes. The impact of applying a time-filter \\(\\kappa=0.1\\) to 2-TL ICI schemes is depicted in Fig. 4 for the second variant (the first variant behaves qualitatively in the same way). The global impact is to "lower" the curves of the asymptotic growth-rates and, consequently, to increase the width of the unconditional-stability domain. However, large values of \\(\\kappa\\) (e.g. \\(\\kappa\\approx 0.5\\)) are required in order to obtain a wide stability domain, especially for the 2-TL SI scheme, and this strategy is known to be irrelevant for NWP purposes. Finally, it is worth noting that the results obtained for the EE system are fully compatible with the conclusions that can be drawn from the intersection of the domains of unconditional stability in Table 1 for the two simple frameworks examined above in mass-based coordinates. The ability of these very simplified frameworks to capture the essence of the behaviour of the time-discretised EE system in the limit of long time-steps makes them very useful tools for fully understanding the underlying causes of its stability or instability.

## 8 Conclusion

A general method for investigating the stability of the ICI class of time-discretisations on canonical problems with various space-continuous equation systems has been presented. These ICI schemes are based on a separation of evolution terms between a simple linear operator and "non-linear" residuals. The method has been validated by confirming earlier results; the application to new frameworks (equation systems or time-discretisation schemes) then allowed these results to be extended. The main conclusions drawn from this study are:

1. Even on very simple (1D) examples, the stability properties of time-discretisations for a given equation system depend strongly on fundamental choices (e.g. the choice of the vertical coordinate). Hence, the scope of conclusions drawn from a given analysis must be carefully limited to the examined framework.
2. For the EE system, height-based coordinates have a theoretical disadvantage compared to mass-based coordinates since they exhibit an intrinsic instability for (long) vertically propagating waves.
3. The 2-TL SI scheme is found not to be appropriate for the EE set of equations, whatever coordinate is employed (using a time-filter results in an unacceptable degradation of the solution).
4. For the EE system, the 2-TL scheme with \\(N_{\\rm iter}=2\\) brings a dramatic increase in stability compared to the 2-TL SI scheme (\\(N_{\\rm iter}=1\\)). This statement holds for even values of \\(N_{\\rm iter}\\), while odd values lead to a significantly weaker stability.
5. As a consequence of the latter point, the 2-TL ICI scheme with \\(N_{\\rm iter}=2\\) seems worth considering for the EE system. However, as mentioned in SHB78, the stability inferred from this type of analysis is overestimated, and flows in which the non-linearity comes from sources other than the discrepancy between the atmospheric and reference temperature profiles could reveal new instabilities in practice. For instance, in spite of its apparent "optimal" stability in the simplified context of this paper, the 3-TL SI scheme has proved not to be stable enough for solving the EE system numerically in realistic, highly non-linear conditions at high resolutions, due to other terms treated explicitly. This point clearly demonstrates the limitations of this type of academic exercise.
Nevertheless, in spite of its necessary limitations, this study can serve to distinguish schemes which are definitely not relevant for practical use from the others, and to give a first theoretical justification for those which are worth considering. In agreement with Cote et al. (1998) and Cullen (2000), we think that ICI schemes with \\(N_{\\rm iter}\\geq 2\\) are among the most appropriate alternatives for integrating the EE system in highly non-linear conditions at fine scales, including from the point of view of efficiency.

## References

* Bubnova, R., G. Hello, P. Benard, and J.F. Geleyn, 1995: Integration of the fully elastic equations cast in the hydrostatic pressure terrain-following coordinate in the framework of the ARPEGE/Aladin NWP system. _Mon. Wea. Rev._, **123**, 515-535.
* Caya, D., and R. Laprise, 1999: A semi-implicit semi-Lagrangian regional climate model: the Canadian RCM. _Mon. Wea. Rev._, **127**, 341-362.
* Cote, J., M. Beland, and A. Staniforth, 1983: Stability of vertical discretization schemes for semi-implicit primitive equation models: theory and application. _Mon. Wea. Rev._, **111**, 1189-1207.
* Cote, J., S. Gravel, A. Methot, A. Patoine, M. Roch, and A. Staniforth, 1998: The Operational CMC-MRB Global Environmental Multiscale (GEM) Model. Part I: Design Considerations and Formulation. _Mon. Wea. Rev._, **126**, 1373-1395.
* Cullen, M. J. P., 2000: Alternative implementations of the semi-Lagrangian semi-implicit schemes in the ECMWF model. _Q. J. R. Meteorol. Soc._, **127**, 2787-2802.
* Hereil, P., and R. Laprise, 1996: Sensitivity of Internal Gravity Waves Solutions to the Time Step of a Semi-Implicit Semi-Lagrangian Nonhydrostatic Model. _Mon. Wea. Rev._, **124**, 972-999.
* Laprise, R., 1992: The Euler equations of motion with hydrostatic pressure as an independent variable. _Mon. Wea. Rev._, **120**, 197-207.
* Qian, J.-H., F. H. M. Semazzi, and J. S. Scroggs, 1998: A global nonhydrostatic semi-Lagrangian atmospheric model with orography. _Mon. Wea. Rev._, **126**, 747-771.
* Robert, A. J., J. Henderson, and C. Turnbull, 1972: An implicit time integration scheme for baroclinic models of the atmosphere. _Mon. Wea. Rev._, **100**, 329-335.
* Semazzi, F. H. M., J. H. Qian, and J. S. Scroggs, 1995: A global nonhydrostatic semi-Lagrangian atmospheric model without orography. _Mon. Wea. Rev._, **123**, 2534-2550.
* Simmons, A. J., B. Hoskins, and D. Burridge, 1978: Stability of the semi-implicit method of time integration. _Mon. Wea. Rev._, **106**, 405-412.
* Simmons, A. J., and C. Temperton, 1997: Stability of a two-time-level semi-implicit integration scheme for gravity wave motion. _Mon. Wea. Rev._, **125**, 600-615.
* Tanguay, M., A. Robert, and R. Laprise, 1990: A Semi-Implicit Semi-Lagrangian Fully Compressible Regional Forecast Model. _Mon. Wea. Rev._, **118**, 1970-1980.
\\begin{table} \\begin{tabular}{|c|c|c|} \\hline & Shallow-water & 1D vertical (mass) \\\\ \\hline 3-TL ICI & \\(-1\\leq\\alpha\\leq 1\\) & \\(-1/2\\leq\\alpha\\) \\\\ \\hline 2-TL ICI (\\(N_{\\rm iter}\\) even) & \\(-1\\leq\\alpha\\leq 1\\) & \\(-1/2\\leq\\alpha\\) \\\\ \\hline 2-TL ICI (\\(N_{\\rm iter}\\) odd) & \\(-1\\leq\\alpha\\leq(2\\kappa)^{1/N_{\\rm iter}}\\) & \\(\\dfrac{-2\\kappa^{(1/N_{\\rm iter})}}{1+2\\kappa^{(1/N_{\\rm iter})}}\\leq\\alpha\\) \\\\ \\hline \\end{tabular} \\end{table} Table 1: Domains of unconditional stability for time-filtered schemes for the first two 1D examples.

Figure 1: Asymptotic growth-rates \\(\\Gamma\\) for the 1D vertical system in \\(z\\) coordinate as a function of the nonlinearity parameter \\(\\alpha\\). Thin line: long mode (\\(\\nu=0.0001\\) m\\({}^{-1}\\)) with the 2-TL SI scheme; thick line: long mode with the 2-TL ICI scheme, \\(N_{\\rm iter}=2\\); dotted line: external mode (\\(\\nu=0\\)) with the 3-TL SI scheme; dashed line: long mode with the 3-TL SI scheme. Circles: practical growth-rate of the 3-TL SI scheme for the long mode with \\(\\Delta t=30\\) s and \\(\\epsilon=0.1\\).

Figure 2: Growth-rate \\(\\Gamma\\) with \\(\\Delta t=20\\) s for the EE system with a 2-TL SI scheme as a function of the nonlinearity parameter \\(\\alpha\\). Solid line: external mode (i); dashed line: slantwise mode (ii); dot-dashed line: internal mode (iii). The left part of the solid line represents an acoustic external mode.

Figure 3: Asymptotic growth-rates \\(\\Gamma\\) for the EE system with 2-TL ICI schemes as a function of the nonlinearity parameter \\(\\alpha\\). Solid line: \\(N_{\\rm iter}=1\\); dashed line: \\(N_{\\rm iter}=2\\); dot-dashed line: \\(N_{\\rm iter}=3\\); dotted line: \\(N_{\\rm iter}=4\\).

Figure 4: Same as Fig. 3, but with a time-filter \\(\\kappa=0.1\\). Solid line: \\(N_{\\rm iter}=1\\); dashed line: \\(N_{\\rm iter}=2\\); dot-dashed line: \\(N_{\\rm iter}=3\\); dotted line: \\(N_{\\rm iter}=4\\).
The stability of the classical semi-implicit scheme and of some more advanced iterative schemes recently proposed for NWP purposes is examined. In all these schemes, the solution of the centred-implicit non-linear equation is approached by an iterative fixed-point algorithm, preconditioned by a simple linear operator that is constant in time. A general methodology for assessing analytically the stability of these schemes on canonical problems for a vertically unbounded atmosphere is presented. The proposed method is valid for all the equation systems usually employed in NWP. However, as in earlier studies, the method can be applied only in simplified meteorological contexts, and it therefore overestimates the actual stability that would occur in more realistic meteorological contexts. The analysis is performed in the spatially continuous framework, which makes it possible to eliminate the spatial discretisation and the boundary conditions as possible causes of the fundamental instabilities linked to the time-scheme itself. The general method is then shown concretely to apply to various time-discretisation schemes and equation systems (namely shallow-water and fully compressible Euler equations). Analytical results found in the literature are recovered with the proposed method, and some original results are presented.
# Long-term persistence and multifractality of river runoff records: Detrended fluctuation studies

Eva Koscielny-Bunde, Jan W. Kantelhardt, Peter Braun, Armin Bunde, Shlomo Havlin

Institut fur Theoretische Physik III, Justus-Liebig-Universitat, Giessen, Germany; Potsdam Institute for Climate Impact Research, Potsdam, Germany; Center for Polymer Studies, Dept. of Physics, Boston University, Boston, USA; Bayerisches Landesamt fur Wasserwirtschaft, Munchen, Germany; Minerva Center, Dept. of Physics, Bar-Ilan University, Ramat-Gan, Israel; present addr.: Fachber. Physik, Martin-Luther-Universitat, Halle, Germany

###### keywords: runoff, scaling analysis, long-term correlations, multifractality, detrended fluctuation analysis, wavelet analysis, multiplicative cascade model

+ Footnote †: journal: Elsevier Science

## 1 Introduction

The analysis of river flows has a long history. Already more than half a century ago, Hurst found by means of his \\(R/S\\) analysis that annual runoff records from various rivers (including the Nile river) exhibit "long-range statistical dependencies" (Hurst, 1951), indicating that the fluctuations in water storage and runoff processes are self-similar over a wide range of time scales, with no single characteristic scale. Hurst's finding is now recognized as the first example of self-affine fractal behaviour in empirical time series; see e.g. Feder (1988). In the 1960s, the "Hurst phenomenon" was investigated on a broader empirical basis for many other natural phenomena (Hurst et al., 1965; Mandelbrot and Wallis, 1969). The scaling of the fluctuations with time is reflected by the scaling of the power spectrum \\(E(f)\\) with frequency \\(f\\), \\(E(f)\\sim f^{-\\beta}\\). For stationary time series, the exponent \\(\\beta\\) is related to the decay of the corresponding autocorrelation function \\(C(s)\\) of the runoffs (see Eq. (1)). For \\(\\beta\\) between 0 and 1, \\(C(s)\\) decays by a power law, \\(C(s)\\sim s^{-\\gamma}\\), with \\(\\gamma=1-\\beta\\) restricted to the interval between 0 and 1. In this case, the mean correlation time diverges, and the system is regarded as long-term correlated. For \\(\\beta=0\\), the runoff data are uncorrelated on large time scales ("white noise"). The exponents \\(\\beta\\) and \\(\\gamma\\) can also be determined from a fluctuation analysis, where the departures from the mean daily runoffs are considered as increments of a random walk process. If the runoffs are uncorrelated, the fluctuation function \\(F_{2}(s)\\), which is equivalent to the root-mean-square displacement of the random walk, increases as the square root of the time scale \\(s\\), \\(F_{2}(s)\\sim\\sqrt{s}\\). For long-term correlated data, the random walk becomes anomalous, and \\(F_{2}(s)\\sim s^{H}\\). The fluctuation exponent \\(H\\) is related to the exponents \\(\\beta\\) and \\(\\gamma\\) via \\(\\beta=1-\\gamma=2H-1\\). For monofractal data, \\(H\\) is identical to the classical Hurst exponent. Recently, many studies using these kinds of methods have dealt with the scaling properties of hydrological records and the underlying statistics; see e.g. Lovejoy and Schertzer (1991); Turcotte and Greene (1993); Gupta et al. (1994); Tessier et al. (1996); Davis et al. (1996); Rodriguez-Iturbe and Rinaldo (1997); Pandey et al. (1998); Matsoukas et al. (2000); Montanari et al. (2000); Peters et al. (2002); Livina et al. (2003a,b). However, the conventional methods discussed above may fail when trends are present in the system.
Trends are systematic deviations from the average runoff that are caused by external processes, e.g. the construction of a water regulation device, the seasonal cycle, or a changing climate (e.g. _global warming_). Monotonous trends may lead to an overestimation of the Hurst exponent and thus to an underestimation of \\(\\gamma\\). It is even possible that uncorrelated data, under the influence of a trend, look like long-term correlated ones when using the above analysis methods. In addition, long-term correlated data cannot simply be detrended by the common technique of moving averages, since this method destroys the correlations on long time scales (above the window size used). Furthermore, it is difficult to distinguish trends from long-term correlations, because stationary long-term correlated time series exhibit persistent behaviour and a tendency to stay close to the momentary value. This causes positive or negative deviations from the average value for long periods of time that might look like a trend. In recent years, several methods, such as wavelet techniques (WT) and detrended fluctuation analysis (DFA), have been developed that are able to determine long-term correlations in the presence of trends. For details and applications of the methods to a large number of meteorological, climatological and biological records we refer to Peng et al. (1994); Taqqu et al. (1995); Bunde et al. (2000); Kantelhardt et al. (2001); Arneodo et al. (2002); Bunde et al. (2002). The methods, described in Section 2, consider fluctuations in the cumulated runoffs (often called the "profile" or "landscape" of the record). They differ in the way the fluctuations are determined and in the type of polynomial trend that is eliminated in each time window of size \\(s\\). In this paper, we apply these detrending methods to study the scaling of the fluctuations \\(F_{2}(s)\\) of river flows with time \\(s\\). We focus on 23 runoff records from international river stations spread around the globe and compare the results with those of 18 river stations from southern Germany. We find that above some crossover time (typically several weeks) \\(F_{2}(s)\\) scales as \\(s^{H}\\), with \\(H\\) varying from river to river between 0.55 and 0.95 in a nonuniversal manner, independently of the size of the basin. The lowest exponent \\(H=0.55\\) was obtained for rivers on permafrost ground. Our finding is not consistent with the hypothesis that the scaling is universal with an exponent close to 0.75 (Hurst et al., 1965; Feder, 1988), with the same power law being applicable on all time scales from minutes to centuries. The above detrending approaches, however, are not sufficient to fully characterize the complex dynamics of river flows, since they exclusively focus on the variance, which can be regarded as the second moment \\(F_{2}(s)\\) of the full distribution of the fluctuations. Note that the Hurst method actually focuses on the first moment \\(F_{1}(s)\\). To further characterize a hydrological record, we extend the study to include all moments \\(F_{q}(s)\\). A detailed description of the method, which is a multifractal generalization of the detrended fluctuation analysis (Kantelhardt et al., 2002) and equivalent to the Wavelet Transform Modulus Maxima (WTMM) method (Arneodo et al., 2002), is given in Section 3. Our approach differs from the multifractal approach introduced into hydrology by Lovejoy and Schertzer (see e.g. Schertzer and Lovejoy (1987); Lovejoy and Schertzer (1991); Lavallee et al.
(1993); Pandey et al. (1998)) that was based on the concept of structure functions (Frisch and Parisi, 1985) and on the assumption of the existence of a universal cascade model. Here we perform the multifractal analysis by studying how the different moments of the fluctuations \\(F_{q}(s)\\) scale with time \\(s\\); see also Rodriguez-Iturbe and Rinaldo (1997). We find that at large time scales, \\(F_{q}(s)\\) scales as \\(s^{h(q)}\\), and a simple functional form with two parameters (\\(a\\) and \\(b\\)), \\(h(q)=(1/q)-[\\ln(a^{q}+b^{q})]/[q\\ln(2)]\\), describes the scaling exponent \\(h(q)\\) of all moments. On small time scales, however, a stronger multifractality is observed that may be partly related to the seasonal trend. The mean position of the crossover between the two regimes is of the order of weeks and increases with \\(q\\).

## 2 Correlation Analysis

Consider a record of daily water runoff values \\(W_{i}\\) measured at a certain hydrological station. The index \\(i\\) counts the days in the record, \\(i=1,2,\\ldots,N\\). To eliminate the periodic seasonal trends, we concentrate on the departures \\(\\phi_{i}=W_{i}-\\overline{W_{i}}\\) from the mean daily runoff \\(\\overline{W_{i}}\\). \\(\\overline{W_{i}}\\) is calculated for each calendar date \\(i\\) (e.g. April \\(1^{st}\\)) by averaging over all years in the runoff series. In addition, we checked that our actual results remained unchanged when seasonal trends in the variance were also eliminated by analysing \\(\\phi_{i}^{\\prime}=(W_{i}-\\overline{W_{i}})/(\\overline{W_{i}^{2}}-\\overline{W_{i}}^{2})^{1/2}\\) instead of \\(\\phi_{i}\\). The runoff autocorrelation function \\(C(s)\\) describes how the persistence decays in time. If the \\(\\phi_{i}\\) are uncorrelated, \\(C(s)\\) is zero for all \\(s\\). If correlations exist only up to a certain number of days \\(s_{\\times}\\), the correlation function will vanish above \\(s_{\\times}\\). For long-term correlations, \\(C(s)\\) decays by a power law \\[C(s)=\\langle\\phi_{i}\\phi_{i+s}\\rangle\\sim s^{-\\gamma},\\qquad 0<\\gamma<1, \\tag{1}\\] where the average \\(\\langle\\ldots\\rangle\\) is over all pairs with the same time lag \\(s\\). For large values of \\(s\\), a direct calculation of \\(C(s)\\) is hindered by the level of noise present in the finite hydrological records, and by nonstationarities in the data. There are several alternative methods for calculating the correlation function in the presence of long-term correlations, which we describe in the following sections.

### Power Spectrum Analysis

If the time series is stationary, we can apply standard spectral analysis techniques and calculate the power spectrum \\(E(f)\\) of the time series \\(W_{i}\\) as a function of the frequency \\(f\\). For long-term correlated data, we have \\(E(f)\\sim f^{-\\beta}\\), where \\(\\beta\\) is related to the correlation exponent \\(\\gamma\\) by \\(\\beta=1-\\gamma\\). This relation can be derived from the Wiener-Khinchin theorem. If, instead of \\(W_{i}\\), the integrated runoff time series \\(z_{n}=\\sum_{i=1}^{n}\\phi_{i}\\) is Fourier transformed, the resulting power spectrum scales as \\(\\tilde{E}(f)\\sim f^{-2-\\beta}\\).

### Standard Fluctuation Analysis (FA)

In the standard fluctuation analysis, we consider the "runoff profile" \\[z_{n}=\\sum_{i=1}^{n}\\phi_{i},\\qquad n=1,2,\\ldots,N, \\tag{2}\\] and study how the fluctuations of the profile, in a given time window of size \\(s\\), increase with \\(s\\).
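As a concrete illustration of the seasonal detrending and of Eq. (2), here is a minimal sketch (assuming numpy and pandas; the function name and the synthetic record are our own, purely illustrative choices):

```python
import numpy as np
import pandas as pd

def runoff_profile(runoff: pd.Series) -> np.ndarray:
    """Departures phi_i = W_i - <W_i> and profile z_n of Eq. (2).

    runoff: daily values indexed by a pandas DatetimeIndex.
    """
    doy = runoff.index.dayofyear
    clim = runoff.groupby(doy).transform("mean")   # mean runoff of each calendar date
    phi = runoff - clim                            # seasonally detrended departures
    return np.cumsum(phi.to_numpy())               # profile z_n = sum_{i<=n} phi_i

# usage with a synthetic record (seasonal cycle plus noise):
idx = pd.date_range("1950-01-01", periods=20000, freq="D")
W = pd.Series(50 + 20 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
              + np.random.rand(idx.size), index=idx)
z = runoff_profile(W)
```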
We can consider the profile \\(z_{n}\\) as the position of a random walker on a linear chain after \\(n\\) steps. The random walker starts at the origin and performs, in the \\(i\\)th step, a jump of length \\(\\phi_{i}\\) to the right, if \\(\\phi_{i}\\) is positive, and to the left, if \\(\\phi_{i}\\) is negative. To find how the square fluctuations of the profile scale with \\(s\\), we first divide each record of \\(N\\) elements into \\(N_{s}={\\rm int}(N/s)\\) nonoverlapping segments of size \\(s\\) starting from the beginning and \\(N_{s}\\) nonoverlapping segments of size \\(s\\) starting from the end of the considered runoff series. Then we determine the fluctuations in each segment \\(\\nu\\). In the standard fluctuation analysis, we obtain the fluctuations just from the values of the profile at both endpoints of each segment \\(\\nu\\), \\(F^{2}(\\nu,s)=[z_{\\nu s}-z_{(\\nu-1)s}]^{2}\\), and average \\(F^{2}(\\nu,s)\\) over the \\(2N_{s}\\) subsequences to obtain the mean fluctuation \\(F_{2}(s)\\), \\[F_{2}(s)\\equiv\\left\\{\\frac{1}{2N_{s}}\\sum_{\\nu=1}^{2N_{s}}F^{2}(\\nu,s)\\right\\}^{1/2}. \\tag{3}\\] By definition, \\(F_{2}(s)\\) can be viewed as the root-mean-square displacement of the random walker on the chain after \\(s\\) steps. For uncorrelated \\(\\phi_{i}\\) values, we obtain Fick's diffusion law \\(F_{2}(s)\\sim s^{1/2}\\). For the relevant case of long-term correlations, where \\(C(s)\\) follows the power-law behaviour of Eq. (1), \\(F_{2}(s)\\) increases by a power law (see, e.g., Bunde et al. (2002)), \\[F_{2}(s)\\sim s^{H}, \\tag{4}\\] where the fluctuation exponent \\(H\\) is related to the correlation exponent \\(\\gamma\\) and the power-spectrum exponent \\(\\beta\\) by \\[H=1-\\gamma/2=(1+\\beta)/2. \\tag{5}\\] For power-law correlations decaying faster than \\(1/s\\), we have \\(H=1/2\\) for large \\(s\\) values, as for uncorrelated data. We would like to note that the standard fluctuation analysis is somewhat similar to the rescaled range analysis introduced by Hurst (for a review see, e.g., Feder (1988)), except that it focuses on the second moment \\(F_{2}(s)\\), while Hurst considered the first moment \\(F_{1}(s)\\). For monofractal data, \\(H\\) is identical to the Hurst exponent.

### The Detrended Fluctuation Analysis (DFA)

There are different orders of DFA that are distinguished by the way the trends in the data are eliminated. In lowest order (DFA1) we determine, for each segment \\(\\nu\\), the best _linear_ fit of the profile, and identify the fluctuations with the variance \\(F^{2}(\\nu,s)\\) of the profile about this straight line. This way, we eliminate the influence of possible linear trends on scales larger than the segment. Note that linear trends in the profile correspond to patch-like trends in the original record. DFA1 was proposed originally by Peng et al. (1994) for analyzing correlations in DNA. It can be generalized straightforwardly to eliminate higher-order trends (Bunde et al., 2000; Kantelhardt et al., 2001). In second-order DFA (DFA2) one calculates the variances \\(F^{2}(\\nu,s)\\) of the profile about best _quadratic_ fits of the profile, this way eliminating the influence of possible linear and parabolic trends on scales larger than the segment considered. In general, in \\(n\\)th-order DFA, we calculate the variances of the profile about the best \\(n\\)th-order polynomial fit, this way eliminating the influence of possible \\((n-1)\\)th-order trends on scales larger than the segment size.
Explicitly, we calculate the best polynomial fit \\(y_{\\nu}(i)\\) of the profile in each of the \\(2N_{s}\\) segments \\(\\nu\\) and determine the variance \\[F^{2}(\\nu,s)\\equiv\\frac{1}{s}\\sum_{i=1}^{s}\\left[z_{(\\nu-1)s+i}-y_{\\nu}(i)\\right]^{2}. \\tag{6}\\] Then we employ Eq. (3) to determine the mean fluctuation \\(F_{2}(s)\\). Since FA and the various orders of the DFA have different detrending capabilities, a comparison of the fluctuation functions obtained by FA and DFA\\(n\\) can yield insight into both long-term correlations and types of trends. This cannot be achieved by conventional methods, like spectral analysis.

### Wavelet Transform (WT)

The wavelet methods we employ here are based on the determination of the mean values \\(\\overline{z}_{\\nu}(s)\\) of the profile in each segment \\(\\nu\\) (of length \\(s\\)) and the calculation of the fluctuations between neighbouring segments. The different-order techniques we have used in analyzing runoff fluctuations differ in the way the fluctuations between the average profiles are treated and possible nonstationarities are eliminated. The first-, second- and third-order wavelet methods are described below. (i) In the first-order wavelet method (WT1), one simply determines the fluctuations from the first derivative, \\(F^{2}(\\nu,s)=[\\overline{z}_{\\nu}(s)-\\overline{z}_{\\nu+1}(s)]^{2}\\). WT1 corresponds to FA, where constant trends in the profile of a hydrological station are eliminated while linear trends are not. (ii) In the second-order wavelet method (WT2), one determines the fluctuations from the second derivative, \\(F^{2}(\\nu,s)=[\\overline{z}_{\\nu}(s)-2\\overline{z}_{\\nu+1}(s)+\\overline{z}_{\\nu+2}(s)]^{2}\\). So, if the profile consists of a trend term linear in \\(s\\) and a fluctuating term, the trend term is eliminated. Regarding trend elimination, WT2 corresponds to DFA1. (iii) In the third-order wavelet method (WT3), one determines the fluctuations from the third derivative, \\(F^{2}(\\nu,s)=[\\overline{z}_{\\nu}(s)-3\\overline{z}_{\\nu+1}(s)+3\\overline{z}_{\\nu+2}(s)-\\overline{z}_{\\nu+3}(s)]^{2}\\). By definition, WT3 eliminates linear and parabolic trend terms in the profile. In general, in WT\\(n\\) we determine the fluctuations from the \\(n\\)th derivative, this way eliminating trends described by \\((n-1)\\)st-order polynomials in the data. Methods (i-iii) are called wavelet methods, since they can be interpreted as transforming the profile by discrete wavelets representing first-, second- and third-order cumulative derivatives of the profile. The first-order wavelets are known in the literature as Haar wavelets. One can also use different shapes of the wavelets (e.g. Gaussian wavelets with width \\(s\\)), which have been used by Arneodo et al. (2002) to study, for example, long-range correlations in DNA. Since the various orders of the wavelet methods WT1, WT2, WT3, etc. have different detrending capabilities, a comparison of their fluctuation functions can yield insight into both long-term correlations and types of trends. At the end of this section, before describing the results of the FA, DFA, and WT analysis, we note that for very large \\(s\\) values, \\(s>N/4\\) for DFA and \\(s>N/10\\) for FA and WT, the fluctuation function becomes inaccurate due to statistical errors. The difference in the statistics is due to the fact that the number of independent segments of length \\(s\\) is larger in DFA than in WT, and the fluctuations in FA are larger than in DFA. Hence, in the analysis we will concentrate on \\(s\\) values lower than \\(s_{\\rm max}=N/4\\) for DFA and \\(s_{\\rm max}=N/10\\) for FA and WT. When determining the scaling exponents \\(H\\) using Eq. (4), we manually chose an appropriate (shorter) fitting range of typically two orders of magnitude.
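The following self-contained sketch (numpy assumed; the function name and the choice of scales are our own illustrative choices) implements Eqs. (3)-(6) for FA (segment endpoint differences) and for DFA\\(n\\) (variance about an \\(n\\)th-order polynomial fit), and recovers \\(H\\approx 0.5\\) for an uncorrelated surrogate record:

```python
import numpy as np

def fluctuation_function(phi, s, order=None):
    """F_2(s) from the profile of phi; order=None gives FA, order=n gives DFAn."""
    z = np.cumsum(phi)
    N = len(z)
    Ns = N // s
    i = np.arange(s)
    F2 = []
    for start in (0, N - Ns * s):          # 2*N_s segments, from both ends (Eq. (3))
        for nu in range(Ns):
            seg = z[start + nu * s : start + (nu + 1) * s]
            if order is None:              # FA: squared endpoint difference
                F2.append((seg[-1] - seg[0]) ** 2)
            else:                          # DFAn: variance about polynomial fit (Eq. (6))
                fit = np.polyval(np.polyfit(i, seg, order), i)
                F2.append(np.mean((seg - fit) ** 2))
    return np.mean(F2) ** 0.5

# slope of log F_2(s) vs log s estimates H (Eq. (4)); uncorrelated noise gives H ~ 0.5:
phi = np.random.randn(2 ** 16)
scales = np.unique(np.logspace(1, 3.5, 25).astype(int))
F = [fluctuation_function(phi, s, order=1) for s in scales]
print("H =", round(np.polyfit(np.log(scales), np.log(F), 1)[0], 2))
```

In a production analysis one would, as described above, restrict the fit to an appropriate range of scales rather than use all of them.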
### Results

In our study we analyzed 41 runoff records, 18 of them from the southern part of Germany; the rest are from North and South America, Africa, Australia, Asia and Europe (see Table 1). We begin the analysis with the runoff record for the river Weser in the northern part of Germany, which has the longest record (171 years) in this study. Figure 1(a) shows the fluctuation functions \\(F_{2}(s)\\) obtained from FA and DFA1-DFA5. In the log-log plot, the curves are approximately straight lines for \\(s\\) above 30 days, with a slope \\(H\\approx 0.75\\). This result for the Weser suggests that there exists long-term persistence expressed by the power-law decay of the correlation function, with an exponent \\(\\gamma\\approx 0.5\\) [see Eq. (5)]. To show that the slope \\(H\\approx 0.75\\) is due to long-term correlations and not due to a broad probability distribution (Joseph versus Noah phenomenon, see Mandelbrot and Wallis (1968)), we have eliminated the correlations by randomly shuffling the \\(\\phi_{i}\\). This shuffling has no effect on the probability distribution function of \\(\\phi_{i}\\). Figure 1(b) shows \\(F_{2}(s)\\) for the shuffled data. We obtain \\(H=1/2\\), showing that the exponent \\(H\\approx 0.75\\) is due to long-term correlations.

Figure 1: (a) The fluctuation functions \\(F_{2}(s)\\) versus time scale \\(s\\) obtained from FA and DFA1-DFA5 in double logarithmic plots for daily runoff departures \\(\\phi_{i}=W_{i}-\\overline{W_{i}}\\) from the mean daily runoff \\(\\overline{W_{i}}\\) for the river Weser, measured from 1823 till 1993 by the hydrological station Vlotho in Germany. (b) The analogous curves to (a) when the \\(\\phi_{i}\\) are randomly shuffled.

To show that the slope \\(H\\approx 0.75\\) is not an artefact of the seasonal dependence of the variance and skew, we also considered records where \\(\\phi_{i}\\) was divided by the variance of each calendar day, and applied further detrending techniques that take into account the skew (Livina et al., 2003b). In both cases, we found no change in the scaling behaviour at large times (see also Sect. 3.4). This can be understood easily, since seasonal trends of this kind cannot affect the fluctuation behaviour on time scales well above one year. It is likely, however, that the seasonal dependencies of the variance and possibly also of the skew contribute to the behaviour at small times, where the slope is much larger than 0.75 in most cases (see also Sect. 3.4, where a seasonal trend in the variance is indeed used for modelling the crossover).

Figure 2: The fluctuation functions \\(F_{2}(s)\\) versus time scale \\(s\\) obtained from FA and DFA1-DFA4 in double logarithmic plots for four additional representative hydrological stations: (a) the Zaire in Kinshasa, Zaire, (b) the Orinoco in Puente Angostura, Venezuela, (c) the Mary River in Miva, Australia, and (d) the Gaula River in Haga Bru, Norway. (e-h) The fluctuation functions \\(F_{2}(s)\\) obtained for the same rivers as in (a-d), from first- to fifth-order wavelet analysis (WT1-WT5). The straight lines are best linear fits to the DFA1 and the WT2 results on large time scales.
Figure 2 shows the fluctuation functions \\(F_{2}(s)\\) of four more rivers, from Africa, South America, Australia, and Europe. The panels on the left-hand side show the FA and DFA1-4 curves, while the panels on the right-hand side show the results from the analogous wavelet analysis WT1-WT5. Most curves show a crossover at small time scales; a similar crossover has been reported by Tessier et al. (1996) for small French rivers without artificial dams or reservoirs. Above the crossover time, the fluctuation functions (from DFA1-4 and WT2-5) show power-law behaviour, with exponents \\(H\\simeq 0.95\\) for the Zaire, \\(H\\simeq 0.73\\) for the Orinoco, \\(H\\simeq 0.60\\) for the Mary River, and \\(H\\simeq 0.55\\) for the Gaula River. Accordingly, there is no universal scaling behaviour, since the long-term exponents vary strongly from river to river, reflecting the fact that there exist different mechanisms for floods, each of which may induce different scaling. This is in contrast to climate data, where universal long-term persistence of temperature records at land stations was observed (Koscielny-Bunde et al., 1998; Talkner and Weber, 2000; Weber and Talkner, 2001; Eichner et al., 2003). The Mary River in Australia is rather dry in the summer, and the Gaula River in Norway is frozen in the winter. For the Mary River, the long-term exponent \\(H\\simeq 0.60\\) is well below the average value. For the Gaula River, the long-term correlations are not pronounced (\\(H=0.55\\)) and even hard to distinguish from the uncorrelated case \\(H=0.5\\). We obtained similar results for the other two "frozen" rivers (Tana from Norway and Dvina from Russia) that we analysed. To interpret this distinct behaviour of the frozen rivers, we note that on permafrost ground the lateral inflow (and hence the indirect contribution of the water storage in the catchment basin) contributes to the runoffs in a different way than on normal ground; see also Gupta and Dawdy (1995). Our results (based on three rivers only) suggest that the contribution of snow melting leads to less correlated runoffs than the contribution of rainfall, but more comprehensive studies will be needed to confirm this interesting result. Figure 3(a) and Table 1 summarize our results for \\(H\\). One can see clearly that the exponents \\(H\\) do not depend systematically on the basin area \\(A\\). This is in line with the conclusions of Gupta et al. (1994) for the flood peaks, where a systematic dependence on \\(A\\) could also not be found. There is also no pronounced regional dependence: the rivers within a localized area (such as southern Germany) tend to have nearly the same range of exponents as the international rivers. The three "frozen" rivers in our study have the lowest values of \\(H\\).
As can be seen in the figure, the exponents spread from 0.55 to 0.95. Since the correlation exponent \\(\\gamma\\) is related to \\(H\\) by \\(\\gamma=2-2H\\), the exponent \\(\\gamma\\) spreads from almost 0 to almost 1, covering the whole range from very weak to very strong correlations. \\begin{table} \\begin{tabular}{|l|l|r|r|r|r|r|r|} \\hline **River name** & **Station name** & **Period of observation (years)** & **Basin area (km\\({}^{2}\\))** & **\\(h(2)\\)** & **\\(a\\)** & **\\(b\\)** & **\\(\\Delta\\alpha\\)** \\\\ \\hline Barron River & Myola, Australia & 79 & 1 940 & 0.60 & 0.50 & 0.79 & 0.65 \\\\ \\hline Columbia River & The Dalles, USA & 114 & 613 830 & 0.59 & 0.54 & 0.76 & 0.50 \\\\ \\hline Danube & Orsova, Romania & 151 & 576 232 & 0.85 & 0.50 & 0.60 & 0.26 \\\\ \\hline Dvina & UST-Pinega, Russia & 89 & 348 000 & 0.56 & 0.53 & 0.79 & 0.58 \\\\ \\hline Fraser River & Hope, USA & 84 & 217 000 & 0.69 & 0.53 & 0.70 & 0.38 \\\\ \\hline Gaula & Haga Bru, Norway & 90 & 3 080 & 0.55 & 0.57 & 0.77 & 0.43 \\\\ \\hline Johnston River & Upstream Central Mill, Australia & 74 & 390 & 0.58 & 0.52 & 0.78 & 0.58 \\\\ \\hline Labe & Decin, Czechia & 102 & 51 104 & 0.80 & 0.45 & 0.68 & 0.61 \\\\ \\hline Maas & Borgharen, Netherlands & 80 & 21 300 & 0.76 & 0.49 & 0.68 & 0.48 \\\\ \\hline Mary River & Miva, Australia & 76 & 4 830 & 0.60 & 0.52 & 0.78 & 0.57 \\\\ \\hline Mitta Mitta River & Hinommunije, Australia & 67 & 1 530 & 0.75 & 0.47 & 0.68 & 0.53 \\\\ \\hline Niger & Koulikoro, Mali & 79 & 120 000 & 0.60 & 0.51 & 0.78 & 0.62 \\\\ \\hline Orinoco & Puente Angostura, Venezuela & 65 & 836 000 & 0.73 & 0.50 & 0.69 & 0.46 \\\\ \\hline Rhein & Rees, Germany & 143 & 159 680 & 0.76 & 0.52 & 0.65 & 0.32 \\\\ \\hline Severn & Bewdley, England & 71 & 4 330 & 0.63 & 0.54 & 0.73 & 0.43 \\\\ \\hline Susquehanna & Harrisburg, USA & 96 & 62 419 & 0.58 & 0.55 & 0.77 & 0.48 \\\\ \\hline Tana & Polmak, Norway & 51 & 14 005 & 0.56 & 0.50 & 0.81 & 0.69 \\\\ \\hline Themse & Kingston, England & 113 & 9 948 & 0.80 & 0.47 & 0.67 & 0.51 \\\\ \\hline Weser & Vlotho, Germany & 171 & 17 618 & 0.76 & 0.50 & 0.68 & 0.43 \\\\ \\hline Zaire & Kinshasa, Zaire & 81 & 3 475 000 & 0.95 & 0.52 & 0.52 & 0.00 \\\\ \\hline Grand River & Gallatin, USA & 72 & 5 830 & 0.72 & 0.42 & 0.76 & 0.87 \\\\ \\hline Susquehanna & Marietta, USA & 61 & 67 310 & 0.60 & 0.53 & 0.79 & 0.57 \\\\ \\hline Mississippi & St.
Louis, USA & 59 & 1 805 000 & 0.91 & 0.44 & 0.61 & 0.48 \\\\ \\hline **German data:** & & & & **\\(h(2)\\)** & **\\(a\\)** & **\\(b\\)** & **\\(\\Delta\\alpha\\)** \\\\ \\hline Amper & Furstenfeldbruck & 77 & 1 235 & 0.81 & 0.47 & 0.65 & 0.47 \\\\ \\hline Donau (Danube) & Achleiten & 97 & 76 653 & 0.82 & 0.49 & 0.63 & 0.35 \\\\ \\hline Donau (Danube) & Beuron & 70 & 1 309 & 0.65 & 0.53 & 0.72 & 0.45 \\\\ \\hline Donau (Danube) & Donauwörth & 74 & 15 037 & 0.81 & 0.49 & 0.63 & 0.37 \\\\ \\hline Donau (Danube) & Kehlheim & 97 & 22 950 & 0.85 & 0.48 & 0.63 & 0.39 \\\\ \\hline Isar & Bad Tolz & 39 & 1 554 & 0.68 & 0.53 & 0.71 & 0.41 \\\\ \\hline Jagst & Untergeriesheim & 73 & 1 826 & 0.76 & 0.45 & 0.69 & 0.61 \\\\ \\hline Kinzig & Schwaibach & 82 & 921 & 0.67 & 0.52 & 0.72 & 0.47 \\\\ \\hline Loisach & Kochel & 87 & 684 & 0.82 & 0.48 & 0.65 & 0.44 \\\\ \\hline Kocher & Stein & 111 & 1 929 & 0.75 & 0.53 & 0.64 & 0.26 \\\\ \\hline Murg & Rotenfels & 77 & 469 & 0.70 & 0.53 & 0.70 & 0.41 \\\\ \\hline Neckar & Horb & 65 & 1 118 & 0.68 & 0.44 & 0.75 & 0.78 \\\\ \\hline Neckar & Plochingen & 79 & 3 995 & 0.80 & 0.49 & 0.65 & 0.39 \\\\ \\hline Tauber & Bad Mergentheim & 66 & 1 018 & 0.80 & 0.44 & 0.70 & 0.68 \\\\ \\hline Wertach & Biessenhofen & 77 & 450 & 0.66 & 0.56 & 0.70 & 0.31 \\\\ \\hline Würm & Leutstetten & 77 & 413 & 0.90 & 0.39 & 0.66 & 0.77 \\\\ \\hline Wutach & Oberlauchringen & 85 & 1 129 & 0.75 & 0.52 & 0.67 & 0.37 \\\\ \\hline Vils & Grafenmühle & 58 & 1 436 & 0.61 & 0.50 & 0.78 & 0.62 \\\\ \\hline \\end{tabular} \\end{table} Table 1: Table of investigated international river basins (data from the Global Runoff Data Center (GRDC), Koblenz, Germany) and investigated South German river basins. We list the river and station name, the duration of the investigated daily record, the size of the basin area, and the results of our analysis: \\(H=h(2)\\) and the multifractal quantities \\(a\\), \\(b\\), and \\(\\Delta\\alpha\\).

## 3 Multifractal Analysis

### Method

For a further characterization of hydrological records it is meaningful to extend Eq. (3) by considering the more general fluctuation functions (Barabasi and Vicsek (1991), see also Davis et al. (1996)), \\[F_{q}(s)=\\left\\{\\frac{1}{2N_{s}}\\sum_{\\nu=1}^{2N_{s}}|z_{\\nu s}-z_{(\\nu-1)s}|^{q}\\right\\}^{1/q}, \\tag{7}\\] where the variable \\(q\\) can take any real value except zero. For \\(q=2\\), the standard fluctuation analysis is retrieved. The question is how the fluctuation functions depend on \\(q\\) and how this dependence is related to multifractal features of the record. In general, the multifractal approach is introduced by the partition function \\[Z_{q}(s)\\equiv\\sum_{\\nu=1}^{N_{s}}|z_{\\nu s}-z_{(\\nu-1)s}|^{q}\\sim s^{\\tau(q)}, \\tag{8}\\] where \\(\\tau(q)\\) is the Renyi scaling exponent.

Figure 3: (a) Long-term fluctuation exponents \\(H\\) and (b) widths \\(\\Delta\\alpha\\) of the \\(f(\\alpha)\\) spectra for all international records (full symbols) and all records from south Germany (open symbols) that we analyzed, as a function of the basin area \\(A\\). Each symbol represents the result for one hydrological station. The dashed line in (b) is a linear fit to the data.
A record is called 'monofractal' when \\(\\tau(q)\\) depends linearly on \\(q\\); otherwise it is called multifractal. It is easy to verify that \\(Z_{q}(s)\\) is related to \\(F_{q}(s)\\) by \\[F_{q}(s)=\\left\\{\\frac{1}{N_{s}}Z_{q}(s)\\right\\}^{1/q}. \\tag{9}\\] Accordingly, Eq. (8) implies \\[F_{q}(s)\\sim s^{h(q)}, \\tag{10}\\] where \\[h(q)=[\\tau(q)+1]/q. \\tag{11}\\] Thus, \\(h(q)\\) defined in Eq. (10) is directly related to the classical multifractal scaling exponents \\(\\tau(q)\\). In general, the exponent \\(h(q)\\) may depend on \\(q\\). Since for stationary records \\(h(1)\\) is identical to the well-known Hurst exponent (see e.g. Feder (1988)), we will call the function \\(h(q)\\) the generalized Hurst exponent. For monofractal self-affine time series, \\(h(q)\\) is independent of \\(q\\), since the scaling behaviour of the variances \\(F^{2}(\\nu,s)\\) is identical for all segments \\(\\nu\\), and the averaging procedure in Eq. (7) will give just this identical scaling behaviour for all values of \\(q\\). If small and large fluctuations scale differently, there will be a significant dependence of \\(h(q)\\) on \\(q\\): if we consider positive values of \\(q\\), the segments \\(\\nu\\) with large variance \\(F^{2}(\\nu,s)\\) (i.e. large deviations from the corresponding fit) will dominate the average \\(F_{q}(s)\\). Thus, for positive values of \\(q\\), \\(h(q)\\) describes the scaling behaviour of the segments with large fluctuations. Usually the large fluctuations are characterized by a smaller scaling exponent \\(h(q)\\) for multifractal series. On the contrary, for negative values of \\(q\\), the segments \\(\\nu\\) with small variance \\(F^{2}(\\nu,s)\\) will dominate the average \\(F_{q}(s)\\). Hence, for negative values of \\(q\\), \\(h(q)\\) describes the scaling behaviour of the segments with small fluctuations, which are usually characterized by a larger scaling exponent. In the hydrological literature (Rodriguez-Iturbe and Rinaldo, 1997; Lavallee et al., 1993) one often considers the generalized mass variogram \\(C_{q}(\\lambda)\\) (see also Davis et al. (1996)), \\[C_{q}(\\lambda)\\equiv\\langle|z_{i+\\lambda}-z_{i}|^{q}\\rangle\\sim\\lambda^{K(q)}. \\tag{12}\\] Comparing Eqs. (7), (10), and (12), one can easily verify that \\(K(q)\\) and \\(h(q)\\) are related by \\[h(q)=K(q)/q. \\tag{13}\\] Another way to characterize a multifractal series is the singularity spectrum \\(f(\\alpha)\\), which is related to \\(\\tau(q)\\) via a Legendre transform (e.g. Feder (1988); Rodriguez-Iturbe and Rinaldo (1997)), \\[\\alpha=\\frac{d\\tau(q)}{dq}\\quad\\mbox{and}\\quad f(\\alpha)=q\\alpha-\\tau(q). \\tag{14}\\] Here, \\(\\alpha\\) is the singularity strength or Hölder exponent, while \\(f(\\alpha)\\) denotes the dimension of the subset of the series that is characterized by \\(\\alpha\\). Using Eq. (11), we can directly relate \\(\\alpha\\) and \\(f(\\alpha)\\) to \\(h(q)\\), \\[\\alpha=h(q)+q\\frac{dh(q)}{dq}\\quad\\mbox{and}\\quad f(\\alpha)=q[\\alpha-h(q)]+1. \\tag{15}\\] The strength of the multifractality of a time series can be characterized by the difference between the maximum and minimum values of \\(\\alpha\\), \\(\\alpha_{\\rm max}-\\alpha_{\\rm min}\\). When \\(q\\,\\frac{dh(q)}{dq}\\) approaches zero for \\(q\\) approaching \\(\\pm\\infty\\), then \\(\\Delta\\alpha=\\alpha_{\\rm max}-\\alpha_{\\rm min}\\) is simply given by \\(\\Delta\\alpha=h(-\\infty)-h(\\infty)\\).
The multifractal analysis described above is a straightforward generalization of the fluctuation analysis and therefore has the same problems: (i) monotonous trends in the record may lead to spurious results for the fluctuation exponent \\(h(q)\\), which in turn leads to spurious results for the correlation exponent \\(\\gamma\\), and (ii) nonstationary behaviour characterized by exponents \\(h(q)>1\\) cannot be detected by the simple method, since the method cannot distinguish between exponents \\(>1\\) and will always yield \\(F_{2}(s)\\sim s\\) in this case (see above). To overcome these drawbacks, the multifractal detrended fluctuation analysis (MF-DFA) has been introduced recently (Kantelhardt et al. (2002); see also Koscielny-Bunde et al. (1998); Weber and Talkner (2001)). According to Kantelhardt et al. (2002, 2003), the method is as accurate as the wavelet methods. Thus, we have used the MF-DFA for the multifractal analysis here. In the MF-DFA, one starts with the DFA fluctuations \\(F^{2}(\\nu,s)\\) as obtained in Eq. (6). Then we define, in close analogy to Eqs. (3) and (7), the generalized fluctuation function \\[F_{q}(s)\\equiv\\left\\{\\frac{1}{2N_{s}}\\sum_{\\nu=1}^{2N_{s}}\\left[F^{2}(\\nu,s)\\right]^{q/2}\\right\\}^{1/q}. \\tag{16}\\] Again, we can distinguish MF-DFA1, MF-DFA2, etc., according to the order of the polynomial fits involved.
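A minimal sketch of Eq. (16) (numpy assumed; the function name and parameter choices are our own, purely illustrative) that reuses the segmentwise DFA variances and estimates \\(h(q)\\) from straight-line fits of \\(\\log F_{q}(s)\\) versus \\(\\log s\\):

```python
import numpy as np

def mf_dfa(phi, s, q_values, order=4):
    """Generalized fluctuation function F_q(s) of Eq. (16) for one scale s."""
    z = np.cumsum(phi)
    N = len(z)
    Ns = N // s
    i = np.arange(s)
    var = []
    for start in (0, N - Ns * s):                      # the 2*N_s segments of Eq. (16)
        for nu in range(Ns):
            seg = z[start + nu * s : start + (nu + 1) * s]
            fit = np.polyval(np.polyfit(i, seg, order), i)
            var.append(np.mean((seg - fit) ** 2))      # F^2(nu, s) of Eq. (6)
    var = np.array(var)
    return {q: np.mean(var ** (q / 2.0)) ** (1.0 / q) for q in q_values}

# h(q) from straight-line fits of log F_q(s) vs log s on large scales:
phi = np.random.randn(2 ** 16)
qs = (-10.0, -2.0, -0.2, 0.2, 2.0, 10.0)               # q = 0 is excluded
scales = np.unique(np.logspace(1.5, 3.5, 20).astype(int))
Fq = {s: mf_dfa(phi, s, qs) for s in scales}
h = {q: np.polyfit(np.log(scales), np.log([Fq[s][q] for s in scales]), 1)[0] for q in qs}
print(h)   # for monofractal noise all h(q) are close to 0.5
```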
### Multifractal Scaling Plots

We have performed a large-scale multifractal analysis of all 41 rivers. We found that MF-DFA2-5 yield similar results for the fluctuation function \\(F_{q}(s)\\). We have also cross-checked the results using the Wavelet Transform Modulus Maxima (WTMM) method (Arneodo et al., 2002) and always find agreement within the error bars (Kantelhardt et al., 2003). Therefore, we present here only the results of MF-DFA4. Figure 4(a,b) shows two representative examples of the fluctuation functions \\(F_{q}(s)\\), for (a) the Weser river and (b) the Danube river.

Figure 4: The multifractal fluctuation functions \\(F_{q}(s)\\) versus time scale \\(s\\) obtained from multifractal DFA4 for two representative hydrological stations: (a) river Weser in Vlotho, Germany, and (b) river Danube in Orsova, Romania. The curves correspond to different values of \\(q\\), \\(q=-10\\), -6, -4, -2, -1, -0.2, 0.2, 1, 2, 4, 6, 10 (from the top to the bottom), and are shifted vertically for clarity.

The standard fluctuation function \\(F_{2}(s)\\) is plotted with full symbols. The crossover in \\(F_{2}(s)\\) that was discussed in Sect. 2.5 can also be seen in the other moments. The position of the crossover increases monotonously with decreasing \\(q\\), and the crossover becomes more pronounced. We are only interested in the asymptotic behaviour of \\(F_{q}(s)\\) at large times \\(s\\). One can see clearly that above the crossover the \\(F_{q}(s)\\) functions are straight lines in the double logarithmic plot, and the slopes increase slightly when going from high positive moments towards high negative moments (from the bottom to the top). For the Weser, for example, the slope changes from 0.65 for \\(q=10\\) to 0.9 for \\(q=-10\\) (see also Fig. 6(b)). The monotonous increase of the slopes \\(h(q)\\) is the signature of multifractality. When the data are shuffled (see Figs. 4(c,d)), all functions \\(F_{q}(s)\\) increase asymptotically as \\(F_{q}(s)\\sim s^{1/2}\\). This indicates that the multifractality vanishes under shuffling. Accordingly, the observed multifractality originates in the long-term correlations of the record and is not caused by singularities in the distribution of the daily runoffs (see also Mandelbrot and Wallis (1968)). A reshuffling-resistant multifractality would indicate a 'statistical' type of nonlinearity (Sivapalan et al., 2002). We obtain similar patterns for all rivers. Figure 5 shows four more examples; Figs. 5(a,b) are for two rivers (Amper and Wertach) from southern Germany, while Figs. 5(c,d) are for the Niger and the Susquehanna (Koulikoro, Mali, and Harrisburg, USA).

Figure 5: The multifractal fluctuation functions \\(F_{q}(s)\\) obtained from multifractal DFA4 for four additional hydrological stations: (a) Amper in Fürstenfeldbruck, Germany, (b) Wertach in Biessenhofen, Germany, (c) Susquehanna in Harrisburg, USA, (d) Niger in Koulikoro, Mali. The \\(q\\)-values are identical to those used in Fig. 4.

From the asymptotic slopes of the curves in Figs. 4(a,b) and 5(a-d), we obtain the generalized Hurst exponents \\(h(q)\\), which are plotted in Fig. 6 (circles). One can see that in the whole \\(q\\)-range the exponents can be fitted well by the formula \\[h(q)=\\frac{1}{q}-\\frac{\\ln[a^{q}+b^{q}]}{q\\ln 2}, \\tag{17}\\] or
In order to characterize and to compare the strength of the multifractality for several time series we use as a parameter the width of the singularity spectrum \\(f(\\alpha)\\) [see Eqs. (14) and (15)] at \\(f=0\\), which corresponds to the difference of the maximum and the minimum value of \\(\\alpha\\). In the multiplicative cascade model, this parameter is given by \\[\\Delta\\alpha=\\frac{\\ln a-\\ln b}{\\ln 2}. \\tag{19}\\] The distribution of the \\(\\Delta\\alpha\\) values we obtained from Eq. (19) is shown in Fig. 3(b), where we plot \\(\\Delta\\alpha\\) versus the basin area. The figure shows that there are rivers with quite strong multifractal fluctuations, i.e. large \\(\\Delta\\alpha\\), and one with almost vanishing multifractality, i.e. \\(\\Delta\\alpha\\approx 0\\). Two observation can be made from the figure: (1) There is no pronounced difference between the width of the distribution of the multifractality strength for the runoffs within the local area of southern Germany (open symbols in Fig. 3(b)) and for the international runoff records from all rivers around the globe (full symbols). In fact, without rivers Zaire and Grand River the widths would be the same. (2) There is a tendency towards smaller multifractality strengths at larger basin areas. This means that the river flows become less nonlinear with increasing basin area. We consider this as possible indication of river regulations that are more pronounced for large river basins. Our results for \\(K(q)=1+\\tau(q)\\) (see Eq. (18) and Table 1) may be compared Figure 7: The multifractal spectra \\(f(\\alpha)\\) for two representative runoff records (a) the Danube in Orsova, Romania and (b) Niger in Koulikoro, Mali. with the functional form \\[K(q)=(H^{\\prime}+1)q-\\frac{C_{1}}{\\alpha^{\\prime}-1}(q^{\\alpha^{\\prime}}-q)\\qquad q\\geq 0 \\tag{20}\\] with the three parameters \\(H^{\\prime}\\), \\(C_{1}\\), and \\(\\alpha^{\\prime}\\), that have been used by Lovejoy, Schertzer, and coworkers (Schertzer and Lovejoy, 1987; Lovejoy and Schertzer, 1991; Lavallee et al., 1993; Tessier et al., 1996; Pandey et al., 1998) successfully to describe the multifractal behaviour of rainfall and runoff records. The definition of \\(K(q)\\) we used in this paper is taken from Rodriguez-Iturbe and Rinaldo (1997) and differs slightly from their definition. We like to note that Eq. (18) for \\(K(q)\\) is not only valid for positive \\(q\\) values, but also for negative \\(q\\) values. This feature allows us to determine numerically the full singularity spectrum \\(f(\\alpha)\\). In the analysis we focused on long time scales, excluding the crossover regime, and used detrending methods. We consider it as particularly interesting that only two parameters \\(a\\) and \\(b\\) or, equivalently, \\(H\\) and \\(\\Delta\\alpha\\), are sufficient to describe \\(\\tau(q)\\) and \\(K(q)\\) for positive as well as negative \\(q\\) values. This strongly supports the idea of 'universal' multifractal behaviour of river runoffs as suggested (in different context) by Lovejoy and Schertzer. It is interesting to note that the generalized fluctuation functions we studied do not show any kind of multifractal phase transition at some critical value \\(q_{D}\\) in the \\(q\\)-regime (\\(-10\\leq q\\leq 10\\)) we analysed. Instead, our analysis shows a crossover at a specific time scale \\(s_{\\times}\\) (typically weeks) that weakly increases with decreasing moment \\(q\\). 
In this paper, we concentrated on the large-time regime (\(s\gg s_{\times}\)), where we obtained coherent multifractal behaviour and did not see any indication of a multifractal phase transition. But this does not exclude the possibility that at small scales a breakdown of multifractality at a critical \(q\)-value may occur, as has been emphasized by Tessier et al. (1996) and Pandey et al. (1998).

### Extended Multiplicative Cascade Model

In the following, we would like to motivate the two-parameter formula Eq. (17) and show how it can be obtained from the well-known multifractal cascade model (Feder, 1988; Barabasi and Vicsek, 1991; Kantelhardt et al., 2002). In the model, a record \(\phi_{k}\) of length \(N=2^{n_{\rm max}}\) is constructed recursively as follows: In generation \(n=0\), the record elements are constant, i.e. \(\phi_{k}=1\) for all \(k=1,\ldots,N\). In the first step of the cascade (generation \(n=1\)), the first half of the series is multiplied by a factor \(a\) and the second half of the series is multiplied by a factor \(b\). This yields \(\phi_{k}=a\) for \(k=1,\ldots,N/2\) and \(\phi_{k}=b\) for \(k=N/2+1,\ldots,N\). The parameters \(a\) and \(b\) are between zero and one, \(0<a<b<1\). Note that we do not restrict the model to \(b=1-a\) as is often done in the literature (Feder, 1988). In the second step (generation \(n=2\)), we apply the process of step 1 to the two subseries, yielding \(\phi_{k}=a^{2}\) for \(k=1,\ldots,N/4\), \(\phi_{k}=ab\) for \(k=N/4+1,\ldots,N/2\), \(\phi_{k}=ba=ab\) for \(k=N/2+1,\ldots,3N/4\), and \(\phi_{k}=b^{2}\) for \(k=3N/4+1,\ldots,N\). In general, in step \(n+1\), each subseries of step \(n\) is divided into two subseries of equal length, and the first half of the \(\phi_{k}\) is multiplied by \(a\) while the second half is multiplied by \(b\). For example, in generation \(n=3\) the values in the eight subseries are \(a^{3}\), \(a^{2}b\), \(a^{2}b\), \(ab^{2}\), \(a^{2}b\), \(ab^{2}\), \(ab^{2}\), \(b^{3}\). After \(n_{\rm max}\) steps, the final generation has been reached, where all subseries have length 1 and no more splitting is possible. We note that the final record can be written as \(\phi_{k}=a^{n_{\rm max}-n(k-1)}b^{n(k-1)}\), where \(n(k)\) is the number of digits 1 in the binary representation of the index \(k\), e.g. \(n(13)=3\), since 13 corresponds to binary 1101. For this multiplicative cascade model, the formula for \(\tau(q)\) has been derived earlier (Feder, 1988; Barabasi and Vicsek, 1991; Kantelhardt et al., 2002). The result is \(\tau(q)=[-\ln(a^{q}+b^{q})+q\ln(a+b)]/\ln 2\) or \[h(q)=\frac{1}{q}-\frac{\ln(a^{q}+b^{q})}{q\ln 2}+\frac{\ln(a+b)}{\ln 2}. \tag{21}\] It is easy to see that \(h(1)=1\) for all values of \(a\) and \(b\). Thus, in this form the model is limited to cases where \(h(1)\), the exponent originally defined by Hurst in the \(R/S\) method, is equal to one. In order to generalize this multifractal cascade process such that any value of \(h(1)\) is possible, we have subtracted the offset \(\Delta h=\ln(a+b)/\ln 2\) from \(h(q)\). The constant offset \(\Delta h\) corresponds to additional long-term correlations incorporated in the multiplicative cascade model.
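The recursive construction translates directly into code; a short sketch generating the cascade generation by generation, with the closed form via the binary digit count used as a cross-check (the parameter values are again the Weser fit, chosen for illustration):

```python
import numpy as np

def multiplicative_cascade(a, b, n_max):
    """Multiplicative cascade: in each generation, every subseries is split in
    two, the first half multiplied by a and the second half by b."""
    phi = np.ones(1)
    for _ in range(n_max):
        phi = np.concatenate([a * phi, b * phi])
    return phi

a, b, n_max = 0.50, 0.68, 16
phi = multiplicative_cascade(a, b, n_max)  # 2**16 record elements

# Cross-check against phi_k = a**(n_max - n(k-1)) * b**n(k-1): the array is
# 0-based, so array index i corresponds to k = i + 1, and n(i) counts the
# 1-digits of i (e.g. n(13) = 3 for binary 1101).
i = 13
n_ones = bin(i).count("1")
assert np.isclose(phi[i], a ** (n_max - n_ones) * b**n_ones)
```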
For generating records without this offset, we rescale the power spectrum. First, we fast-Fourier transform (FFT) the simple multiplicative cascade data into the frequency domain. Then, we multiply all Fourier coefficients by \(f^{\Delta h}\), where \(f\) is the frequency. This way, the slope \(\beta\) of the power spectra \(E(f)\sim f^{-\beta}\) (the squares of the Fourier coefficients) is decreased from \(\beta=2h(2)-1=[2\ln(a+b)-\ln(a^{2}+b^{2})]/\ln 2\) to \(\beta^{\prime}=2[h(2)-\Delta h]-1=-\ln(a^{2}+b^{2})/\ln 2\), which is consistent with Eq. (17). Finally, a backward FFT is employed to transform the signal back into the time domain. A similar Fourier filtering technique has been used by Tessier et al. (1996) when generating surrogate runoff data.

### Comparison with Model Data

In order to see how well the extended multiplicative cascade model fits the real data (for a given river), we generate the model data as follows: (i) we determine \(a\) and \(b\) for the given river (by a best fit of Eq. (17)), (ii) we generate the simple multiplicative cascade model with the obtained \(a\) and \(b\) values, and (iii) we implement the proper long-term correlations as described above. Figure 8(a) shows the DFA analysis of the model data with parameters \(a\) and \(b\) determined for the river Weser. By comparing with Fig. 4(a) we see that the extended model gives the correct scaling of the fluctuation functions \(F_{q}(s)\) on time scales above the crossover. By comparing Fig. 8(b) with Fig. 4(c) we see that the shuffled model series becomes uncorrelated without multifractality, similar to the shuffled data. Below the crossover, however, the model does not yield the observed \(F_{q}(s)\) in the original data. In the following we show that in order to obtain the proper behaviour below the crossover, either seasonal trends that cannot be completely eliminated from the data or a different type of multifractality below the crossover, represented by different values of \(a\) and \(b\), have to be introduced. To show the effect of seasonal trends, we have multiplied the elements \(\phi_{i}\) of the extended cascade model by \(0.1+\sin^{2}(\pi i/365)\), this way generating a seasonal trend of period 365 in the variance. Figure 8(c) shows the DFA4 result for the generalized fluctuation functions, which now resembles the real data better than Fig. 8(a). Finally, in Fig. 8(d) we show the effect of a different multifractality below the crossover, where different parameters \(a\) and \(b\) characterize this regime. The results also show better agreement with the real data. When comparing Figs. 8(a,c,d) with Figs. 4 and 5, it seems that the Danube, Amper and Wertach fit better to Fig. 8(d), suggesting a different multifractality for small and large time scales, while the Weser, Susquehanna and Niger fit better to Fig. 8(c), where seasonal trends in the variance (and possibly in the skew) are responsible for the behaviour below the crossover.

Figure 8: The fluctuation functions \(F_{q}(s)\) obtained from the multifractal DFA4 for surrogate series generated by the extended multiplicative cascade model with parameters \(a=0.50\) and \(b=0.68\), which correspond to the values we obtained for the river Weser. The fluctuation functions \(F_{q}(s)\) for (a) the original \(\phi_{i}\) series and (b) the shuffled series are plotted versus scale \(s\) for the same values of \(q\) as in Figs. 4 and 5. In (c) the \(\phi_{i}\) have been multiplied by \(0.1+\sin^{2}(\pi i/365)\) before the analysis to simulate a seasonal trend. In (d) modified values of the parameters \(a\) and \(b\) (\(a=0.26\), \(b=0.59\)) have been used on scales \(s\leq 256\) to simulate the apparent stronger multifractality on smaller scales observed for most rivers. For the figure, results from 10 surrogate series of length 140 years were averaged.
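Steps (ii) and (iii) of the surrogate generation, together with the seasonal modulation used for Fig. 8(c), in a minimal self-contained sketch; the sign of the spectral rescaling (\(f^{+\Delta h}\)) follows from the target slope \(\beta^{\prime}\) given above:

```python
import numpy as np

# Step (ii): bare cascade with the fitted parameters (cf. the sketch above).
a, b, n_max = 0.50, 0.68, 16
phi = np.ones(1)
for _ in range(n_max):
    phi = np.concatenate([a * phi, b * phi])

def remove_offset(phi, a, b):
    """Step (iii): rescale the power spectrum so that the surrogate follows
    Eq. (17) rather than Eq. (21), i.e. subtract Delta_h = ln(a+b)/ln 2 from
    h(q) by multiplying the Fourier coefficients with f**Delta_h."""
    dh = np.log(a + b) / np.log(2.0)
    coeff = np.fft.rfft(phi)
    f = np.fft.rfftfreq(phi.size)
    coeff[1:] *= f[1:] ** dh          # leave the f = 0 (mean) mode untouched
    return np.fft.irfft(coeff, n=phi.size)

x = remove_offset(phi, a, b)                               # extended-model surrogate
i = np.arange(x.size)
x_seasonal = x * (0.1 + np.sin(np.pi * i / 365.0) ** 2)    # variant of Fig. 8(c)
```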
## 4 Conclusion

In this study, we analyzed the scaling behaviour of daily runoff time series of 18 representative rivers in southern Germany and 23 international rivers using both detrended fluctuation analysis and wavelet techniques. In all cases we found that the fluctuations exhibit self-affine scaling behaviour and long-term persistence on time scales ranging from weeks to decades. The fluctuation exponent \(H\) varies from river to river in a wide range between 0.55 and 0.95, showing non-universal scaling behaviour. We also studied the multifractal properties of the runoff time series using a multifractal generalization of the DFA method that was cross-checked with the WTMM technique. We found that the multifractal spectra of all 41 records can be described by a 'universal' function \(\tau(q)=-\ln(a^{q}+b^{q})/\ln 2\), which can be obtained from a generalization of the multiplicative cascade model and has solely two parameters \(a\) and \(b\) or, equivalently, the fluctuation exponent \(H=\frac{1}{2}-\ln(a^{2}+b^{2})/\ln 4\) and the width \(\Delta\alpha=\ln\frac{b}{a}/\ln 2\) of the singularity spectrum. Since our function for \(\tau(q)\) applies also for negative \(q\) values, we could derive the singularity spectra \(f(\alpha)\) from the fits. We have calculated and listed the values of \(H\), \(a\), \(b\), and \(\Delta\alpha\) for all records considered. There are no significant differences between their distributions for rivers in southern Germany and for international rivers. We also found that there is no significant dependence of these parameters on the size of the basin area, but there is a slight decrease of the multifractal width \(\Delta\alpha\) with increasing basin area. We suggest that the values of \(H\) and \(\Delta\alpha\) can be regarded as 'fingerprints' for each station or river, which can serve as an efficient, non-trivial test bed for state-of-the-art precipitation-runoff models. Apart from the practical use of Eq. (17) with the parameters \(a\) and \(b\), which was derived by extending the multiplicative cascade model and which can serve as a fingerprint for the river flows, we presently lack a physical model for this behaviour. It will be interesting to see whether physically based models, e.g. the random tree-structure model presented in Gupta et al. (1996), can be related to the multiplicative cascade model presented here. If so, this would give a physical explanation for how the multiplicative cascade model is able to simulate river flows. We have also investigated the origin of the multifractal scaling behaviour by comparison with the corresponding shuffled data. We found that the multifractality is removed by shuffling, which destroys the time correlations in the series while the distribution of the runoff values is not altered. After shuffling, we obtain \(h(q)\approx 1/2\) for all values of \(q\), indicating monofractal behaviour. Hence, our results suggest that the multifractality is not due to the existence of a broad, asymmetric (singular) probability density distribution (Anderson and Meerschaert, 1998), but due to a specific dynamical arrangement of the values in the time series, i.e. a self-similar 'clustering' of time patterns of values on different time scales.
We believe that our results will also be useful to improve the understanding of extreme values (singularities) in the presence of multifractal long-term correlations and trends. Finally, for an optimal water management in a given basin, it is essential to know whether an observed long-term fluctuation in discharge data is due to systematic variations (trends) or the result of long-term correlations. Our approach is also a step forward in this direction.

_Acknowledgments:_ We would like to thank Daniel Schertzer and Diego Rybski for valuable discussions. This work was supported by the BMBF, the DAAD, and the DFG. We would also like to thank the Water Management Authorities of Bavaria and Baden-Württemberg (Germany) and the Global Runoff Data Centre (GRDC) in Koblenz (Germany) for providing the observational data.

## References

* Anderson, P.L., Meerschaert, M.M., 1998. Modelling river flows with heavy tails. _Water Resources Research_, _34_(9), 2271-2280.
* Arneodo, A., Audit, B., Decoster, N., Muzy, J.-F., Vaillant, C., 2002. Wavelet based multifractal formalism: Applications to DNA sequences, satellite images of the cloud structure, and stock market data. In: Bunde et al. (2002), pp. 27-102.
* Barabasi, A., Vicsek, T., 1991. Multifractality of self-affine fractals. _Phys. Rev. A_, _44_, 2730-2733.
* Bunde, A., Havlin, S., Kantelhardt, J.W., Penzel, T., Peter, J.-H., Voigt, K., 2000. Correlated and uncorrelated regions in heart-rate fluctuations during sleep. _Phys. Rev. Lett._, _85_(17), 3736-3739.
* Bunde, A., Kropp, J., Schellnhuber, H.-J. (eds.), 2002. The science of disaster: Climate disruptions, market crashes, and heart attacks. Springer, Berlin.
* Davis, A., Marshak, A., Wiscombe, W., Cahalan, R., 1996. Multifractal characterization of intermittency in nonstationary geophysical signals and fields. In: Trevino, G., Harding, J., Douglas, B., Andreas, E. (eds.), Current topics in nonstationary analysis. World Scientific, Singapore, pp. 97-158.
* Eichner, J.F., Koscielny-Bunde, E., Bunde, A., Havlin, S., Schellnhuber, H.-J., 2003. Power-law persistence and trends in the atmosphere: A detailed study of long temperature records. _Phys. Rev. E_, _68_, 046133.
* Feder, J., 1988. Fractals. Plenum Press, New York.
* Frisch, U., Parisi, G., 1985. Fully developed turbulence and intermittency. In: Ghil, M., Benzi, R., Parisi, G. (eds.), Turbulence and predictability in geophysical fluid dynamics. North Holland, New York, pp. 84-92.
* Gupta, V.K., Mesa, O.J., Dawdy, D.R., 1994. Multiscaling theory of flood peaks: Regional quantile analysis. _Water Resources Research_, _30_(12), 3405-3421.
* Gupta, V.K., Dawdy, D.R., 1995. Physical interpretations of regional variations in the scaling exponents of flood quantiles. In: Kalma, J.D. (ed.), Scale issues in hydrological modelling. Wiley, Chichester, pp. 106-119.
* Gupta, V.K., Castro, S.L., Over, T.M., 1996. On scaling exponents of spatial peak flows from rainfall and river network geometry. _J. Hydrol._, _187_(1-2), 81-104.
* Hurst, H.E., 1951. Long-term storage capacity of reservoirs. _Transactions of the American Society of Civil Engineers_, _116_, 770-799.
* Hurst, H.E., Black, R.P., Simaika, Y.M., 1965. Long-term storage: An experimental study. Constable & Co. Ltd., London.
* Kantelhardt, J.W., Koscielny-Bunde, E., Rego, H.H.A., Havlin, S., Bunde, A., 2001. Detecting long-range correlations with detrended fluctuation analysis. _Physica A_, _295_, 441-454.
* Kantelhardt, J.W., Zschiegner, S.A., Koscielny-Bunde, E., Havlin, S., Bunde, A., Stanley, H.E., 2002. Multifractal detrended fluctuation analysis of nonstationary time series. _Physica A_, _316_, 87-114.
* Kantelhardt, J.W., Rybski, D., Zschiegner, S.A., Braun, P., Koscielny-Bunde, E., Livina, V., Havlin, S., Bunde, A., 2003. Multifractality of river runoff and precipitation: Comparison of fluctuation analysis and wavelet methods. _Physica A_, _330_, 240-245.
* Koscielny-Bunde, E., Bunde, A., Havlin, S., Roman, H.E., Goldreich, Y., Schellnhuber, H.-J., 1998. Indication of a universal persistence law governing atmospheric variability. _Phys. Rev. Lett._, _81_(3), 729-732.
* Lavallee, D., Lovejoy, S., Schertzer, D., 1993. Nonlinear variability and landscape topography: analysis and simulation. In: DeCola, L., Lam, N. (eds.), Fractals in Geography. PTR Prentice-Hall, pp. 158-192.
* Livina, V.N., Ashkenazy, Y., Braun, P., Monetti, R., Bunde, A., Havlin, S., 2003a. Nonlinear volatility of river flux fluctuations. _Phys. Rev. E_, _67_, 042101.
* Livina, V., Ashkenazy, Y., Kizner, Z., Strygin, V., Bunde, A., Havlin, S., 2003b. A stochastic model of river discharge fluctuations. _Physica A_, _330_, 283-290.
* Lovejoy, S., Schertzer, D., 1991. Nonlinear Variability in Geophysics: Scaling and Fractals. Kluwer Academic Publ., Dordrecht, Netherlands.
* Mandelbrot, B.B., Wallis, J.R., 1968. Noah, Joseph, and operational hydrology. _Water Resources Research_, _4_(5), 909.
* Mandelbrot, B.B., Wallis, J.R., 1969. Some long-run properties of geophysical records. _Water Resources Research_, _5_(2), 321-340.
* Matsoukas, C., Islam, S., Rodriguez-Iturbe, I., 2000. Detrended fluctuation analysis of rainfall and streamflow time series. _J. Geophys. Res. Atmosph._, _105_(D23), 29165-29172.
* Montanari, A., Rosso, R., Taqqu, M.S., 2000. A seasonal fractional ARIMA model applied to the Nile River monthly flows at Aswan. _Water Resources Research_, _36_(5), 1249-1259.
* Pandey, G., Lovejoy, S., Schertzer, D., 1998. Multifractal analysis of daily river flows including extremes for basins of five to two million square kilometers, one day to 75 years. _Journal of Hydrology_, _208_, 62-81.
* Peng, C.-K., Buldyrev, S.V., Havlin, S., Simons, M., Stanley, H.E., Goldberger, A.L., 1994. Mosaic organization of DNA nucleotides. _Phys. Rev. E_, _49_(2), 1685-1689.
* Peters, O., Hertlein, C., Christensen, K., 2002. A complexity view of rainfall. _Phys. Rev. Lett._, _88_, 018701.
* Rodriguez-Iturbe, I., Rinaldo, A., 1997. Fractal River Basins: Chance and Self-Organization. Cambridge University Press, Cambridge.
* Schertzer, D., Lovejoy, S., 1987. Physical modelling and analysis of rain and clouds by anisotropic scaling multiplicative processes. _J. Geophys. Res. Atmosph._, _92_, 9693.
* Sivapalan, M., Jothityangkoon, C., Menabde, M., 2002. Linearity and nonlinearity of basin response as a function of scale: Discussion of alternative definitions. _Water Resour. Res._, _38_, 1012.
* Talkner, P., Weber, R.O., 2000. Power spectrum and detrended fluctuation analysis: Application to daily temperatures. _Phys. Rev. E_, _62_(1), 150-160.
* Taqqu, M.S., Teverovsky, V., Willinger, W., 1995. Estimators for long-range dependence: An empirical study. _Fractals_, _3_, 785-798.
* Tessier, Y., Lovejoy, S., Hubert, P., Schertzer, D., Pecknold, S., 1996. Multifractal analysis and modelling of rainfall and river flows and scaling, causal transfer functions. _J. Geophys. Res. Atmosph._, _101_(D21), 26427-26440.
* Turcotte, D.L., Greene, L., 1993. A scale-invariant approach to flood-frequency analysis. _Stoch. Hydrol. Hydraul._, _7_, 33-40.
* Weber, R.O., Talkner, P., 2001. Spectra and correlations of climate data from days to decades. _J. Geophys. Res. Atmosph._, _106_(D17), 20131.
We study temporal correlations and multifractal properties of long river discharge records from 41 hydrological stations around the globe. To detect long-term correlations and multifractal behaviour in the presence of trends, we apply several recently developed methods [detrended fluctuation analysis (DFA), wavelet analysis, and multifractal DFA] that can systematically detect and overcome nonstationarities in the data at all time scales. We find that above some crossover time that is usually several weeks, the daily runoffs are long-term correlated, being characterized by a correlation function \(C(s)\) that decays as \(C(s)\sim s^{-\gamma}\). The exponent \(\gamma\) varies from river to river in a wide range between 0.1 and 0.9. The power-law decay of \(C(s)\) corresponds to a power-law increase of the related fluctuation function \(F_{2}(s)\sim s^{H}\), where \(H=1-\gamma/2\). We also find that in most records, for large times, weak multifractality occurs. The Renyi exponent \(\tau(q)\) for \(q\) between \(-10\) and \(+10\) can be fitted to the remarkably simple form \(\tau(q)=-\ln(a^{q}+b^{q})/\ln 2\), with solely two parameters \(a\) and \(b\) between 0 and 1 with \(a+b\geq 1\). This type of multifractality is obtained from a generalization of the multiplicative cascade model.
Summarize the following text.
arxiv-format/0305208v1.md
CERN-TH/2003-078
HD-THEP-03-17

**Renormalizability of Gauge Theories in Extra Dimensions**

Holger Gies

_CERN, Theory Division, CH-1211 Geneva 23, Switzerland_
_and_
_Institut für theoretische Physik, Universität Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany_

_E-mail: [email protected]_

## 1 Introduction

The idea of supplementing our spacetime by compact extra dimensions has recently triggered a vast amount of research. The suggestion that the inverse radius of these extra dimensions does not have to be of Planck-scale order but might even range down to TeV scales has been inspiring and provided us with new machinery for tackling the open problems of the standard model and its extensions. Extra dimensions have at least taught us to consider these problems from another viewpoint [1, 2]. Compact extra dimensions receive strong motivation from string theory, where they appear in abundance. In this context, extra-dimensional field theories are regarded only as effective theories with a limited energy range of validity. Problems of defining extra-dimensional models as fundamental quantum field theories do not occur from this point of view. However, since a convincing and unambiguous derivation of extra-dimensional extensions of the standard model from string theory is not in sight, the important question remains as to whether extra-dimensional models may exist as fundamental quantum field theories. So far, this question has not been answered in the affirmative. The price to be paid for any deviation from the critical dimension \(D=4\) towards extra dimensions is the impossibility of renormalizing such theories within perturbation theory. This "perturbative nonrenormalizability" is usually taken as a strong hint that the quantum fields of these theories cannot be fundamental as well as interacting. In technical terms, one expects that shifting the ultraviolet (UV) cutoff to infinity yields a zero renormalized coupling (triviality). Nevertheless, perturbative nonrenormalizability does not constitute a "no-go" theorem. Despite this tarnish, theories can be fundamental and mathematically consistent down to arbitrarily small length scales, as proposed in Weinberg's "asymptotic safety" scenario [3]. It assumes the existence of a non-Gaussian (=nonzero) UV fixed point under the renormalization group (RG) operation at which the continuum limit can be taken. The theory is "nonperturbatively renormalizable" in Wilson's sense. If the non-Gaussian fixed point is UV attractive for finitely many couplings in the action, the RG trajectories along which the theory can flow as we send the cutoff to infinity are labeled by only a finite number of physical parameters. Then the theory is as predictive as any perturbatively renormalizable theory, and high-energy physics can be well separated from low-energy physics without tuning (infinitely) many parameters. Indeed, there are a number of well-established examples of theories which are perturbatively nonrenormalizable but nonperturbatively renormalizable, such as the nonlinear sigma model in \(D=3\) and models with four-fermion interactions in \(D=3\) [4, 5]. Quantum gravity in \(D=2+\epsilon\) belongs to this class, and recently, evidence has been collected for a non-Gaussian UV fixed point even in four-dimensional gravity [6]. In this work, we study the renormalizability status of gauge theories beyond four dimensions, since they are the crucial element for particle-physics models in extra dimensions.
We also confine ourselves to nonsupersymmetric theories in order to avoid an abundant particle content beyond that of the standard model.1 For an SU(\(N\)) gauge theory, the classical action is given by Footnote 1: With a sufficient amount of supersymmetry and further structure, large classes of models may, of course, be constructed in higher dimensions that exhibit the desired non-Gaussian UV fixed point [7, 8]. \[S_{\rm cl}=\int d^{D}x\,\frac{1}{4}F^{a}_{\mu\nu}F^{a}_{\mu\nu},\quad F^{a}_{\mu\nu}=\partial_{\mu}A^{a}_{\nu}-\partial_{\nu}A^{a}_{\mu}+\bar{g}_{D}f^{abc}A^{b}_{\mu}A^{c}_{\nu}, \tag{1}\] where, for \(D>4\), the bare coupling \(\bar{g}_{D}\) has negative mass dimension \([\bar{g}_{D}]=(4-D)/2\). Though the details of the compactification of the extra dimensions determine the properties of the four-dimensional low-energy theory, they are irrelevant for the far UV behavior; the short-distance fluctuations simply do not "see" the compactness of the extra dimensions. Hence, a suitable compactification is implicitly assumed in the following, while its effects on the UV behavior can be safely neglected. We will comment on RG effects at and above the compactification scale at the end of this work. In fact, it was noted long ago [9] that the dimensionless rescaled gauge coupling, \(g^{2}\sim k^{D-4}\bar{g}_{D}^{2}\), where \(k\) denotes an RG momentum scale, exhibits a non-Gaussian UV fixed point for SU(\(N\)) gauge theories in an \(\epsilon\) expansion, \[\partial_{t}g^{2}\equiv\beta_{g^{2}}=(D-4)g^{2}-\frac{22N}{3}\,\frac{g^{4}}{16\pi^{2}}+\dots,\quad\partial_{t}\equiv k\frac{d}{dk}, \tag{2}\] where \(\epsilon=D-4\ll 1\) has to be assumed. The UV fixed point of the coupling, being a zero of the \(\beta_{g^{2}}\) function with negative slope, can be found at \(g_{*}^{2}=(24\pi^{2}/11N)\,\epsilon\), see Fig. 1. The existence of the UV fixed point is a simple consequence of the purely dimensional running, implying a positive term \(\sim g^{2}\), and asymptotic freedom in four dimensions, i.e., a negative term \(\sim g^{4}\). The fixed point can be associated with a second-order phase transition between a deconfined and a "confining" phase.2 At the fixed point, the continuum limit can be taken, yielding a renormalized theory. The dimensionful renormalized coupling is asymptotically free, \(\bar{g}_{D}^{2}\sim g_{*}^{2}/k^{D-4}\to 0\) for increasing momentum scale \(k\), and the static quark potential becomes proportional to \(1/r\), independent of the dimensionality [9]. Obviously, these results are not trustworthy in five dimensions, with \(\epsilon=1\), and beyond, where the fixed-point coupling is large. Footnote 2: Whether or not standard confinement criteria are truly satisfied in the "confining" phase in \(D=4+\epsilon\) has, of course, not yet been checked. The lesson to be learned is that the question of renormalizability of extra-dimensional gauge theories is nonperturbative in nature, and perturbative power-counting arguments are simply useless. To answer this question, a number of lattice studies have been performed in \(D=5,6\) [10, 11, 12, 13, 14], but no real evidence for a non-Gaussian fixed point has been found (we will comment on these studies in more detail below). This puts the relevance of the \(\epsilon\) expansion for \(D=5,6,\dots\) even more into question.
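For orientation, the \(\epsilon\)-expansion fixed point can be made explicit numerically; a small illustration (not part of the original analysis) that integrates the one-loop flow of Eq. (2) towards the UV and recovers \(g_{*}^{2}=(24\pi^{2}/11N)\,\epsilon\):

```python
import numpy as np
from scipy.integrate import solve_ivp

def beta_one_loop(t, g2, D, N):
    """One-loop beta function of Eq. (2) for the dimensionless coupling g^2."""
    return (D - 4.0) * g2 - (22.0 * N / 3.0) * g2**2 / (16.0 * np.pi**2)

D, N = 4.1, 2                                        # epsilon = 0.1
g2_star = 24.0 * np.pi**2 / (11.0 * N) * (D - 4.0)   # analytic fixed point

# Flowing towards the UV (t = ln k increasing), a small initial coupling is
# attracted to the non-Gaussian fixed point.
sol = solve_ivp(beta_one_loop, (0.0, 200.0), [0.1 * g2_star], args=(D, N))
print(f"g2* = {g2_star:.4f}, g2(t large) = {sol.y[0, -1]:.4f}")
```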
Figure 1: **extrapolated \(\boldsymbol{\beta_{g^{2}}}\) function in \(\boldsymbol{\epsilon}\) expansion:** whereas the \(\beta_{g^{2}}\) function is negative for \(D=4\), the dimensional running of the dimensionless coupling \(g^{2}\) induces a positive branch of \(\beta_{g^{2}}\) for small \(g^{2}\), leading to a non-Gaussian UV fixed point (violet dots) for any value of \(D\). Of course, the \(\epsilon=D-4\) expansion is only justified for \(\epsilon\ll 1\), such that this plot represents a naive extrapolation.

Incidentally, the UV fixed points which are discussed in the context of "GUT precursors" [15] are the direct analogue of the UV fixed point of the \(\epsilon\) expansion with the contribution of the extra-dimensional modes taken into account (here the RG scale \(k\) is related to the number of Kaluza-Klein modes contributing to the flow). It has been argued that the perturbative expansion parameter \(g^{2}/(4\pi)^{2}\) can be small even at the fixed point if the gauge group is sufficiently large. This would justify the use of perturbation theory and consequently the existence of the fixed point. However, as a caveat, let us remark that the smallness of the expansion parameter is not sufficient for perturbativity. For instance, the anomalous dimension at the non-Gaussian fixed point will always be large (cf. below), independent of the smallness of the fixed-point value itself. Such large anomalous dimensions have a strong influence on, e.g., the form of the effective gluon propagator [8, 16]. In Sect. 2, we perform a nonperturbative analysis of the RG flow of gauge theories in \(D>4\) without the need of small \(\epsilon\) or \(g^{2}\). But even without this quantitative tool, a qualitative scenario can be developed which relies on a few physical prerequisites. As is apparent from the \(\epsilon\) expansion Eq. (2) but also valid beyond, the \(\beta_{g^{2}}\) function will always have the structure \[\partial_{t}g^{2}=\beta_{g^{2}}=(D-4)\,g^{2}+\beta_{\rm fluc}^{D}(g^{2}), \tag{3}\] where \(\beta_{\rm fluc}^{D}(g^{2})\) is the quantum-fluctuation-induced part. For small coupling, its expansion has the form \(\beta_{\rm fluc}^{D}(g^{2})\simeq-b_{0}^{D}\,g^{4}+\dots\), with \(b_{0}^{D}>0\) being the analogue of the one-loop coefficient, which will generally depend on \(D\).3 From this, we deduce that a non-Gaussian UV fixed point exists if \(\beta_{\rm fluc}^{D}(g^{2})/g^{2}\leq-(D-4)\) for some \(g^{2}>0\). For instance, such a fixed point always exists if \(\beta_{\rm fluc}^{D}(g^{2})\) is unbounded from below, as is the case to lowest order in the \(\epsilon\) expansion. Footnote 3: Contrary to \(D=4\), the first \(\beta_{g^{2}}\) function coefficients are not universal in \(D>4\), but depend on the regularization scheme. Instead, a universal object is, e.g., the "critical exponent" \(\nu=-d\beta_{g^{2}}/dg^{2}\big|_{g^{2}=g_{*}^{2}}\). In the \(\epsilon\) expansion, the nonuniversal terms appear at order \(\epsilon g^{4}\) and are not displayed in Eq. (2). As we will show below within the exact RG framework, the statement \(b_{0}^{D}>0\) holds, independent of the regulator. Let us now assume that \(\beta_{\rm fluc}^{D}(g^{2})\) is a smooth function of \(D\), such that its functional form remains qualitatively similar to \(\beta_{\rm fluc}^{D=4}(g^{2})\) at least for a small number of extra dimensions (this will indeed be a result of our calculation in Sect. 2).
As an analogue, one may think of dimensionally regularized amplitudes with divergences already subtracted but with full dependence on \(D\) retained. As a first guess, it is tempting to conjecture that a UV fixed point always exists in this case. This is because in \(D=4\), the gauge coupling is frequently expected to diverge in the infrared at a "confinement scale". This would be a natural consequence of \(\beta_{\rm fluc}^{D=4}(g^{2})\) being unbounded from below, with similar implications for \(D>4\). However, the situation is more subtle because of the inherent dependence of the running coupling on its nonperturbative definition. Here, we are interested in the UV behavior of gauge-invariant operators that are building blocks of the effective action, and we expect possible non-Gaussian fixed points to be related to low-dimensional operators. Hence, we have to look at the running of those couplings which are prefactors of whole operators such as, e.g., \(F_{\mu\nu}^{a}F_{\mu\nu}^{a}\); by contrast, the running coupling defined, e.g., by the three-gluon vertex at various momenta would be useless, because infinitely many (derivative) operators can contribute to such a coupling. An expansion in terms of low-dimensional operators suggests the study of a Wilsonian effective action (within a gauge-invariant formalism, as used below) of the form \[\Gamma_{k}=\int d^{D}x\left(\frac{Z_{F,k}}{4}F^{a}_{\mu\nu}F^{a}_{\mu\nu}+\frac{Y_{k}}{2}(D^{ab}_{\mu}F^{b}_{\kappa\lambda})^{2}+\frac{W_{2,k}}{2}\frac{1}{16}(F^{a}_{\mu\nu}F^{a}_{\mu\nu})^{2}+\frac{\widetilde{W}_{2,k}}{2}\frac{1}{16}(\widetilde{F}^{a}_{\mu\nu}F^{a}_{\mu\nu})^{2}+\ldots\right), \tag{4}\] where \(k\) is the scale at which we consider the theory with all fluctuations with momenta \(p^{2}>k^{2}\) already integrated out; the dependence of the wave function renormalization \(Z_{F,k}\) and the generalized couplings \(Y,W_{2},\widetilde{W}_{2}\) on \(k\) has been displayed explicitly. A useful definition of the dimensionless running gauge coupling now is \[g^{2}=k^{D-4}\,Z^{-1}_{F,k}\,\bar{g}^{2}_{D}, \tag{5}\] such that a non-Gaussian UV fixed point in \(g^{2}\) corresponds to a renormalizable operator \(\sim F^{a}_{\mu\nu}F^{a}_{\mu\nu}\). Further UV fixed points may exist in other couplings corresponding to further renormalizable operators, which then form the UV critical surface \({\cal S}_{\rm UV}\) of RG trajectories hitting the UV fixed point as we send the cutoff to infinity, \(k\to\infty\). The running of the couplings depends also on the regularization. Working with the exact renormalization group, we will use a regulator that acts as a mass term for modes with momenta smaller than \(k\) but vanishes for the high-momentum modes larger than \(k\). Studying the flow of couplings with respect to a variation of \(k\) allows us to probe the quantum system at different momentum scales. The exact RG hence provides a natural setting to address the question of renormalizability, i.e., the behavior of the couplings for \(k\to\infty\). The regularization technique is particularly advantageous for the description of the decoupling of massive modes. For a given RG cutoff scale \(k\), only particles with masses \(m^{2}\lesssim k^{2}\) can contribute to the RG flow of running couplings. Heavy particles with \(m^{2}\gg k^{2}\) are already integrated out, and no modes are left that could possibly drive the flow.
In \\(D=4\\) Yang-Mills theories, we are certain to encounter a mass gap in the spectrum of gluonic fluctuations. Therefore, once our RG cutoff scale \\(k\\) has dropped below the Yang-Mills mass gap in the infrared, no fluctuations are left to renormalize the couplings any further. A freeze-out of all couplings is naturally expected in the IR for these regularizations. In particular, we expect an IR fixed point for the running gauge coupling in \\(D=4\\), \\(g^{2}_{*,{\\rm IR}}>0\\) with \\(\\beta_{g^{2}}(g^{2}_{*,{\\rm IR}})\\equiv\\beta^{D=4}_{\\rm fluc}(g^{2}_{*,{\\rm IR }})=0\\) (not to be confused with the desired UV fixed point for \\(D>4\\)), see Fig. 2. Finally assuming that \\(\\beta^{D}_{\\rm fluc}(g^{2})\\) for \\(D>4\\) exhibits qualitatively the same functional form as in \\(D=4\\), we arrive at the following scenario. Owing to the dimensional scaling \\(\\sim(D-4)g^{2}\\), the \\(\\beta_{g^{2}}\\) function starts out positive for small \\(g^{2}\\), such that the Gaussian fixed point is always IR attractive in \\(D>4\\). For sufficiently small \\(D\\), the non-Gaussian IR fixed point persists as the analogue of \\(g^{2}_{*,{\\rm IR}}\\) in \\(D=4\\). In addition to that, a non-Gaussian UV fixed point arises in between, which is the _alter ego_ of the fixed point of the \\(\\epsilon\\) expansion. But contrary to the \\(\\epsilon\\) expansion, the non-Gaussian fixed points exist only up to a critical dimension \\(D=D_{\\rm cr}\\). Beyond \\(D_{\\rm cr}\\), the strong dimensional running simply wins out over the fluctuation-induced running, and the non-Gaussian fixed points vanish. This scenario is sketched in Fig. 2. As a result, we expect that extra-dimensional Yang-Mills theory is truly nonrenormalizable for \\(D>D_{\\rm cr}\\). But the gauge theories with a non-Gaussian UV fixed point for \\(4<D\\leq D_{\\rm cr}\\) are strong candidates for nonperturbatively renormalizable fundamental field theories. Therefore, this scenario has the potential to solve the long-standing contradiction between the \\(\\epsilon\\) expansion and the lattice results. The crucial quantity is the size of \\(D_{\\rm cr}\\) and, in particular, whether \\(4<D_{\\rm cr}<5\\), which would rule out extra-dimensional gauge models based purely on quantum field theory. In the next section, an estimate for \\(D_{\\rm cr}\\) will be derived within the framework of the exact renormalization group. These results will be summarized and discussed in Sect. 3. ## 2 RG flow of gauge theories in extra dimensions Our quantitative investigation is based on an exact RG flow equation for the effective average action \\(\\Gamma_{k}\\)[17] evaluated within a truncation which is discussed in detail in [18]. Here, we briefly summarize the main ingredients and focus on the generalization to \\(D\\) dimensions. The RG flow equation describes the evolution of the effective average action \\(\\Gamma_{k}\\) which governs the physics at a scale \\(k\\). The effects of all quantum fluctuations with momenta ranging from the UV down to \\(k\\) are already included in \\(\\Gamma_{k}\\), whereas the modes from \\(k\\) to Figure 2: \\(\\boldsymbol{\\beta_{g^{2}}}\\) **function scenario:** the lowest curve corresponds to a (\\(D\\)=4)-dimensional \\(\\beta_{g^{2}}\\) function with an IR fixed point in addition to the Gaussian UV fixed point (the arrows mark the flow from UV to IR). 
The flow equation can formally be written as \[\partial_{t}\Gamma_{k}=\frac{1}{2}\,{\rm Tr}\,\Big[\partial_{t}R_{k}\left(\Gamma_{k}^{(2)}+R_{k}\right)^{-1}\Big],\quad\partial_{t}\equiv k\frac{d}{dk}, \tag{6}\] where \(\Gamma_{k}^{(2)}\) denotes the second functional derivative of \(\Gamma_{k}\), corresponding to the inverse exact propagator at the scale \(k\). The momentum-dependent mass-like regulator \(R_{k}\) specifies the details of the regularization. The solution of Eq. (6) gives an RG trajectory that interpolates between the microscopic bare UV action, \(\Gamma_{k\to\infty}\to S_{\rm bare}\), and the full quantum effective action \(\Gamma_{k\to 0}\equiv\Gamma\), the 1PI generating functional. Since Eq. (6) is equivalent to an infinite tower of coupled first-order differential equations, we usually have to rely on approximate solutions of a subset of this infinite tower. A powerful tool is the method of truncations, in which we restrict the effective action to a limited number of operators that are considered to be the most relevant ones for a given physical problem. In [18, 19], a truncation of the form \[\Gamma_{k}[A]=\int W_{k}(\theta),\quad\theta:=\frac{1}{4}F_{\mu\nu}^{a}F_{\mu\nu}^{a}, \tag{7}\] was advocated. This truncation still includes infinitely many operators, \(W_{k}(\theta)=W_{1,k}\theta+W_{2,k}\theta^{2}/2+W_{3,k}\theta^{3}/3!+\dots\), with corresponding couplings \(W_{i,k}\), but is simple enough to be dealt with. Although a quantitative influence of further operators not contained in Eq. (7) has to be expected, this truncation has demonstrated its capability of controlling strong-coupling phenomena in \(D=4\) at least qualitatively [18]. In addition to the gauge-invariant gluonic operators in Eq. (7), we include the standard ghost and gauge-fixing terms, but neglect any non-trivial running in this sector. We choose the background-field gauge and its adaptation to the flow-equation formalism [20].4 As an important ingredient, we use a regulator \(R_{k}\), which adjusts itself to the spectral flow of \(\Gamma_{k}^{(2)}\) in order to account for a possible strong deformation of the fluctuation spectrum in the nonperturbative domain [18, 23]. For a detailed discussion of all explicit and implicit approximations and optimizations used in this work, see [18]. Footnote 4: For the flow equation in covariant gauges, see also [21]; for the construction of a flow-equation formalism based on gauge-invariant variables, we refer to [22]. Inserting this truncation into the flow equation (6) leads to a differential equation for the function \(W_{k}\), which may symbolically be written as \[\partial_{t}W_{k}(\theta)={\cal F}[\partial_{\theta}W_{k},\partial_{\theta\theta}W_{k},\partial_{t}\partial_{\theta}W_{k},\partial_{t}\partial_{\theta\theta}W_{k},\eta,\bar{g}_{D}^{2}], \tag{8}\] where the extensive functional \({\cal F}\) depends on derivatives of \(W_{k}\), on the bare coupling \(\bar{g}_{D}^{2}\), and on the anomalous dimension \[\eta=-\frac{1}{Z_{F,k}}\,\partial_{t}Z_{F,k}. \tag{9}\]
\\tag{9}\\] Here we have identified \\(Z_{F,k}\\equiv W_{1,k}\\), cf. Eq. (4) (a propertime-integral representation of \\({\\cal F}\\) is given in Eq. (29) of [18]). The definition (5) of the running coupling implies for the \\(\\beta_{g^{2}}\\) function, \\[\\partial_{t}g^{2}=\\beta_{g^{2}}(g^{2})=(D-4+\\eta)\\,g^{2}, \\tag{10}\\] such that we can identify \\(\\beta_{\\rm{fluc}}^{D}=\\eta\\,g^{2}\\). A non-Gaussian fixed point exists if \\(D-4+\\eta=0\\) for \\(g^{2}=g_{*}^{2}>0\\). In the language of naive RG power-counting, the anomalous dimension of the gauge field has to become large enough to turn the gauge-field interactions from \"irrelevant\" to \"marginal\" or \"relevant\" in \\(D>4\\). Equation (8) is still an extremely complicated equation, and even numerical solutions will require strong analytical guidance. Therefore, we concentrate on the lowest-order term \\(W_{1,k}=Z_{F,k}\\), from which we can deduce the running coupling. At this point, it should be stressed that the spectrally adjusted regulator used in this work strongly entangles the flows of the single couplings \\(W_{i,k}\\). As discussed in [18], a consistent expansion requires that the _flows_ of \\(W_{2,k},W_{3,k},\\dots\\) contribute to the running coupling even if \\(W_{2,k},W_{3,k},\\dots\\) themselves are dropped in the end.5 This results in an \"all-order\" coupling expansion for the anomalous dimension of the form (see Eq. (40) of [18]) Footnote 5: Neglecting the flows of \\(W_{2,k},W_{3,k},\\dots\\) leads to an unphysical pole in the anomalous dimension, \\(\\eta\\to-\\infty\\) for \\(g^{2}\\to g_{\\rm pole}^{2}\ earrow\\), which, if taken seriously, would induce a non-Gaussian UV fixed point for all \\(4<D<26\\)[19]. \\[\\eta=\\sum_{m=0}^{\\infty}a_{m}\\,G^{m},\\quad G\\equiv\\frac{g^{2}}{2(4\\pi)^{D/2}}, \\tag{11}\\] where the coefficients \\(a_{m}\\) depend on the dimension \\(D\\), the number of colors \\(N\\), and the details of the shape function \\(r(y)\\) of the regulator \\(R_{k}(p^{2})=p^{2}\\,r(p^{2}/k^{2})\\); this shape function has to satisfy \\(r(y)\\to 1/y\\) for \\(y\\to 0\\) and should be positive and drop off sufficiently fast for \\(y\\to\\infty\\) in order to provide for proper IR and UV regularizations but is otherwise arbitrary. It is instructive to take a closer look at the one-loop term, i.e., the \\(m=1\\) term of Eq. (11): \\[\\eta=-\\frac{26-D}{3}\\,N\\,h_{2-D/2}\\,\\,\\frac{g^{2}}{(4\\pi)^{D/2}}+\\dots,\\quad h _{-j}=\\frac{1}{\\Gamma(j+1)}\\int_{0}^{\\infty}dy\\,y^{j}\\frac{d}{dy}\\,\\frac{y\\,r ^{\\prime}(y)}{1+r(y)}. \\tag{12}\\] For \\(D=4\\), we find \\(h_{0}=1\\) because the \\(y\\) integrand is a total derivative and fixed to \\(-1\\) at the lower bound. Hence, we rediscover the correct one-loop \\(\\beta_{g^{2}}\\) function coefficient which is universal, i.e., independent of the regulator in \\(D=4\\), as expected. By contrast, this coefficient does depend on the regulator for \\(D>4\\) which signals the scheme-dependence of the higher-dimensional \\(\\beta_{g^{2}}\\) function already to lowest order in the fluctuations; however, for all admissible regulators, this \\(\\beta_{g^{2}}\\) function coefficient is negative and therefore a universal property, justifying our claim in footnote 3. In the following, we employ an exponential regulator shape function \\(r(y)=1/(e^{y}-1)\\) which is commonly used and for which the \\(D=4\\) two-loop coefficient in our approximation is reproduced to within 99% for SU(2). 
It turns out that the expansion (11) is asymptotic and the coefficients \(a_{m}\) grow faster than factorially. This does not come as a surprise, since small-coupling expansions in field theory are expected to be asymptotic expansions. Moreover, since the expansion is derived from a finite integral representation of the functional \({\cal F}\) in Eq. (8), we know that a finite integral representation for this asymptotic series must exist. From the method of Borel resummation [24], it is well known that good approximations of the desired integral representation can be obtained by taking only the leading growth of the coefficients into account. This program has been performed successfully in [18] for \(D=4\), which we generalize to \(D>4\) in Appendix A. The finite resummed integral representation of the anomalous dimension resulting from the leading- and subleading-growth coefficients of the series (11) can be found in Eqs. (A.3), (A.5), and (A.8). As asserted in the introduction, the fluctuation-induced contribution \(\beta_{\rm fluc}^{D}(g^{2})\) to the \(\beta_{g^{2}}\) function varies smoothly as a function of \(D\), and its properties remain qualitatively the same for \(D\) not too far from 4. For SU(2), the function \(\beta_{g^{2}}=(D-4+\eta^{{\rm SU}(2),D})g^{2}\) is displayed in Fig. 3 for increasing \(D\), confirming the scenario developed in the introduction. A non-Gaussian UV fixed point is found for \(4<D\leq D_{\rm cr}\) dimensions with \[D_{\rm cr}\simeq 5.46,\quad{\rm for\ SU(2)}. \tag{13}\] Beyond \(D_{\rm cr}\), the \(\beta_{g^{2}}\) function remains strictly positive and the dimensional running wins out over the fluctuation-induced running.

Figure 3: \(\boldsymbol{\beta_{g^{2}}}\) **function for SU(2):** the SU(2) \(\beta_{g^{2}}\) function is plotted versus the dimensionless coupling \(\alpha=g^{2}/(4\pi)\) for increasing dimensionality \(D\). For \(4<D\leq D_{\rm cr}\simeq 5.46\), a non-Gaussian fixed point exists (big violet dots). Beyond \(D_{\rm cr}\), the pure dimensional running becomes dominant, whereas the fluctuations induce only a modulation of the \(\beta_{g^{2}}\) function.

For SU(3), we are not able to resolve the full color structure completely. Therefore, we simply compute the \(\beta_{g^{2}}\) function by scanning the whole Cartan subalgebra, as described in Appendices A and B. The error introduced by this strategy is rather small in the coupling region of interest (\(\alpha\lesssim 6\)). Figure 4 depicts our numerical results, and we identify the critical dimension as \[D_{\rm cr}\simeq 5.26\pm 0.01,\quad{\rm for\ SU(3)}, \tag{14}\] where the uncertainty arises from the unresolved color structure. The value of the critical dimension as well as the value of the non-Gaussian fixed point in a given dimension \(D<D_{\rm cr}\) decrease with increasing \(N\). We expect this behavior to persist for higher gauge groups. For instance, we located the critical dimension at \(D_{\rm cr}\simeq 5\ldots 5.1\) for SU(5) (the unresolved color structure inhibits a more precise estimate).

## 3 Conclusions

The Wilsonian approach to renormalization allows us to replace the restrictive concept of perturbative renormalizability by Weinberg's principle of asymptotic safety. A theory is asymptotically safe if its RG flow is characterized by a finite number of ultraviolet fixed points. Whereas perturbative renormalization requires these fixed points to be Gaussian, non-Gaussian fixed points can equally serve for a continuum definition of quantum field theories. These theories are as predictive and as fundamental as their perturbatively renormalizable counterparts, and the finite number of UV fixed points determines the number of physical parameters. We have searched for non-Gaussian UV fixed points in perturbatively nonrenormalizable (\(D>4\))-dimensional Yang-Mills theories, since the prospect of a fundamental extra-dimensional quantum field theory without the need of a penumbral embedding in a larger framework is promising.
Whereas perturbative renormalization requires these fixed points to be Gaussian, non-Gaussian fixed points can equally serve for a continuum definition of quantum field theories. These theories are as predictive and as fundamental as their perturbatively renormalizable counterparts, and the finite number of UV fixed points determines the number of physical parameters. We have searched for non-Gaussian UV fixed points in perturbatively nonrenormalizable (\\(D\\!>\\!4\\))-dimensional Yang-Mills theories, since the prospect of a fundamental extra Figure 4: \\(\\boldsymbol{\\beta_{g^{2}}}\\) **function for SU(3):** similarly to SU(2), a non-Gaussian fixed point exists for \\(D<4\\leq D_{\\rm cr}\\simeq 5.25\\). The critical dimension \\(D_{\\rm cr}\\) as well as the fixed-point values decrease with increasing \\(N\\). (The curves here correspond to \\(\\eta_{3}^{\\rm SU(3)}\\) of Eq. (A.9); the corresponding curves for \\(\\eta_{8}^{\\rm SU(3)}\\) would be slightly deformed towards lower values.) dimensional quantum field theory without the need of a penumbral embedding in a larger framework is promising. Assuming a smooth dependence of the fluctuation effects on \\(D\\) and employing the Wilsonian idea of integrating fluctuations momentum shell by momentum shell, we have developed a simple scenario for possible renormalizability. Already on a heuristic level, this scenario suggests the existence of a critical dimension \\(D_{\\rm cr}\\) below which a non-Gaussian fixed point exists and nonperturbative renormalizability is possible. We have computed \\(D_{\\rm cr}\\) by quantizing the systems with the aid of a nonperturbative RG flow equation for the effective average action. Whereas this technique is equivalent to perturbation theory if expanded around the Gaussian fixed point, it moreover allows for an exploration of a possible non-Gaussian fixed point structure which is inaccessible to perturbation theory. In other words, the RG flow equation can be used to search for a quantizable microscopic action. In practice, this search is performed within an ansatz - a truncation - which should contain the RG \"relevant\" operators. In this work, we have explored a truncation based on an arbitrary function \\(W_{k}\\) of the square of the non-abelian field strength, \\(F^{a}_{\\mu\ u}F^{a}_{\\mu\ u}\\). Even though we have not extracted the RG behavior of the complete function \\(W_{k}\\), we have determined the \\(\\beta_{g^{2}}\\) function for the running coupling from the term linear in \\(F^{a}_{\\mu\ u}F^{a}_{\\mu\ u}\\). Apart from the Gaussian fixed point which is IR attractive in \\(D>4\\), we find a non-Gaussian UV fixed point of the dimensionless gauge coupling \\(g^{2}\\to g^{2}_{*}\\) as long as \\(4<D\\leq D_{\\rm cr}\\) with \\[D_{\\rm cr}^{\\rm SU(2)}\\simeq 5.46,\\quad D_{\\rm cr}^{\\rm SU(3)}\\simeq 5.26\\pm 0.01,\\quad D_{\\rm cr}^{\\rm SU(5)}\\simeq 5.05\\pm 0.05, \\tag{15}\\] where the uncertainty arises from an unresolved color structure. The fact that \\(D_{\\rm cr}>5\\) for all cases studied in this work, SU(\\(N=2,3,5\\)), appears to point to the possibility that (\\(D\\)=5)-dimensional Yang-Mills theories can be asymptotically safe and renormalizable. But in view of the number of approximations involved, improvements are expected to modify these results quantitatively such that \\(D_{\\rm cr}\\) strictly \\(>5\\) should not be rated as a firm prediction. 
At least for intermediate values of the coupling, quantitative improvements are expected from additional low-dimensional operators such as those displayed in Eq. (4). By analogy to the (\(D\)=4)-dimensional case, one may argue that such additional operators contribute positively to the fluctuation part of the \(\beta_{g^{2}}\) function, decreasing the value of \(D_{\rm cr}\). This leads us to the conservative viewpoint that the UV fixed points observed in (\(D\)=5)-dimensional Yang-Mills theory are likely to be an artifact of the approximation, and the computed values for \(D_{\rm cr}\) should be considered as upper bounds. This conclusion is compatible with (most of the) lattice simulations available for \(D=5,6\): in [10, 11, 12], extra-dimensional lattice gauge systems were found to have a weak-coupling "spin-wave" and a strong-coupling "confinement" phase separated by a first-order phase transition. The latter does not allow for a continuum limit that would give rise to a renormalizable quantum field theory. In contrast to the conservative viewpoint, there is yet another alternative explanation for our observation of a non-Gaussian fixed point in \(D=5\). It may be that this fixed point for the running coupling reflects only a one-dimensional projection of a higher-dimensional critical surface \({\cal S}_{\rm UV}\). In other words, there might be a true non-Gaussian fixed point with a larger number \(\Delta_{\rm UV}\) of UV-attractive components, corresponding to a number of \(\Delta_{\rm UV}\) RG "relevant" operators. Since our calculation also involves higher-order operators \((F^{a}_{\mu\nu}F^{a}_{\mu\nu})^{n}\), our truncation could be sensitive to the influence of these operators stabilizing the UV fixed point of the coupling. This would not necessarily be in contradiction to the lattice results, which have employed only the Wilson action or small modifications thereof.6 If the Wilson action is not in the domain of attraction of the true fixed point, i.e., not in the same universality class as the renormalizable action, the line of "constant physics" towards the continuum limit will not be visible on the lattice. If this second alternative turned out to be true, a purely field-theoretic, fundamental and renormalizable extra-dimensional model could be constructed, but a larger number of \(\Delta_{\rm UV}\) physical parameters would have to be fixed for the model to be predictive. For a detailed investigation of this issue, a systematic inclusion of all low-dimensional operators such as those displayed in Eq. (4) seems mandatory. As a final caveat, let us mention that, even if such a renormalizable \(D=5\) theory existed, it would not be immediate that its compactified low-energy limit is effectively four-dimensional and confining. Footnote 6: Only in [12] have two higher-order operators been included, with a negative result for a UV fixed point. But since this result applies to \(D=6\) and SU(\(N=27\) or \(64\)) lattice gauge theory, it is in perfect agreement with our investigation. Up to now, we have only focused on pure gauge theory. In fact, we believe that this is the most stringent test for the existence of a non-Gaussian fixed point in \(D=5,6,\dots\).
Matter fields are expected to make positive contributions to the \(\beta_{g^{2}}\) function, thus lifting the curves and decreasing \(D_{\rm cr}\). We do not have a full nonperturbative proof of this, but this tendency is clearly observed in perturbation theory even at higher loop orders. As a first guess, we have included a fundamental quark loop with \(N_{\rm f}\) flavors in the calculation, and observed that all \(D_{\rm cr}\)'s dropped below 5, except for the case of SU(2) and \(N_{\rm f}=1\), where \(D_{\rm cr}\) stays slightly larger than 5. If this tendency also holds for more reliable computations, extra-dimensional systems with the full standard-model particle content will not be nonperturbatively renormalizable. Let us finally comment on the effects of compactification, which relates the effective four-dimensional low-energy theory to the extra-dimensional theory, be it renormalizable or not. During the transition from low-energy scales, \(k\ll 1/R\), to high-energy scales, \(k\gg 1/R\), separated by the inverse compactification radius \(1/R\), the \(\beta_{g^{2}}\) function is explicitly dependent on \(kR\). Pictorially, this \(\beta_{g^{2}}\) function interpolates between the \(D=4\) curve and the corresponding \(D>4\) curve for increasing \(k\) in a smooth fashion that depends on the details of the boundary conditions. (The ascending curves of Fig. 2 may also be viewed as snapshots of \(\beta_{g^{2}}\) for increasing \(k\).) Starting from the four-dimensional low-energy theory, the coupling first gets weak for increasing \(k\), owing to asymptotic freedom. As soon as the extra dimensions become "visible" due to the fluctuations of the lowest Kaluza-Klein modes, the positive \(\sim g^{2}\) term appears effectively in \(\beta_{g^{2}}\), together with the non-Gaussian UV fixed point. Hence, the coupling grows stronger and quickly approaches the UV fixed-point value. Since the \(\beta_{g^{2}}\) function itself changes its shape with increasing \(kR\), the UV fixed point moves to larger values and so does the coupling. If the theory is renormalizable, \(D\leq D_{\rm cr}\), the UV fixed point remains and marks the limiting value of the dimensionless coupling. If the theory is nonrenormalizable, \(D>D_{\rm cr}\), the fixed point vanishes and the coupling will eventually hit a Landau pole, signaling the onset of "new physics". As is obvious from this discussion, a non-Gaussian UV fixed point does exist at least at intermediate scales \(kR\sim 1\), even in the nonrenormalizable case. Although this "freezes" the coupling at the intermediate scales, it does not help to separate the compactification scale far from the scale of new physics in the nonrenormalizable case, since the UV fixed point vanishes as soon as \(kR\gg 1\), and the coupling will generally grow quickly. As a consequence, this line of argument may serve to exclude extra-dimensional models with perturbative gauge-coupling unification at a high scale \(\sim 10^{16}\,\)GeV, but low-scale extra dimensions separated by many orders of magnitude, \(M_{\rm GUT}R\gg 1\).

## Acknowledgment

The author is grateful to R. Hofmann, J. Jaeckel, U.D. Jentschura, J.M. Pawlowski, Z. Tavartkiladze and C. Wetterich for useful discussions and to J.M. Pawlowski for detailed comments on the manuscript. The author acknowledges financial support by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-2.
## Appendix A Resummation of the anomalous dimension

In this appendix, we list some details of the calculation of the anomalous dimension, taking the leading growth of the coefficients of the series (11) into account. The following formulas should be read side-by-side with the calculations given in [18]. These leading-growth (l.g.) coefficients read for \(D\geq 4\), \[a_{m}^{\rm l.g.} = 4\left(-\frac{8c}{D}\right)^{m-1}\frac{\Gamma(m+\frac{D(D-1)}{4}(N^{2}-1))}{\Gamma(1+\frac{D(D-1)}{4}(N^{2}-1))}\,\Gamma(m+1)\,\tau_{m}\,h_{2m-D/2}\left((D-2)\frac{2^{2m}-2}{\Gamma(2m+1)}B_{2m}-\frac{4}{\Gamma(2m)}\right),\tag{A.1}\] where we abbreviated \(c=(D/2)\zeta(1+D/2)-1>0\), and \(B_{2m}\) are the Bernoulli numbers. Actually, Eq. (A.1) also contains subleading terms, since the last term \(\sim 1/\Gamma(2m)\) is negligible compared to the term \(\sim B_{2m}\) for large \(m\). Nevertheless, we also retain this subleading term, since it contributes significantly to the one-loop \(\beta_{g^{2}}\) function coefficient which we want to maintain in our approximation. Let us first concentrate on SU(2), where the color factor \(\tau_{m}=2\) for all \(m=1,2,\dots\) (for its definition, see Appendix B); let us nevertheless retain the \(N\) dependence in all other terms in order to facilitate the generalization to higher gauge groups. The scheme-dependent coefficient \(h_{2m-D/2}\) can be represented by \[h_{2m-D/2} = \left(D/2-2m\right)\zeta(1-2m+D/2) = \frac{1}{2^{2m-1-D/2}-1}\,\frac{1}{\pi^{2m-D/2}}\,(-\cos D\pi/4)\,(-1)^{m}\int_{0}^{\infty}dt\,t^{2m-D/2}\,\frac{e^{t}}{(e^{t}+1)^{2}},\tag{A.2}\] for the exponential regulator shape function. The last equality holds only for \(D<6\), which will be sufficient for our purposes.8

Footnote 8: For larger extra dimensions, valid representations can be found by partial integration of the \(t\) integral.

The remaining resummation is performed similarly to [18]: we split the anomalous dimension into two parts, \[\eta=\eta_{\rm a}+\eta_{\rm b},\tag{A.3}\] where \(\eta_{\rm a}\) corresponds to the resummation of the term \(\sim B_{2m}\), and \(\eta_{\rm b}\) to the term \(\sim 1/\Gamma(2m)\) in Eq. (A.1), representing the leading and subleading growth, respectively. For resumming \(\eta_{\rm a}\), we use the standard integral representation of the \(\Gamma\) functions, such that all \(m\)-dependent terms lead to the sum \[-\sum_{m=1}^{\infty}\frac{(-q)^{m-1}}{1-2^{D/2+1-2m}}=\frac{1}{2^{D/2-1}-1}+\sum_{j=0}^{\infty}2^{(D/2-2)j}\,\frac{q}{2^{j}+\frac{q}{2^{j}}}=:S_{\rm a}^{D}(q).\tag{A.4}\] The first sum is strictly valid only for \(|q|<1\); however, the second sum is valid for arbitrary \(q\), apart from simple poles at \(q=-2^{2j}\), and rapidly converging, so that this equation should be read from right to left.
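Reading Eq. (A.4) "from right to left" is straightforward to implement numerically. The sketch below (our own, with illustrative truncation orders) evaluates \(S_{\rm a}^{D}(q)\) from the rapidly converging representation and, for small \(|q|\), checks it against a truncation of the asymptotic series on the left-hand side.

```python
import numpy as np

def S_a(D, q, jmax=400):
    """Convergent (right-hand) representation of S_a^D(q) in Eq. (A.4)."""
    j = np.arange(jmax, dtype=float)
    terms = 2.0**((D/2 - 2) * j) * q / (2.0**j + q / 2.0**j)
    return 1.0 / (2.0**(D/2 - 1) - 1.0) + terms.sum()

def S_a_series(D, q, mmax=30):
    """Asymptotic (left-hand) series, usable only for |q| < 1."""
    m = np.arange(1, mmax + 1, dtype=float)
    return -np.sum((-q)**(m - 1) / (1.0 - 2.0**(D/2 + 1 - 2*m)))

# The two columns should agree to high accuracy for small q and 4 <= D < 6.
for D in (4.0, 5.0):
    print(D, S_a(D, 0.1), S_a_series(D, 0.1))
```

For \(D=4\), \(q=0.1\), both representations give \(\approx 1.19\), confirming that the convergent sum reproduces the asymptotic one within its domain of validity.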
With this definition, the leading-growth part of the anomalous dimension can be written as \[\eta_{\rm a}^{{\rm SU}(2),D} = \frac{(D-2)2^{D/2+3}(-\cos D\pi/4)NG}{\Gamma(1+\frac{D(D-1)}{4}(N^{2}-1))\pi^{4-D/2}}\int\limits_{0}^{\infty}dt\,L_{D}(t)\int\limits_{0}^{\infty}ds\,\widetilde{K}_{D}(s)\,\left[S_{\rm a}^{D}\left(\frac{2cGst^{2}}{D\pi^{4}}\right)-\frac{1}{2}\,S_{\rm a}^{D}\left(\frac{cGst^{2}}{2D\pi^{4}}\right)\right],\tag{A.5}\] where the auxiliary functions \(L_{D}(t)\), \(\widetilde{K}_{D}(s)\) are defined as \[L_{D}(t):=\sum_{l=1}^{\infty}\frac{1}{2}\frac{1}{1+\cosh lt}\,\frac{1}{l^{D/2-1}},\quad\widetilde{K}_{D}(s):=s^{\frac{1}{2}[\frac{D(D-1)}{4}(N^{2}-1)+1]}\,K_{\frac{D(D-1)}{4}(N^{2}-1)-1}(2\sqrt{s}),\tag{A.6}\] and \(K_{\nu}\) is the modified Bessel function. For resumming the subleading-growth part \(\eta_{\rm b}\), we use an integral representation of the Euler Beta function for the ratio of \(\Gamma\) functions, and the resulting \(m\) sum can be transformed analogously to Eq. (A.4), yielding \[S_{\rm b}^{D}(q)=\frac{1}{2^{D/2-1}-1}+\sum_{j=0}^{\infty}2^{(D/2-1)j}\left[1-\left(\frac{2^{2j}}{2^{2j}+q}\right)^{\gamma}+\gamma\left(\frac{2^{2j}}{2^{2j}+q}\right)^{\gamma}\frac{q}{2^{2j}+q}\right],\tag{A.7}\] where we abbreviated \(\gamma=1+\frac{D(D-1)}{4}(N^{2}-1)\). The subleading-growth part \(\eta_{\rm b}\) of the anomalous dimension finally reads \[\eta_{\rm b}^{{\rm SU}(2),D}=-\frac{2^{D/2+4}(-\cos D\pi/4)}{(6-D)\pi^{2-D/2}}\,NG\,{\rm Re}\!\int_{0}^{\infty}\frac{d\lambda\,\widetilde{I}^{\frac{6-D}{2}}e^{\widetilde{I}\lambda\frac{2}{6-D}}}{(1+e^{\widetilde{I}\lambda\frac{2}{6-D}})^{2}}\int_{0}^{1}ds\,S_{\rm b}^{D}\!\left(-{\rm i}\frac{2cG}{D\pi^{2}}s(1-s)\lambda^{\frac{4}{6-D}}\right),\tag{A.8}\] where \(\widetilde{I}=(1+{\rm i})/\sqrt{2}\) and \(G=g^{2}/[2(4\pi)^{D/2}]\). In arriving at Eq. (A.8), we implicitly used a principal-value prescription for the poles of \(S_{\rm b}^{D}(q)\) on the negative \(q\) axis. This has been physically motivated in [18] and moreover agrees with systematic studies of the resummation procedure [26]. Both integral representations in Eqs. (A.5), (A.8) are finite, can be evaluated numerically, and reproduce the asymptotic-series coefficients of Eq. (A.1) upon expansion in \(G\sim g^{2}\). For \(D=4\), they agree with the results of [18]. For the gauge group SU(3), we do not have the explicit representation of the color factors \(\tau_{m}\) at our disposal. As discussed in Appendix B, we instead scan the Cartan subalgebra for the possible range of the \(\tau_{m}\). Inserting the extrema \(\tau_{i,3}^{{\rm SU}(3)}\) or \(\tau_{i,8}^{{\rm SU}(3)}\) as found in Eq. (B.4) into Eq. (A.1) allows us to display the anomalous dimension \(\eta^{{\rm SU}(3)}\) in terms of the formulas deduced for SU(2): \[\eta_{3}^{{\rm SU}(3)} = \frac{2}{3}\,\eta^{{\rm SU}(2)}\Big{|}_{N\to 3}+\frac{1}{3}\,\eta^{{\rm SU}(2)}\Big{|}_{N\to 3,\,c\to c/4},\qquad \eta_{8}^{{\rm SU}(3)} = \eta^{{\rm SU}(2)}\Big{|}_{N\to 3,\,c\to 3c/4}.\tag{A.9}\] The notation here indicates that the quantities \(N\) and \(c=(D/2)\zeta(1+D/2)-1\) appearing on the right-hand sides of Eqs. (A.5) and (A.8) should be replaced in the prescribed way. The SU(5) case works similarly with the help of Eq. (B.5).
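The color-factor formulas of Appendix B and the substitution rules of Eq. (A.9) are simple enough to encode directly. In the sketch below the function names are ours, not the paper's, and the SU(2) evaluator for Eqs. (A.5)/(A.8) is left as a user-supplied callable.

```python
# Extremal color factors from Eqs. (B.3)-(B.5); i = 1, 2, ...
def tau_su2(i):    return 2.0
def tau_su3_3(i):  return 2.0 + (1.0 / 4.0)**(i - 1)
def tau_su3_8(i):  return 3.0 * (3.0 / 4.0)**(i - 1)
def tau_su5_24(i): return 5.0 * (5.0 / 8.0)**(i - 1)

# Eq. (A.9): eta^{SU(3)} as weighted SU(2) evaluations, with the
# substitutions encoded as (weight, N, rescaling factor of c) triples.
ETA_SU3_3 = [(2.0/3.0, 3, 1.0), (1.0/3.0, 3, 0.25)]   # eta_3^{SU(3)}
ETA_SU3_8 = [(1.0,     3, 0.75)]                       # eta_8^{SU(3)}

def eta_su3(eta_su2_eval, rules, g2, D):
    """Combine SU(2) evaluations eta_su2_eval(g2, D, N, c_scale) per Eq. (A.9).
    eta_su2_eval is a user-supplied numerical evaluator of Eqs. (A.5)+(A.8)."""
    return sum(w * eta_su2_eval(g2, D, N, cs) for (w, N, cs) in rules)

for i in (1, 2, 3, 5):   # the SU(2) factor stays at 2; the others decay in i
    print(i, tau_su2(i), tau_su3_3(i), tau_su3_8(i), tau_su5_24(i))
```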
## Appendix B Color factors

Gauge group information enters the flow equation via color traces over products of field strength tensors and gauge potentials. For the calculation within the present truncation, it suffices to consider these quantities as pseudo-abelian, pointing into a constant color direction \(n^{a}\). In this case, the color traces reduce to \[n^{a_{1}}n^{a_{2}}\ldots n^{a_{2i}}\,{\rm tr}_{c}[T^{(a_{1}}T^{a_{2}}\ldots T^{a_{2i})}],\tag{B.1}\] where the parentheses at the color indices denote symmetrization. For general gauge groups, these factors are not independent of the direction of \(n^{a}\). Contrary to this, the left-hand side of the flow equation is a function of \(\frac{1}{4}F^{a}_{\mu\nu}F^{a}_{\mu\nu}\), which is independent of \(n^{a}\). Therefore, we do not need the complete factor of Eq. (B.1), but only that part of the symmetric invariant tensor \({\rm tr}_{\rm c}[T^{(a_{1}}\ldots T^{a_{2i})}]\) which is proportional to the trivial one, \[{\rm tr}_{\rm c}[T^{(a_{1}}T^{a_{2}}\ldots T^{a_{2i})}]=\tau_{i}\,\delta_{(a_{1}a_{2}}\ldots\delta_{a_{2i-1}a_{2i})}+\ldots,\tag{B.2}\] where we omitted further nontrivial symmetric invariant tensors. The latter do not contribute to the flow of \(W_{k}(\theta)\), but to that of other operators which do not belong to our truncation. For the gauge group SU(2), all complications are absent, since there are no further symmetric invariant tensors in Eq. (B.2), implying \[\tau^{{\rm SU}(2)}_{i}=2,\quad i=1,2,\ldots\tag{B.3}\] For higher gauge groups, we do not evaluate the \(\tau_{i}\)'s from Eq. (B.2) directly; instead, we explore the possible values of the whole trace of Eq. (B.1) for different choices of \(n^{a}\). For this, we exploit the fact that the color unit vector can always be rotated into the Cartan subalgebra. For SU(3), we choose a color vector \(n^{a}\) pointing into the 3 or 8 direction in color space, representing the two possible extremal cases for which the trace boils down to \[\tau^{{\rm SU}(3)}_{i,3}=2+\frac{1}{4^{i-1}},\quad\tau^{{\rm SU}(3)}_{i,8}=3\,\left(\frac{3}{4}\right)^{i-1}.\tag{B.4}\] We follow the same strategy for SU(5), where the color factors for the 3, 8, 15, and 24 directions reduce to \[\tau^{{\rm SU}(5)}_{i,3}=2+3\,\left(\frac{1}{4}\right)^{i-1},\qquad\tau^{{\rm SU}(5)}_{i,8}=\frac{4}{3}\left(\frac{1}{3}\right)^{i-1}+\frac{2}{3}\,\left(\frac{1}{12}\right)^{i-1}+3\,\left(\frac{3}{4}\right)^{i-1},\] \[\tau^{{\rm SU}(5)}_{i,15}=4\,\left(\frac{2}{3}\right)^{i-1}+\frac{3}{4}\,\left(\frac{3}{8}\right)^{i-1}+\frac{1}{4}\,\left(\frac{1}{24}\right)^{i-1},\qquad\tau^{{\rm SU}(5)}_{i,24}=5\,\left(\frac{5}{8}\right)^{i-1}.\tag{B.5}\] The uncertainty introduced by the artificial \(n^{a}\) dependence of the color factors is responsible for the uncertainties of our results for the SU(3) and SU(5) critical dimension \(D_{\rm cr}\). Obviously, the uncertainty increases with the size of the Cartan subalgebra, i.e., the rank of the gauge group.

## References

* [1] I. Antoniadis, Phys. Lett. B **246**, 377 (1990); A. Pomarol and M. Quiros, Phys. Lett. B **438**, 255 (1998) [arXiv:hep-ph/9806263]; K. R. Dienes, E. Dudas and T. Gherghetta, Nucl. Phys. B **537**, 47 (1999) [arXiv:hep-ph/9806292]; arXiv:hep-ph/9807522.
* [2] Y. Kawamura, Prog. Theor. Phys. **103**, 613 (2000) [arXiv:hep-ph/9902423]; G. Altarelli and F. Feruglio, Phys. Lett.
B **511**, 257 (2001) [arXiv:hep-ph/0102301].
* [3] S. Weinberg, in _C76-07-23.1_ HUTP-76/160, Erice Subnucl. Phys., 1, (1976).
* [4] K. G. Wilson, Phys. Rev. D **7**, 2911 (1973).
* [5] B. Rosenstein, B. J. Warr and S. H. Park, Phys. Rev. Lett. **62**, 1433 (1989); K. Gawedzki and A. Kupiainen, Phys. Rev. Lett. **55**, 363 (1985); C. de Calan, P. A. Faria da Veiga, J. Magnen and R. Seneor, Phys. Rev. Lett. **66**, 3233 (1991).
* [6] O. Lauscher and M. Reuter, Phys. Rev. D **65**, 025013 (2002) [arXiv:hep-th/0108040]; Class. Quant. Grav. **19**, 483 (2002) [arXiv:hep-th/0110021]; W. Souma, Prog. Theor. Phys. **102**, 181 (1999) [arXiv:hep-th/9907027]; R. Percacci and D. Perini, Phys. Rev. D **67**, 081503 (2003) [arXiv:hep-th/0207033]; P. Forgacs and M. Niedermaier, arXiv:hep-th/0207028.
* [7] N. Seiberg, Phys. Lett. B **388**, 753 (1996) [arXiv:hep-th/9608111]; Phys. Lett. B **390**, 169 (1997) [arXiv:hep-th/9609161].
* [8] D. I. Kazakov, arXiv:hep-th/0209100.
* [9] M. E. Peskin, Phys. Lett. B **94**, 161 (1980).
* [10] M. Creutz, Phys. Rev. Lett. **43**, 553 (1979) [Erratum-ibid. **43**, 890 (1979)].
* [11] H. Kawai, M. Nio and Y. Okamoto, Prog. Theor. Phys. **88**, 341 (1992).
* [12] J. Nishimura, Mod. Phys. Lett. A **11**, 3049 (1996) [arXiv:hep-lat/9608119].
* [13] S. Ejiri, J. Kubo and M. Murata, Phys. Rev. D **62**, 105025 (2000) [arXiv:hep-ph/0006217]; S. Ejiri, S. Fujimoto and J. Kubo, Phys. Rev. D **66**, 036002 (2002) [arXiv:hep-lat/0204022].
* [14] K. Farakos, P. de Forcrand, C. P. Korthals Altes, M. Laine and M. Vettorazzo, Nucl. Phys. B **655**, 170 (2003) [arXiv:hep-ph/0207343].
* [15] K. R. Dienes, E. Dudas and T. Gherghetta, arXiv:hep-th/0210294; F. Paccetti Correia, M. G. Schmidt and Z. Tavartkiladze, arXiv:hep-ph/0302038.
* [16] N. V. Krasnikov, Phys. Lett. B **273**, 246 (1991).
* [17] C. Wetterich, Phys. Lett. B **301**, 90 (1993); Nucl. Phys. B **352**, 529 (1991); for a review, see J. Berges, N. Tetradis and C. Wetterich, arXiv:hep-ph/0005122; D. F. Litim and J. M. Pawlowski, arXiv:hep-th/9901063.
* [18] H. Gies, Phys. Rev. D **66**, 025006 (2002) [arXiv:hep-th/0202207].
* [19] M. Reuter and C. Wetterich, Phys. Rev. D **56**, 7893 (1997) [arXiv:hep-th/9708051].
* [20] M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994); F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000) [arXiv:hep-th/0009110].
* [21] M. Bonini, M. D'Attanasio and G. Marchesini, Nucl. Phys. B **421**, 429 (1994) [arXiv:hep-th/9312114]; U. Ellwanger, Phys. Lett. B **335**, 364 (1994) [arXiv:hep-th/9402077].
* [22] T. R. Morris, Nucl. Phys. B **573**, 97 (2000) [arXiv:hep-th/9910058]; S. Arnone, A. Gatti and T. R. Morris, Phys. Rev. D **67**, 085003 (2003) [arXiv:hep-th/0209162].
* [23] D. F. Litim and J. M. Pawlowski, Phys. Rev. D **66**, 025030 (2002) [arXiv:hep-th/0202188]; Phys. Lett. B **546**, 279 (2002) [arXiv:hep-th/0208216].
* [24] G. Hardy, "Divergent Series," Oxford Univ. Press (1949); C. M. Bender and S. A. Orszag, "Advanced Mathematical Methods for Scientists and Engineers," McGraw-Hill, New York (1978).
* [25] N. Arkani-Hamed, A. G. Cohen and H. Georgi, Phys. Rev. Lett. **86**, 4757 (2001) [arXiv:hep-th/0104005]; C. T. Hill, S. Pokorski and J. Wang, Phys. Rev. D **64**, 105005 (2001) [arXiv:hep-th/0104035].
* [26] U. D. Jentschura, E. J. Weniger and G. Soff, J. Phys. G **26**, 1545 (2000) [arXiv:hep-ph/0005198]; U. D. Jentschura, habilitation thesis, Dresden Tech. U. (2002).
We analyze the possibility of nonperturbative renormalizability of gauge theories in \\(D>4\\) dimensions. We develop a scenario, based on Weinberg's idea of asymptotic safety, that allows for renormalizability in extra dimensions owing to a non-Gaussian ultraviolet stable fixed point. Our scenario predicts a critical dimension \\(D_{\\rm cr}\\) beyond which the UV fixed point vanishes, such that renormalizability is possible for \\(D\\leq D_{\\rm cr}\\). Within the framework of exact RG equations, the critical dimension for various SU(\\(N\\)) gauge theories can be computed to lie near five dimensions: \\(5\\lesssim D_{\\rm cr}<6\\). Therefore, our results exclude nonperturbative renormalizability of gauge theories in \\(D=6\\) and higher dimensions.
# Pierre Auger Atmosphere-Monitoring Lidar System

A. Filipcic, M. Horvat, D. Veberic, D. Zavrtanik, M. Zavrtanik, M. Chiosso, R. Mussa, G. Sequeiros, M.A. Mostafa, M.D. Roberts

(1) Laboratory for Astroparticle Physics, Nova Gorica Polytechnic, Slovenia (2) INFN-Torino, Italy (3) University of New Mexico, Albuquerque, USA

## 1 Introduction

The error in shower energy estimation is directly proportional to the uncertainty in the optical depth between the fluorescence-light origin (within the extensive air shower) and the fluorescence detector (FD) cameras [5]. Although reasonable predictions can be obtained using atmospheric models (e.g. the US Standard Atmosphere), they do not satisfactorily cover seasonal variations nor the occurrence of aerosol layers, which typically accompany windy days and reach up to 3 km above the ground. As the calorimeter of the FD, the atmosphere thus requires on-line, or at least periodic, monitoring of its optical properties. The lidar seems to be a reasonable choice for this task, and it is adopted not only by the Pierre Auger project but apparently also by other cosmic-ray related experiments [4]. In the following sections, the construction and analysis methods used for the reconstruction of the lidar signal are presented.

## 3 Analysis

The so-called _lidar equation_ [1, 2, 3], describing the returned photon flux, in fact represents an under-determined system of nonlinear equations; an explicit solution of the equation therefore does not exist. In order to obtain useful results, certain assumptions on the optical properties have to be made. One way is to postulate a simple power-law expression relating the backscatter coefficient \(\beta\) to the attenuation (extinction) \(\alpha\), as used by the Klett method [1]. In the case of the Fernald method [2], both optical properties are separated into a molecular and an aerosol part. The molecular part, i.e. Rayleigh scattering, is approximated with the assumed atmospheric model, and the lidar equation is solved for the aerosol part. In Fig. 2, the Fernald method is applied to a representative measurement taken with the Los Leones lidar station of the Pierre Auger observatory. Integration of the attenuation \(\alpha(h)\) results in the _vertical optical depth_ (VOD) \(\tau\), which directly enters the estimation of the amount of fluorescence light [5], as measured by the FD. Due to the aerosol layer near the ground (in Fig. 2-left reaching up to 1.5 km), the resulting VOD clearly differs from the predictions of the (clean) atmosphere model (solid line). Judging from Fig. 2-right, the difference \(\Delta\tau\) between the atmospheric model prediction and the result of the Fernald method can be as high as several tenths of a unit. Neglecting such differences produces a systematic underestimation of the shower energy (and correspondingly of the energy of the primary particle), since to first order \(\Delta E\propto\Delta\tau\).

Figure 2: Total (molecular and aerosol) attenuation coefficient \(\alpha(h)\) obtained with Fernald inversion of a vertical shot of the Los Leones lidar (left); corresponding vertical optical depth \(\tau(h)\) (right). For comparison, attenuation and optical depth as predicted by the US Standard Atmosphere (1976) model are drawn with a solid line.

Due to the fairly calm and stratified atmosphere above the huge plain (Pampa Amarilla) where the Pierre Auger observatory is placed, an adequate assumption of horizontal invariance can be made. Under such an assumption, the lidar equation is solved in a unique way with the two- or multi-angle method [3] for both quantities, the backscatter coefficient and the VOD measured relative to some reference height (\(h_{0}\), common in all lidar signals taken at different angles).
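As a schematic illustration of how an aerosol layer biases the VOD, the following sketch integrates an invented attenuation profile; the molecular and aerosol terms below are made-up placeholders, not Auger data or the actual Fernald retrieval.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Schematic VOD computation: tau(h) = integral_0^h alpha(h') dh'.
# Both profile terms are invented: a Rayleigh-like exponential decrease
# plus a boundary-layer aerosol term concentrated near the ground.
h = np.linspace(0.0, 10.0, 1000)           # altitude [km]
alpha_mol = 1.2e-2 * np.exp(-h / 8.0)      # [1/km], model "molecular" part
alpha_aer = 8.0e-2 * np.exp(-h / 1.5)      # [1/km], "aerosol layer" part

# Cumulative trapezoidal integration gives tau(h) for both cases.
tau_clean = cumulative_trapezoid(alpha_mol, h, initial=0.0)
tau_total = cumulative_trapezoid(alpha_mol + alpha_aer, h, initial=0.0)

# The aerosol layer shifts tau by roughly a tenth of a unit at 3 km here,
# which propagates linearly into the shower-energy estimate (dE ~ dtau).
print("Delta tau at 3 km:", tau_total[h <= 3.0][-1] - tau_clean[h <= 3.0][-1])
```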
In Fig. 3, multi-angle reconstructions of simultaneous lidar signals from two telescopes in the case of a relatively clear atmosphere are compared to the predictions of the atmospheric model. Note that the relative backscatter coefficient is proportional to the relative atmospheric density, so that the lidar system can also serve as a monitoring tool for the atmospheric grammage, which is important for the description of lateral shower development.

## 4 Conclusion

Aerosol layers near the ground, the occurrence of haze/clouds, and to a lesser extent seasonal fluctuations greatly influence the shower energy estimation obtained from the FD measurements. Periodic monitoring of these sensitive properties is thus unavoidable for any type of detection of the fluorescence light originating from air showers in large atmospheric volumes. A steerable lidar system can be successfully used for such a demanding task. Nevertheless, careful selection, optimization, and calibration of the corresponding lidar analysis methods is strongly advised.

## References

* [1] Klett J.D. 1981, Appl. Opt. 20, 211
* [2] Fernald F.G. 1984, Appl. Opt. 23, 652
* [3] Filipcic A. et al. 2003, Astropart. Phys. 18, 501
* [4] Yamamoto T. et al. 2002, Nucl. Instr. and Meth. A 488, 191
* [5] Argiro S. 2003, these Proceedings

Figure 3: Vertical optical depth \(\tau(h)\) (left) and relative backscatter coefficient \(\ln\beta(h)/\beta_{0}\) (right) obtained with multi-angle analysis of lidar scans. Points are reconstructions of simultaneous signals from two telescopes; the solid line represents the prediction of the US Standard Atmosphere (1976) model; dashed: the same model with depolarization effects included.
The fluorescence-detection techniques of cosmic-ray air-shower experiments require precise knowledge of atmospheric properties to reconstruct air-shower energies. Up to now, the atmosphere in desert-like areas was assumed to be stable enough that occasional calibration of the atmospheric attenuation would suffice to reconstruct shower profiles. However, serious difficulties have been reported in recent fluorescence-detector experiments, causing systematic errors in cosmic-ray spectra at extreme energies. Therefore, a scanning backscatter lidar system has been constructed for the Pierre Auger Observatory in Malargue, Argentina, where on-line atmospheric monitoring will be performed. One lidar system is already deployed at the Los Leones fluorescence detector (FD) site, and a second one is currently (April 2003) under construction at the Coihueco site. In addition to the established methods, a novel analysis method assuming horizontal invariance and using multi-angle measurements is shown to unambiguously measure the optical depth, as well as the absorption and backscatter coefficients.
# Universal early-time response in high-contrast electromagnetic scattering

Peter B. Weichman, ALPHATECH, Inc., 6 New England Executive Place, Burlington, MA 01803

###### pacs: 03.50.De, 41.20.-q, 41.20.Jb

Remote detection and classification of buried targets is a key goal in a number of important environmental geophysical applications, such as toxic waste drum, landmine, and unexploded ordnance (UXO) remediation [1]. A common tool used for detection of highly conducting metallic targets is the time-domain electromagnetic (TDEM) method, in which an inductive coil is used to transmit EM pulses into the ground. Following each pulse, the voltage \(V(t)\) induced by the scattered field is detected by a receiver coil [2]. The magnitude and lifetime of the currents induced in the target, and hence \(V(t)\), increase with its size and conductivity. Standard TDEM sensors are capable of resolving anomalies from very small (of order 1 gram) metal targets [3], and are therefore well suited to detection of relatively large buried conducting bodies such as UXO. However, since TDEM is a very low frequency (typically of order 100 Hz) method, its spatial resolution (limited by the target depth and the sensor diameter) is also very low [4]. Therefore, the raw signal amplitude and lifetime provide gross measures of the target size and conductivity, but give no direct information about its geometry and other physical characteristics that would enable _discrimination_ between, say, UXO and similarly sized clutter. Lacking direct target geometry signatures in TDEM data (analogous to, e.g., optical and radar images of unoccluded targets), one seeks _indirect_ measures via more detailed analysis of \(V(t)\). This signal contains information about both _intrinsic_ (target size, shape, geometry, and other physical characteristics) and _extrinsic_ (relative target-sensor orientation, transmitter and receiver coil geometries, pulse waveform, etc.) properties, and the key to discrimination is the extraction of the former from the "background" of the latter. We shall show that such an analysis divides naturally into early, intermediate and late time domains. Intermediate time is characterized by a finite superposition of exponential decays, the slowest of which eventually dominates and defines late time. Early time is characterized by an essentially infinite number of exponential decays which superimpose to generate a \(1/\sqrt{t}\) universal power law divergence in \(V(t)\) [5]. The importance of this latter interval is greatly enhanced for ferrous targets whose response is so slow that it may in fact comprise the _full measured range_ of \(V(t)\). To focus the discussion consider the following model. At low frequencies the dielectric function in the ground and in the target is dominated by its imaginary part, \(\epsilon=4\pi i\sigma/\omega\) [6], where \(\sigma({\bf x})\) is the dc conductivity (in Gaussian units), and the Maxwell equations may be reduced to a single equation for the vector potential, \[\nabla\times\left(\frac{1}{\mu}\nabla\times{\bf A}\right)+\frac{4\pi\sigma}{c^{2}}\partial_{t}{\bf A}=\frac{4\pi}{c}\mathbf{j}_{S}, \tag{1}\] with magnetic induction \({\bf B}=\nabla\times{\bf A}\) and gauge chosen so that the electric field is \({\bf E}=-(1/c)\partial_{t}{\bf A}\).
The transmitter loop is modelled by the source current density \(\mathbf{j}_{S}({\bf x},t)=I_{0}(t){\bf C}_{T}({\bf x})\), where the current \(I_{0}\) consists of a periodic sequence of rapidly terminated pulses, and \({\bf C}_{T}({\bf x})\) defines the transmitter loop. The magnetic field is \({\bf H}={\bf B}/\mu\), where \(\mu({\bf x})\) is the (relative) permeability. The conductivity and permeability are separated into background (\(\sigma_{b}({\bf x})\), \(\mu_{b}({\bf x})\)) and conducting target (\(\sigma_{c}({\bf x})\), \(\mu_{c}({\bf x})\)) components, where \(\sigma_{c}\), \(\mu_{c}\) vanish outside the target volume \(V_{c}\), and it is assumed only that \(\sigma_{b}/\sigma_{c}\ll 1\). Equation (1) is a vector diffusion equation with diffusion constant \(D=c^{2}/4\pi\mu\sigma\). Typical values are \(D_{b}=8.0\times 10^{10}\) cm\({}^{2}\)/s for a nonmagnetic background with resistivity of 10 \(\Omega\)m; \(D_{c}=2.3\times 10^{2}\) cm\({}^{2}\)/s for an aluminum target with resistivity \(2.8\times 10^{-8}\) \(\Omega\)m; and \(D_{c}=4.0\) cm\({}^{2}\)/s for a steel target with relative permeability 200 and resistivity \(8.9\times 10^{-8}\) \(\Omega\)m. EM signal propagation distance \(d\) in time \(t\) may be estimated via \(d\sim\sqrt{Dt}\). Early-time results will require that \(\tau_{b}=R^{2}/D_{b}\ll\tau_{c}=L_{c}^{2}/D_{c}\), where \(R\) is the distance between the sensor and the target, and \(L_{c}\) is the latter's linear size: target-sensor propagation time should be instantaneous on the time scale of the electrodynamics of the target itself [7]. The associated condition \(R\ll L_{c}\sqrt{D_{b}/D_{c}}\) is easily satisfied for centimeter scale targets at tens of meters depth, and is even less stringent for larger targets. The off-ramp time, \(\tau_{r}\), for the transmitted pulse is assumed to satisfy \(\tau_{r}\ll\tau_{c}\), so that pulse termination also occurs essentially instantaneously on the scale of the target dynamics (no particular relation between \(\tau_{r}\) and \(\tau_{b}\) is required). In order to further elucidate the various time scales in the problem, consider the homogeneous version of (1), valid between pulses. The general solution takes the form of a superposition of exponentially decaying eigenmodes \[{\bf A}({\bf x},t)=\sum_{n=1}^{\infty}A_{n}{\bf a}^{(n)}({\bf x})e^{-\lambda_{n}(t-t_{0})} \tag{2}\] in which \(t_{0}\) marks the beginning of the free decay window, and the mode shapes \({\bf a}^{(n)}\) and decay rates \(\lambda_{n}\) satisfy the eigenvalue equation \[\nabla\times\left(\frac{1}{\mu}\nabla\times{\bf a}^{(n)}\right)=\frac{4\pi\sigma\lambda_{n}}{c^{2}}{\bf a}^{(n)}. \tag{3}\] These modes (which may be orthonormalized by noting that \(\sqrt{\sigma}{\bf a}^{(n)}\) are eigenfunctions of a self-adjoint operator) correspond to special current density patterns \({\bf j}^{(n)}({\bf x})=(\lambda_{n}/c)\sigma({\bf x}){\bf a}^{(n)}({\bf x})\) with decaying amplitude, but time-independent spatial structure [8]. The spectrum \(\{\lambda_{n}\}\) is bounded below, with fundamental decay rate \(\lambda_{1}\sim 1/\tau_{c}\) governed by the target size, but unbounded from above [9; 10; 11], with more rapidly decaying modes having spatial structure on ever smaller scales.
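A quick numerical check of the quoted diffusion constants is possible using the SI equivalent \(D=1/(\mu_{0}\mu_{r}\sigma)\) of the Gaussian formula; the sketch below (our own, using the resistivities stated above) reproduces the stated magnitudes when expressed in cm\({}^{2}\)/s.

```python
import numpy as np

# D = c^2 / (4 pi mu sigma) in Gaussian units is D = 1/(mu_0 mu_r sigma) in SI.
mu0 = 4e-7 * np.pi                      # vacuum permeability [H/m]

cases = {
    # name: (resistivity [Ohm m], relative permeability), values from the text
    "ground (10 Ohm m)": (10.0, 1.0),
    "aluminum":          (2.8e-8, 1.0),
    "steel":             (8.9e-8, 200.0),
}
for name, (rho, mu_r) in cases.items():
    D = rho / (mu0 * mu_r)              # [m^2/s]; steel comes out ~3.5 cm^2/s,
    print(f"{name}: D = {D:.2e} m^2/s = {D*1e4:.2e} cm^2/s")  # close to 4.0
```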
Since \\(\\sigma_{b}\\ll\\sigma_{c}\\) currents in the background are negligible compared to those in the target, and it follows that \\(\\lambda_{n}\\), as well as the _internal_ structure of \\({\\bf a}^{(n)}({\\bf x})\\), \\({\\bf x}\\in V_{c}\\), are essentially independent of the background [12]. In this sense \\(\\lambda_{n}\\) and \\({\\bf a}^{(n)}\\) are _intrinsic_ properties of the target. Explicitly, one finds \\[A_{n} = \\frac{4\\pi}{c^{2}}I_{n}\\int_{C_{T}}{\\bf a}^{(n)*}({\\bf x})\\cdot d {\\bf l}\\] \\[I_{n} = \\int_{-\\infty}^{t_{0}}dt^{\\prime}I_{0}(t^{\\prime})e^{-\\lambda_{n} (t_{0}-t^{\\prime})}, \\tag{4}\\] where the transmitter coil here is an idealized 1D curve \\(C_{T}\\). If the coil has \\(N_{T}\\) windings then \\(I_{0}(t)=N_{T}i_{0}(T)\\), where \\(i_{0}\\) is the actual current. The voltage measured in the receiver loop is then \\[V(t) = \\sum_{n=1}^{\\infty}V_{n}e^{-\\lambda_{n}(t-t_{0})}\\] \\[V_{n} = \\frac{\\lambda_{n}N_{R}}{c}A_{n}\\int_{C_{R}}{\\bf a}^{(n)}({\\bf x} )\\cdot d{\\bf l}, \\tag{5}\\] in which \\(C_{R}\\) is the idealized 1D receiver loop and \\(N_{R}\\) is the number of windings. The excitation coefficients \\(A_{n}\\) and \\(V_{n}\\) depend on both intrinsic (eigenmode) and extrinsic (transmitter/receiver loop geometry, position, orientation, etc.) information. Given only \\(V(t)\\), absent any information regarding the measurement geometry, target classification relies entirely on the extractable subset of decay rates \\(\\lambda_{n}\\). The mathematical problem is equivalent to the famous \"Can you hear the shape of a drum?\" (i.e., to what extent is the shape of a struck drumhead determined by its frequency spectrum?), but is practically much more difficult because no analogue of the Fourier transform exists for directly estimating the \\(\\lambda_{n}\\). In contrast, if detailed measurement information is available, direct prediction of the amplitudes, and hence of the full signal \\(V(t)\\) is possible. A classification scheme may then be developed based on a search for the target model that directly minimizes the difference between the measured and predicted data [11], thus circumventing the (generally unstable) problem of direct estimation of \\(\\{\\lambda_{n}\\}\\) from noisy data. The number of substantially excited modes in (2) depends on \\(\\tau_{r}\\) (50-100 \\(\\mu\\)s in many commercial systems). Roughly, the terminating pulse will excite a subset (depending on the extrinsic parameters) of those modes with \\(\\lambda_{n}\\lesssim\\lambda_{r}\\equiv 1/\\tau_{r}\\). The higher order modes will decay very rapidly, but still contribute strongly at early time \\(t-t_{0}={\\cal O}(\\tau_{r})\\). Realistically, one can hope to accurately compute only the first few hundred modes [10; 11]. If the largest computable decay rate \\(\\lambda_{\\rm max}\\) is smaller than \\(\\lambda_{r}\\), then the interval \\(0\\leq t-t_{0}\\leq 1/\\lambda_{\\rm max}\\) will not be accurately modelled. For ferrous targets it is often the case that \\(1/\\lambda_{1}\\)_exceeds_ the measurement window and the response is _entirely_ early time. The remainder of this paper is therefore concerned with the development of a complementary theory that deals with this interval. By combining this theory with the mode analysis, a comprehensive model of the entire time-domain signal emerges. The analysis proceeds in three steps. 
(1) An "initial condition" for the free dynamics, consisting of a pattern of currents confined to the surface of the target, is computed. (2) The time-development of this surface current, namely its diffusion into the interior of the target, is computed. (3) Finally, this solution is used to compute the external field generated at the sensor.

_Step 1:_ The rapid quenching of the transmitter current leads to an outgoing EM pulse that scatters off the target in a complicated way, but exits the target region by some transient time \(t_{\rm tr}=t_{0}+\tau_{\rm tr}\), with \(\tau_{\rm tr}={\cal O}(\tau_{b})\). The assumption \(\tau_{b}\ll\tau_{c}\) implies that the internal field \({\bf A}({\bf x},t_{0}-\tau_{r})\) just prior to the pulse termination remains essentially fixed during the interval \(-\tau_{r}<t-t_{0}<\tau_{\rm tr}\), responding only in a thin shell near the boundary, \(\partial V_{c}\). More precisely, at high frequencies where the target skin depth \(\delta_{c}=\sqrt{2D_{c}/\omega}\) is much smaller than the scale of tangential variation of \({\bf A}\), the internal field near the surface, with local normal \({\bf\hat{n}}\), takes the form [13] \[{\bf A}({\bf x},\omega)={\bf A}^{\parallel}({\bf r},\omega)e^{-|z|\sqrt{-i\omega/D_{c}}}, \tag{6}\] in which \(z\) is the coordinate along \({\bf\hat{n}}\), \({\bf r}\) is orthogonal to it, and \({\bf\hat{n}}\cdot{\bf A}^{\parallel}=0\). Continuity of \({\bf\hat{n}}\times{\bf A}\) implies that \({\bf A}^{\parallel}\) is also the tangential component of the external field. In the time domain, (6) becomes \[{\bf A}({\bf x},t) = {\bf A}_{0}({\bf x},t)+\Delta{\bf A}({\bf x},t) \tag{7}\] \[\Delta{\bf A}({\bf x},t) = \int_{t_{0}-\tau_{r}}^{t}dt^{\prime}{\bf A}^{\parallel}({\bf r},t^{\prime})\frac{|z|}{t-t^{\prime}}\frac{e^{-z^{2}/4D_{c}(t-t^{\prime})}}{\sqrt{4\pi D_{c}(t-t^{\prime})}},\] valid for \(t-t_{0}\ll\tau_{c}\), demonstrating the diffusion of the signal inwards from the surface. The current density is given by the same expression, but with \({\bf A}^{\parallel}\) replaced by \(\sigma_{c}{\bf E}^{\parallel}=-(\sigma_{c}/c)\partial_{t}{\bf A}^{\parallel}\). Integrating over \(z\), at \(t_{\rm tr}\) there is an effective surface current, \[{\bf K}({\bf r},t_{\rm tr})=\int_{t_{0}-\tau_{r}}^{t_{0}+\tau_{\rm tr}}\frac{dt^{\prime}}{2\pi}\sqrt{\frac{\sigma_{c}}{\mu_{c}(t-t^{\prime})}}\,\partial_{t^{\prime}}{\bf A}^{\parallel}({\bf r},t^{\prime}) \tag{8}\] confined to a thin shell with width \(\sqrt{D_{c}(\tau_{r}+\tau_{\rm tr})}\). Equation (8) provides a rigorous foundation for \({\bf K}\), expressing it in terms of the external field at the boundary, but the latter has no simple form and is generally unknown. We now describe an alternate procedure for its direct computation via a self-consistency argument. At time \(t_{\rm tr}\), (1) is to be solved with \({\bf j}_{S}=0\). Since all background transients have died out, the \(\sigma\partial_{t}{\bf A}\) term is of relative order \(D_{c}R^{2}/D_{b}L_{c}^{2}\) compared to the curl term and may be dropped. It follows that \({\bf H}=\mu_{b}^{-1}\nabla\times{\bf A}=-\nabla\Phi\) is the gradient of a magnetic potential satisfying \[\nabla\cdot(\mu_{b}\nabla\Phi)=0. \tag{9}\]
\\tag{9}\\] The solution \\(\\Phi_{0}\\) to this equation must satisfy appropriate boundary conditions on \\(\\partial V_{t}\\), namely \\({\\bf\\hat{n}}\\cdot(\\mu_{b}{\\bf H}_{b}-\\mu_{c}{\\bf H}_{c})=0\\) and \\({\\bf\\hat{n}}\\times({\\bf H}_{b}-{\\bf H}_{c})=(4\\pi/c){\\bf K}\\)[13]. In both cases, \\({\\bf H}_{c}=\ abla\\times{\\bf A}_{c}\\) is obtained from the initial internal field \\({\\bf A}({\\bf x},t_{0}-\\tau_{r})\\) evaluated at the boundary. The first condition imposes a unique solution on \\(\\Phi\\) via the Neumann boundary condition \\[-{\\bf\\hat{n}}\\cdot\ abla\\Phi_{0}=\\frac{\\mu_{c}}{\\mu_{b}}{\\bf\\hat{n}}\\cdot{\\bf H }_{c}({\\bf r},z=0^{-}), \\tag{10}\\] with formal solution \\[\\Phi_{0}({\\bf x})=\\int_{\\partial V_{e}}d^{2}r^{\\prime}g_{N}({\\bf x},{\\bf r}^{ \\prime})\\frac{\\mu_{c}}{\\mu_{b}}{\\bf\\hat{n}}\\cdot{\\bf H}_{c}({\\bf r}^{\\prime}), \\tag{11}\\] where \\(g_{N}\\) is the Neumann green function satisfying \\(-\ abla\\cdot(\\mu_{b}\ abla g_{N})=\\delta({\\bf x}-{\\bf x}^{\\prime})\\) with boundary condition \\({\\bf\\hat{n}}\\cdot\ abla g_{N}=0\\). The second condition determines \\({\\bf K}\\): \\[{\\bf K}=-\\frac{c}{4\\pi}{\\bf\\hat{n}}\\times(\ abla\\Phi_{0}+{\\bf H}_{c}). \\tag{12}\\] _Step 2:_ In order to investigate the subsequent evolution of the surface current \\({\\bf K}\\) we take advantage of the rapid variation of the fields near the surface with \\(z\\). Thus, the \\(z\\)-derivatives dominate (1), and to leading order in the small parameter \\(\\epsilon=\\sqrt{D_{c}(t-t_{\\rm tr})/L_{c}^{2}}\\) one need only solve the one-dimensional diffusion equation \\[D_{c}\\partial_{z}^{2}{\\bf E}^{\\perp}+\\partial_{t}{\\bf E}=0,\\ z<0, \\tag{13}\\] with initial condition \\({\\bf E}(t_{\\rm tr})=\\sigma_{c}^{-1}{\\bf K}\\delta(z)\\). Here \\({\\bf E}^{\\perp}={\\bf E}-{\\bf\\hat{n}}({\\bf\\hat{n}}\\cdot{\\bf E})\\) is the tangential part of \\({\\bf E}\\), and \\(\\mu,\\sigma,D\\) are treated as constants on either side of the boundary. Since the external field varies only on the scales \\(L_{c},R\\), to leading order one has \\(\\partial_{z}{\\bf E}^{\\perp}(z=0^{+})=0\\). Continuity of \\({\\bf E}^{\\perp}\\) therefore imposes the Neumann boundary condition \\(\\partial_{z}{\\bf E}^{\\perp}(z=0^{-})=0\\). The solution to (13) is therefore \\[{\\bf E}({\\bf x},t)={\\bf E}_{0}({\\bf x},t)+\\frac{2}{\\sigma_{c}}\\frac{e^{-z^{2}/4 D_{c}(t-t_{\\rm tr})}}{\\sqrt{4\\pi D_{c}(t-t_{\\rm tr})}},\\ z<0, \\tag{14}\\] corresponding to a diffusive Gaussian spread with rapid \\(z\\)-dependence is on the scale \\(\\sqrt{D_{c}(t-t_{\\rm tr})}=\\epsilon L_{c}\\). By integrating with respect to time, and enforcing the condition that \\({\\bf A}\\) should approach the background solution \\({\\bf A}_{0}\\) for large \\(z/\\epsilon L_{c}\\), one obtains \\[\\Delta{\\bf A}({\\bf x},t) = \\frac{4\\pi\\mu_{c}}{c}{\\bf K}({\\bf r})\\left[4D_{c}(t-t_{\\rm tr}) \\frac{e^{-z^{2}/4D_{c}(t-t_{\\rm tr})}}{\\sqrt{4\\pi D_{c}(t-t_{\\rm tr})}}\\right. \\tag{15}\\] \\[\\left.\\hskip 14.226378pt-\\ |z|{\\rm erfc}\\left(\\frac{|z|}{\\sqrt{4D_{c}(t-t_{\\rm tr })}}\\right)\\right],\\] where \\({\\rm erfc}(z)=(2/\\sqrt{\\pi})\\int_{z}^{\\infty}e^{-s^{2}}ds\\) is the complementary error function. Since \\({\\bf K}\\) is, in fact, spread over a width \\(\\sqrt{D_{c}\\tau_{\\rm tr}}\\), equations (14) and (15) are accurate only in the range \\(\\tau_{\\rm tr}\\ll t-t_{0}\\ll\\tau_{c}\\) where the precise microscopic structure (7) has been washed out by the diffusion kernel. 
_Step 3:_ Equation (14) evaluated at \(z=0^{-}\) provides the necessary boundary condition for evaluation of the external field to leading order in \(\epsilon\). Note that \({\bf E}(z=0^{-})\approx{\bf K}\sqrt{4\mu_{c}/\sigma_{c}c^{2}(t-t_{\rm tr})}\) _diverges_, and continuity of \({\bf E}^{\perp}\) leads one to expect a corresponding divergence in the external electric field. We exhibit this formally through a correction \(\Delta\Phi\) to the magnetic potential. Thus, the normal component of the curl of (15) leads to the boundary value \({\bf\hat{n}}\cdot{\bf B}(z=0^{-})={\bf\hat{n}}\cdot({\bf B}_{0}+\Delta{\bf B})\), where \[{\bf\hat{n}}\cdot\Delta{\bf B}=4\sqrt{t-t_{\rm tr}}\,{\bf\hat{n}}\cdot\nabla\times\left(\sqrt{\mu_{c}/\sigma_{c}}\,{\bf K}\right) \tag{16}\] involves only derivatives with respect to the tangential coordinate \({\bf r}\), and is valid even if \(\mu_{c},\sigma_{c}\) vary on the scale \(L_{c}\). We therefore obtain \(\Phi=\Phi_{0}+\Delta\Phi\), with boundary condition \(-\mu_{b}{\bf\hat{n}}\cdot\nabla\Delta\Phi={\bf\hat{n}}\cdot\Delta{\bf B}\), and hence to \({\cal O}(\epsilon)\), \[\Delta\Phi({\bf x})=\int d^{2}r^{\prime}g_{N}({\bf x},{\bf r}^{\prime})\frac{1}{\mu_{b}}{\bf\hat{n}}\cdot\Delta{\bf B}({\bf r}^{\prime}), \tag{17}\] which is proportional to \(\sqrt{t-t_{\rm tr}}\). The correction to the external vector potential is obtained by solving the auxiliary pair of equations \[\nabla\times\Delta{\bf A} = -\mu_{b}\nabla\Delta\Phi,\qquad \nabla\cdot(\sigma_{b}\Delta{\bf A}) = 0, \tag{18}\] and is clearly also proportional to \(\sqrt{t-t_{\rm tr}}\). The electric field correction \(\Delta{\bf E}=-(1/c)\partial_{t}\Delta{\bf A}\propto(t-t_{\rm tr})^{-1/2}\) therefore has the promised square root early time divergence. Measurements of magnetic field or voltage (via the time derivative of the integral of the magnetic flux through the receiver loop area) follow directly from (17). We end by illustrating the early time behavior using exact analytical results for a homogeneous sphere of radius \(a\) in a homogeneous background [14]. We consider also an initial static transmitted field, so that the initial magnetic field is everywhere described by a scalar potential. The initial solution is a superposition of spherical harmonics, \[\Phi_{\rm init}^{lm}=Y_{lm}\left\{\begin{array}{ll}(r/a)^{l},&r<a\\ b_{\rm init}^{lm}(r/a)^{l}+c_{\rm init}^{lm}(a/r)^{l+1},&r>a,\end{array}\right. \tag{19}\] with \(c_{\rm init}^{lm}=1-b_{\rm init}^{lm}=(1-\mu_{c}/\mu_{b})\frac{l}{2l+1}\) (\(l=1\) corresponds to the standard case of a uniform illumination field, leading to a dipole response). At \(t_{\rm tr}\) the \(r<a\) solution remains the same, while \(b_{0}^{lm}\) vanishes. The boundary condition (10) then leads to \(\Phi_{0}^{lm}=c_{0}^{lm}Y_{lm}\), with \(c_{0}^{lm}=-l\mu_{c}/(l+1)\mu_{b}\). From (12), the surface current is \[{\bf K}^{lm}=\frac{ic}{4\pi a}\left(1+\frac{l}{l+1}\frac{\mu_{c}}{\mu_{b}}\right)\sqrt{l(l+1)}\,{\bf X}_{lm}. \tag{20}\]
\\tag{20}\\] From (16) and (17) one then obtains, \\[\\Delta{\\bf B}^{lm} = -\\phi_{l}(t)\\left(\\frac{a}{r}\\right)^{l+1}Y_{lm} \\tag{21}\\] \\[\\phi_{l}(t) \\equiv \\frac{\\mu_{c}l}{\\mu_{b}a}\\left(1+\\frac{l}{l+1}\\frac{\\mu_{c}}{\\mu _{b}}\\right)\\sqrt{\\frac{4D_{c}(t-t_{\\rm r})}{\\pi}},\\] and the external fields are given by \\[\\Delta{\\bf B}^{lm}=i\\mu_{b}\\sqrt{\\frac{l+1}{l}}\\phi_{l}(t)\ abla\\times\\left[ \\left(\\frac{a}{r}\\right)^{l+1}{\\bf X}_{lm}\\right] \\tag{22}\\] \\[\\Delta{\\bf A}^{lm}=i\\mu_{b}\\sqrt{\\frac{l+1}{l}}\\phi_{l}(t)\\left(\\frac{a}{r} \\right)^{l+1}{\\bf X}_{lm}, \\tag{23}\\] which each display \\(\\sqrt{t}\\) cusps, while \\[\\Delta{\\bf E}^{lm}=-\\frac{i\\mu_{b}}{c}\\sqrt{\\frac{l+1}{l}}\\frac{\\phi_{l}(t)}{ 2(t-t_{\\rm tr})}\\left(\\frac{a}{r}\\right)^{l+1}{\\bf X}_{lm}, \\tag{24}\\] has the \\(1/\\sqrt{t}\\) divergence. Here \\({\\bf X}_{lm}=-i[l(l+l)]^{-1/2}{\\bf x}\\times\ abla Y_{lm}\\) are the vector spherical harmonics [13]. The spatial decay rate of the signal increases with \\(l\\), but all harmonics have the same universal power law time-dependence. The author is indebted to E. M. Lavely for numerous discussions. The support of SERDP, through contract No. DACA 72-02-C-0029, is gratefully acknowledged. ## References * (1) See, e.g., [http://www.serdp.org/research/research.html](http://www.serdp.org/research/research.html) for a list of ongoing projects in these areas. * (2) Solid state magnetoresistive sensors (presently used, e.g., in recording heads in magnetic data storage devices) are also being developed, yielding direct millimeter-scale vector measurements of the magnetic field \\({\\bf B}(t)\\), but will not effectively compete with standard meter-scale induction measurements without greater noise reduction. See, e.g., R. J. Wold, P. B. Weichman, M. Tondra, D. Reed, E. Lange and A. Becker, \"Proof-of-concept of a standoff UXO detection system using SDT sensor arrays,\" Proc. UXO/Countertime Forum (April 2001). * (3) See, e.g., C. V. Nelson and T. B. Huynh, \"Wide bandwidth time decay responses from low metal mines and ground voids,\" Proc. SPIE AeroSense Symp. Detect. Tech. for Mines and Minelike Targets VI (April 2001). * (4) Low frequency is crucial for reasonable (\\(>10\\) m) depth sensitivity. At ground penetrating radar frequencies, where the short wavelength (\\(<10\\) cm) ensures good spatial resolution, the EM skin depth is at best a few meters, and much smaller still in wet soil. In addition, the dielectric contrast between the ground and the target (and similarly sized clutter, such as rocks) is greatly reduced, thus also reducing target classification capability. * (5) In an analogous effect, molecular diffusion produces an initial _linear_ decay of the NMR signal from fluids in porous media. The coefficient reflects the surface-to-volume ratio of the pores. See P. P. Mitra, P. N. Sen, L. M. Schwartz and P. LeDoussal, Phys. Rev. Lett. **68**, 3555 (1992). * (6) This actually need only be true inside the target. The background may be insulating (\\(\\epsilon=\\epsilon^{\\prime}\\) real), with no significant change in the results. It is required only that signals, be they diffusive or wavelike, propagate much faster in the background than in the target. * (7) In an insulating background one has instead \\(d\\sim ct/\\sqrt{\\epsilon_{b}^{\\prime}}\\), and the much less stringent condition \\(R/c\\ll\\tau_{c}\\). 
* (8) The spectrum of target modes \\(\\{\\lambda_{n}\\}\\) is actually embedded in the continuum of background decay modes. E.g., for an infinite homogeneous medium, the modes are plane waves \\({\\bf a}({\\bf x};{\\bf q})={\\bf a}_{0}e^{i{\\bf q}\\cdot{\\bf x}}\\), with \\({\\bf a}_{0}\\cdot{\\bf q}=0\\) and decay rate \\(\\lambda({\\bf q})=D_{b}{\\bf q}^{2}\\). This background response is used in geophysical surveys to estimate the ground conductivity, but must be subtracted out (e.g., using data taken far from the target) to isolate the target response. * (9) Analytical solutions are possible only for spherical targets. Recently developed numerical techniques [the general theory is developed in P. B. Weichman, \"Mean field approach to high contrast scattering\" (preprint, 2003)] for solving (3) now produce spectra for oblate and prolate spheroidal targets with a broad range of aspect ratios: see Refs. [10; 11] below. * (10) P. B. Weichman, \"Rapid computation of time-domain response of metallic scatterers for real-time discrimination,\" Proc. 2001 SAGEEP meeting (March 2001). * (11) P. B. Weichman and E. M. Lavely, \"Study of inverse problems for buried UXO discrimination based on EMI sensor data,\" Proc. SPIE AeroSense Symp. Detect. Tech. for Mines and Minelike Targets VIII (April 2003). * (12) The external field, however, will be distorted by background variations, but there is negligible feedback on the internal field. This may be formalized via the Green function integral formulation of (3) which reduces the eigenvalue equation to one for the internal field alone. * (13) See, e.g., J. D. Jackson, _Classical Electrodynamics_ (John Wiley and Sons, New York, 1975). * (14) Further examples for ellipsoidal targets, where partial analytic results may be obtained, along with more extensive calculations of finite corrections to the leading \\(1/\\sqrt{t}\\) behavior at higher order in \\(\\epsilon\\), will be presented elsewhere: P. B. Weichman, manuscript in preparation.
The time-domain response of highly conducting targets following a rapidly terminated electromagnetic pulse displays three distinct regimes: early, intermediate and late time. The intermediate and late times are characterized by a superposition of exponentially decaying eigenmodes. At early time an ever increasing number of rapidly decaying modes contribute, with the result that the scattered electric field displays a universal \\(t^{-1/2}\\) power law which emerges from the diffusive decay of a pattern of surface currents induced by the pulse. The power law amplitude reflects the surface geometry of the target, a property that may prove useful in buried target classification in geophysical remote sensing applications.
# Assessing Interaction Networks with Applications to Catastrophe Dynamics and Disaster Management

Dirk Helbing and Christian Kuhnert

Institute for Economics and Traffic, and Faculty of Mathematics and Natural Sciences, Dresden University of Technology, D-01062 Dresden, Germany

## 1 Introduction

Natural disasters [1] have occurred since earliest times, and despite the development of science and technology, they still claim many victims each year. One reason for this is the increased world population: nowadays, more people can be affected by a disaster than in former centuries. Apart from this, more people affect their environment, which challenges its stability and triggers disasters such as famines. One common feature of many disastrous events is the so-called domino or avalanche effect. It means that one critical situation triggers another, and so on, so that the situation worsens even more. One famous example is a mountain slide that fell into a lake and caused very high waves (Vajont, Italy 1963 [2]). Other examples are fires and failures of water and electricity supply caused by earthquakes or, on a larger timescale, a disease harming the economy and social life of the affected area, which leaves the people and country even poorer and the medical system less effective, causing further fatalities (e.g. AIDS in Central Africa [3] or the plague in former ages). Therefore, a great effort is necessary to prevent the emergence of disasters (which is not always possible) and to improve the management of catastrophes.

Physicists and other natural scientists have traditionally contributed a lot to understanding the laws behind catastrophes. For example, we mention the extensive work on forest fires [4] and earthquakes [5], which relate to concepts of self-organized criticality used to describe avalanche effects [6]. But there is also work on floods [7], landslides [8], and volcanos [9]. Considerable attention has also been devoted to epidemics [10]. Some of the physical disciplines involved in the study of these subjects are the theory of catastrophes and bifurcations [11], non-equilibrium phase transitions [12], self-organized criticality and scaling laws [6], percolation theory [13], the statistical physics of networks [14] and extreme events [15], stochastic processes [16] and noise-induced transitions [17], but mechanics, fluid dynamics, and other fields play an important role as well [18].

In this paper, we would like to develop a flexible semi-quantitative method allowing one

* to assess the suitability of alternative measures of emergency management, i.e. to give decision support,
* to estimate the temporal development of catastrophes, and
* to give hints when to take certain actions in an anticipatory way.

For that purpose, it is necessary to take into account all factors which are relevant during the catastrophe and all direct and indirect interactions between them. This method is in the tradition of systems theory [19]. It extends the concept of the causality diagram in Sec. 2, while a dynamical generalization is developed in Sec. 4. In the next section, we start with a static analysis of interaction networks. Section 3 is intended to illustrate its usefulness for disaster management. There, we will calculate, in a semi-fictitious example, the effects of measures taken to combat an epidemic catastrophe. Section 4 contains the extension to a dynamic description, which is connected to the discrete master equation [16].
It allows one to determine the probability and order of events as well as their most likely occurrence in time. In Secs. 5 and 7, we will develop a dynamical model of disaster management, which specifies some parameters in the master equation. It relates to models of excitable media [20] and supply networks [21, 22]. Some analytical results for this model are presented in Sec. 6, while Sec. 8 summarizes our results and closes with an outlook.

## 2 Assessment of Interaction Networks

In this section, we want to develop a simple method to reflect the approximate influence of different factors or sectors on each other. Such factors may, for example, be energy supply, public transport, or medical support. In principle, there is a long list of variables \(i\) which may play a role for the problem under consideration. If we represent the influence of factor \(j\) on factor \(i\) by \(A_{ij}\), we may summarize these (directed) influences by a matrix \(\mathbf{A}=(A_{ij})\). However, in practical applications, one faces the following problems:

1. The number of possible interactions grows quadratically with the number of variables or factors \(i\). It is, therefore, difficult to measure or even estimate all the influences \(A_{ij}\).
2. While it appears feasible to determine the _direct_ influence \(M_{ij}\) of one variable \(j\) on another one \(i\), it is hard or almost impossible to estimate the indirect influences over various nodes of the graph, which enter into \(A_{ij}\) as well. However, feedback loops may have an important effect and may neutralize or even overcompensate the direct influences.

Problem (i) can be partially resolved by clustering similar variables and selecting a representative one for each cluster. The remaining set of variables should contain the main explanatory variables. Systematic statistical methods for such a procedure are, in principle, available, but intuition may be a good guide when the quantitative data required for the clustering of variables are missing. Problem (ii) may be addressed by estimating the _indirect_ influences due to feedback loops based on the _direct_ influences \(M_{ij}\), which can be summarized by a matrix \(\mathbf{M}=(M_{ij})\). One may use a formula such as \[\mathbf{A}^{\prime}=\mathbf{A}^{\prime}_{\tau}=\frac{1}{\tau}\sum_{k=1}^{\infty}(\tau\mathbf{M})^{k}=\frac{1}{\tau}\sum_{k=1}^{\infty}\tau^{k}\mathbf{M}^{k} \tag{1}\] or, as used in the following, the closely related exponential variant \[\mathbf{A}=\mathbf{A}_{\tau}=\frac{1}{\tau}\left[\exp(\tau\mathbf{M})-\mathbf{1}\right]=\frac{1}{\tau}\sum_{k=1}^{\infty}\frac{\tau^{k}}{k!}\,\mathbf{M}^{k}, \tag{2}\]
(For an investigation of stylized relationships, it can also make sense to choose \\(M_{ij}\\in\\left\\{-1,0,1\\right\\}\\), where \\(M_{ij}=\\pm 1\\) represents a strong positive or negative influence, then.) The matrix \\({\\bf A}=(A_{ij})\\) will be called the assessment matrix and summarizes all direct influences (\\({\\bf M}\\)) and feedback effects (\\({\\bf A}-{\\bf M}\\)) among the investigated factors. It allows conclusions about * the resulting strength of desireable and undesireable interactions, when feedback effects are included, * the effect of failures of a specific sector (node), * the suitability of possible measures to reach specific goals or improvements, * the side effects of these measures on other factors. This will be illustrated in more detail by the example in Sec. 3. One open problem is the choice of the parameter \\(\\tau\\). It controls how strong the indirect effects contribute in comparison with the direct effects. A small value of \\(\\tau\\) corresponds to neglecting indirect effects, i.e. \\[\\lim_{\\tau\\to 0}{\\bf A}_{\\tau}={\\bf M}\\,, \\tag{4}\\] while increasing values of \\(\\tau\\) reflect a growing influence of indirect effects. This is often the case for catastrophes, as these are frequently related to bifurcations or phase transitions, to avalanches or percolation effects [5, 4]. By variation of \\(\\tau\\), one can study different scenarios. Note that \\(\\tau\\) may be interpreted as time coordinate: Defining \\[\\vec{X}(\\tau)=\\exp(\\tau{\\bf M})\\vec{X} \\tag{5}\\]for an arbitrary vector \\(\\vec{X}\\), we find \\(\\vec{X}(0)=\\vec{X}\\), \\[\\frac{\\vec{X}(\\tau)-\\vec{X}(0)}{\\tau}=\\frac{1}{\\tau}[\\exp(\\tau{\\bf M})-{\\bf 1} ]\\vec{X}(0)={\\bf A}_{\\tau}\\vec{X}(0)\\] and \\[\\frac{d\\vec{X}}{d\\tau}=\\lim_{\\tau\\to 0}\\frac{\\vec{X}(\\tau)-\\vec{X}(0)}{\\tau}={\\bf M }\\vec{X}(0)\\,.\\] From this point of view, \\[\\vec{X}(\\tau)=(\\tau{\\bf A}_{\\tau}+{\\bf 1})\\vec{X}(0) \\tag{6}\\] describes the state of the system at time \\(\\tau\\), and \\(M_{ij}\\) the changing rates. \\(\\vec{X}=\\vec{0}\\) is a stationary solution and corresponds to the normal (everyday) state. An initial state \\(\\vec{X}(0)\ eq\\vec{0}\\) may be interpreted as perturbation of the system by some (catastrophic) event. We should, however, note that the linear system of equations (6) is certainly a rough description of the system dynamics. It is expected to hold only for small perturbations of the system state and does not consider damping effects due to disaster management. Such aspects will be considered later on (see Secs. 4 and 7), after discussion of an example illustrating how to apply interaction matrices to cope with catastrophes. ## 3 Optimization of Interaction Networks: A Simple Example One advantage of our semi-quantitative approach to catastrophes is that it allows to estimate the impact of certain actions on the whole set of factors. Usually, during a disaster the responsibilities have only dissatisfactory information and short time to decide, so in many cases they will take into account only direct impacts on other factors. In the worst case, this may lead to the opposite than the desired result, if the feedback effects exceed the direct influence. Therefore, it would be better to know the implications on the whole system. As we have argued before, all direct and indirect effects are summarized by the matrix \\({\\bf A}\\), which is determined from the matrix \\({\\bf M}\\) of direct interactions. 
Different measures taken by the responsible authorities are reflected by different matrices \\({\\bf M}\\). As an example, we consider the spreading of a disease. For illustrative reasons, we will restrict ourselves to the discussion of five factors only:

1. the number of _infected persons_,
2. the quality of _medical care_,
3. the _public transport_,
4. the _economic situation_, and
5. the _disposal_ of waste.

These factors are not independent of each other, as illustrated by Fig. 1.

Figure 1: Interaction network for the example of a spreading disease discussed in the text.

The corresponding matrix of the assumed direct influences among the different factors is \\[\\mathbf{M}=\\begin{pmatrix}0&-2&+2&0&-1\\\\ -2&0&+1&+2&+1\\\\ -1&0&0&+2&0\\\\ -1&0&+2&0&+1\\\\ -1&0&+1&+2&0\\end{pmatrix} \\tag{7}\\] The choice of the sign of the direct influence \\(M_{ij}\\) of factor \\(j\\) on factor \\(i\\) is plausible: We assume a positive sign if factor \\(i\\) increases with an increase of factor \\(j\\), while we assume a negative sign when factor \\(i\\) decreases with the growth of factor \\(j\\). However, the determination of the absolute value of \\(M_{ij}\\) requires empirical data, expert knowledge, or experience. We have argued as follows:

* A growing number of infected persons affects all other factors in a negative way (see first column), as these persons will not continue to work. That is, there will be problems in maintaining a good economic situation, public transport, or the disposal of waste. Health care is affected twice, as not only the medical personnel may be infected, but also a higher number of patients needs to be treated, and capacities are limited. Therefore, we have chosen the value \\(-2\\) in this case, but \\(-1\\) for the other factors.
* A well operating health system (second column) can reduce the number of infected persons efficiently, so that we have chosen a value of \\(-2\\). The influence of the health system on the economic situation and other factors was assumed to be of indirect nature, by reducing the number of ill persons.
* Public transport (third column) contributes to a fast spreading of the infection assumed here. Therefore, we have selected a value of 2. Transport is also an important factor for economic prosperity (therefore the value of 2), and it is required to get medical personnel and workers in the disposal sector to work (which is reflected by a value of 1).
* The economic situation (fourth column) has a significant effect on the quality of the health system, public transport, and disposal, so that we have chosen a value of 2 in each case.
* Waste may contribute to the spreading of the disease, if it is not properly removed. Therefore, a good disposal system (fifth column) may reduce the number of infections (therefore the value of \\(-1\\)). It is also required for a functioning health system and economic production. That is why we have assumed a value of 1.

Depending on the respective situation, the concrete values of the direct influences \\(M_{ij}\\) may be somewhat different. For their specification, it can also be helpful to check the resulting values \\(A_{ij}\\) of the overall direct plus indirect influence for their plausibility, and to compare the size of second-order or third-order interactions.
For example, we see that the third-order feedback loop "number of infected persons\\(\\rightarrow\\)economic situation\\(\\rightarrow\\)quality of the health system\\(\\rightarrow\\)number of infected persons" is proportional to \\((-1)\\cdot(+2)\\cdot(-2)=4\\). The same indirect influence is found for the feedback loop "number of infected persons\\(\\rightarrow\\)economic situation\\(\\rightarrow\\)public transport\\(\\rightarrow\\)number of infected persons". Moreover, according to our assumptions, the second-order autocatalytic increase of the number of infected persons via its impact on the health system is four times as large as the one via its impact on the waste disposal. One surprising observation is that the number of infected persons is reduced via its impact on public transport. In fact, once fewer buses are operated (because the bus drivers are ill), the spreading rate of the disease is reduced. This may inspire the responsible authorities to reduce public transport or even stop it. Later on, we will discuss the effect of this possible measure. Before that, let us have a look at the resulting overall interaction matrix \\[\\mathbf{A}=(A_{ij})=\\begin{pmatrix}0.9&-2.2&1.3&-0.8&-1.6\\\\ -3.4&1.1&1.5&3.5&2.3\\\\ -1.7&0.6&0.5&2.5&0.8\\\\ -2.0&0.6&2.1&1.5&1.6\\\\ -2.0&0.6&1.5&2.9&0.9\\end{pmatrix} \\tag{8}\\] For its calculation, we have chosen the value \\(\\tau=0.4\\), which will also be used later on to assess alternative actions to fight the spreading of the disease. In order to discuss a certain scenario, we will assume that \\(X_{j}\\) reflects the perturbation of factor \\(j\\). Because of Eq. (6), the quantities \\[Y_{i}=\\sum_{j}(\\tau A_{ij}+\\delta_{ij})X_{j} \\tag{9}\\] will be used to characterize the potential response of the system in the specific scenario described by the perturbations \\(X_{j}\\) (and without the damping effects by disaster management discussed in later sections of this contribution). Here, \\(\\delta_{ij}\\) denotes the Kronecker delta, which is 1 for \\(i=j\\) and 0 otherwise. We will assume \\(X_{1}=1.0\\), as the number of infected persons is higher than normal, and \\(X_{2}=X_{3}=X_{4}=X_{5}=-0.1\\), as the other factors are reduced by the spreading of the disease: \\[(X_{1},X_{2},X_{3},X_{4},X_{5})=(1.0,-0.1,-0.1,-0.1,-0.1)\\,. \\tag{10}\\] Moreover, if we attribute a weight of \\(Z_{1}=0.5\\) to the number of infected persons, a weight \\(Z_{4}=0.3\\) to the economic situation, and weights of \\(Z_{2}=Z_{3}=0.1\\) to the quality of the medical care and public transport, while we do not care about waste in our evaluation (i.e. \\(Z_{5}=0\\)), the resulting value of \\[F=F_{\\tau}=\\left(\\sum_{i}Z_{i}Y_{i}^{2}\\right)^{1/2} \\tag{11}\\] will be used to assess the overall situation of the system. In the stationary (normal) system state, \\(F\\) would be zero. Therefore, we want to find a strategy which brings \\(F\\) close to zero. For our basic scenario, we find \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.5,-1.8,-1.0,-1.1,-1.1)\\quad\\text{and}\\quad F=1.4\\,. \\tag{12}\\] These reference values will be compared with the values for alternative scenarios which correspond to different actions taken to fight the catastrophe. For example, let us assume we have limited stocks of vaccine for immunization. Should we use these to immunize 1) the transport workers, 2) the medical staff, or 3) the disposal workers?
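This comparison is easy to automate. A minimal sketch of the baseline evaluation of Eqs. (8)-(12) (our illustration, assuming Python with NumPy/SciPy; rerunning it with the modified matrices \\({\\bf M}\\) discussed next reproduces the comparison of the three immunization options):

```python
# Baseline scenario, Eqs. (8)-(12): response Y = (tau*A + 1)X and measure F.
import numpy as np
from scipy.linalg import expm

M = np.array([[ 0, -2,  2,  0, -1],    # direct influences, Eq. (7)
              [-2,  0,  1,  2,  1],
              [-1,  0,  0,  2,  0],
              [-1,  0,  2,  0,  1],
              [-1,  0,  1,  2,  0]], dtype=float)
tau = 0.4
A = (expm(tau * M) - np.eye(5)) / tau          # reproduces Eq. (8)
X = np.array([1.0, -0.1, -0.1, -0.1, -0.1])    # perturbation, Eq. (10)
Z = np.array([0.5, 0.1, 0.1, 0.3, 0.0])        # evaluation weights
Y = (tau * A + np.eye(5)) @ X                  # Eq. (9); identical to expm(tau*M) @ X
F = np.sqrt(np.sum(Z * Y**2))                  # Eq. (11)
print(np.round(Y, 1), round(float(F), 1))      # ~ (1.5, -1.8, -1.0, -1.1, -1.1), F ~ 1.4
```

Note that \\(\\tau{\\bf A}_{\\tau}+{\\bf 1}=\\exp(\\tau{\\bf M})\\), so the response can also be obtained in one step from the matrix exponential.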
In the first case, we have the modified matrix \\[\\mathbf{M}=\\begin{pmatrix}0&-2&+2&0&-1\\\\ -2&0&+1&+2&+1\\\\ \\underline{0}&0&0&+2&0\\\\ -1&0&+2&0&+1\\\\ -1&0&+1&+2&0\\end{pmatrix}\\,, \\tag{13}\\] which implies \\[\\mathbf{A}=\\begin{pmatrix}1.2&-2.3&1.4&-0.8&-1.7\\\\ -3.2&1.1&1.5&3.5&2.2\\\\ -0.5&0.1&0.8&2.4&0.5\\\\ -1.6&0.5&2.2&1.5&1.5\\\\ -1.7&0.6&1.6&2.9&0.9\\end{pmatrix}\\,, \\tag{14}\\] \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.6,-1.7,-0.5,-1.0,-1.0)\\,,\\quad\\text{and}\\quad F=1.4\\,. \\tag{15}\\] In the second case, when we immunize the medical staff, we find \\[\\mathbf{M}=\\begin{pmatrix}0&-2&+2&0&-1\\\\ \\underline{-1}&0&+1&+2&+1\\\\ -1&0&0&+2&0\\\\ -1&0&+2&0&+1\\\\ -1&0&+1&+2&0\\end{pmatrix}\\,, \\tag{16}\\] which implies \\[\\mathbf{A}=\\begin{pmatrix}0.5&-2.1&1.3&-0.7&-1.5\\\\ -2.3&0.7&1.8&3.4&2.0\\\\ -1.7&0.6&0.5&2.5&0.8\\\\ -1.9&0.6&2.1&1.5&1.6\\\\ -1.9&0.6&1.6&2.9&0.9\\end{pmatrix}\\,, \\tag{17}\\] \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.3,-1.3,-0.9,-1.1,-1.1)\\,,\\quad\\text{and}\\quad F=1.2\\,. \\tag{18}\\] In the third case, when the disposal workers are immunized, we expect \\[\\mathbf{M}=\\begin{pmatrix}0&-2&+2&0&-1\\\\ -2&0&+1&+2&+1\\\\ -1&0&0&+2&0\\\\ -1&0&+2&0&+1\\\\ \\underline{0}&0&+1&+2&0\\end{pmatrix}\\,, \\tag{19}\\] which implies \\[{\\bf A}=\\begin{pmatrix}0.6&-2.1&1.3&-0.7&-1.6\\\\ -3.1&1.1&1.6&3.5&2.2\\\\ -1.6&0.6&0.5&2.5&0.8\\\\ -1.7&0.6&2.1&1.5&1.5\\\\ -0.8&0.2&1.9&2.8&0.6\\end{pmatrix}\\,, \\tag{20}\\] \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.4,-1.7,-0.9,-1.0,-0.7)\\,,\\quad\\mbox{and}\\quad F=1.3\\,. \\tag{21}\\] While the immunization of the public transport staff has almost no effect on the overall situation in the system, the last two measures can improve it. We see that it is more effective to immunize the medical staff than the disposal workers, and the best would be to immunize both groups. This corresponds to \\[{\\bf M}=\\begin{pmatrix}0&-2&+2&0&-1\\\\ \\underline{-1}&0&+1&+2&+1\\\\ -1&0&0&+2&0\\\\ -1&0&+2&0&+1\\\\ \\underline{0}&0&+1&+2&0\\end{pmatrix}\\,, \\tag{22}\\] and we obtain \\[{\\bf A}=\\begin{pmatrix}0.2&-2.0&1.2&-0.7&-1.5\\\\ -1.9&0.6&1.9&3.4&1.9\\\\ -1.6&0.5&0.5&2.5&0.8\\\\ -1.7&0.6&2.1&1.5&1.5\\\\ -0.8&0.2&1.9&2.8&0.6\\end{pmatrix}\\,, \\tag{23}\\] \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.2,-1.2,-0.9,-1.0,-0.6)\\,,\\quad\\mbox{and}\\quad F=1.1\\,. \\tag{24}\\] Other measures do not change the interactions in the system, but correspond to a change of the effective impact \\(\\vec{X}\\) of the catastrophe. For example, we may consider reducing public transport. With (8) and \\[(X_{1},X_{2},X_{3},X_{4},X_{5})=(1.0,-0.1,\\underline{-1.0},-0.1,-0.1)\\,, \\tag{25}\\] we find \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.0,-2.4,-2.0,-1.9,-1.7)\\quad\\mbox{and}\\quad F=1.6\\,. \\tag{26}\\] We see that the number of infections could, in fact, be reduced. However, the overall situation of the system has deteriorated, as the economic situation and all the other sectors were negatively affected, because many people could not reach their workplace. Therefore, let us consider the option to increase the number of disposal workers. With (8) and \\[(X_{1},X_{2},X_{3},X_{4},X_{5})=(1.0,-0.1,-0.1,-0.1,\\underline{0.5})\\,, \\tag{27}\\] we find \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(1.1,-1.3,-0.8,-0.8,-0.3)\\quad\\mbox{and}\\quad F=1.0\\,. \\tag{28}\\] In conclusion, increasing the hygienic standards can be surprisingly efficient. Finally, let us assume an improved waste disposal together with the immunization of both the medical staff and the disposal workers.
In that case the interactions of the relevant factors are characterized by matrix (22), whereas the starting vector is again (27). The resulting response is \\[(Y_{1},Y_{2},Y_{3},Y_{4},Y_{5})=(0.8,-0.7,-0.7,-0.6,0.1)\\quad\\mbox{and}\\quad F=0.75\\,. \\tag{29}\\] Only this combination of measures manages to actually reduce the infections compared to the initial state, i.e. \\(Y_{1}<X_{1}\\). However, we can also see that a negative impact on the economic situation and other factors cannot be avoided. In any case, we can assess which measures are reasonable to take, which impact they will have on the system, and which measures need to be combined in order to control the spreading of the disease (or other problems in different scenarios). The simple example in this section was chosen to illustrate the procedure of assessing interaction networks and potential optimization measures. In an on-going project, we are now investigating the interaction network among a large number of factors for the floods in Germany during August 2002 and other catastrophes. This involves considerably more detailed and much larger matrices \\(\\mathbf{M}\\), where nobody would be able to assess the feedback loops without a method such as the one proposed above. It also involves other aspects such as human forces fighting the catastrophe and the availability of technical or other equipment, etc., which will be modeled in the following sections.

## 4 Impact of the Interaction Network on Catastrophe Dynamics

So far, we have used the interaction network predominantly for the static assessment of the influence of different factors on each other. We will now try to extend this method step by step in a way that allows a semi-quantitative analysis of the time-dependence of catastrophes for the purpose of anticipation, which helps to prepare for the next step in catastrophe management or prevention. We are particularly interested in the domino or avalanche effects of particular events such as the failure of a certain factor or sector in the interaction network. We will assume that this failure spreads along and in the order of the direct connections in the interaction network (causality graph). In terms of the example in Sec. 3, a failure of medical care would first affect the number of infected persons, and in a second step the economic situation, public transport, and the disposal of waste. For a description of the catastrophe dynamics, let us assume that \\(P_{i}(\\tau)\\) denotes the impact on factor \\(i\\) at time \\(\\tau\\) and \\(W_{ji}\\) the rate at which this impact spreads to factor \\(j\\), while \\(D_{i}\\) is a damping rate describing the mitigation of the catastrophic impact on factor \\(i\\) by disaster management. In this case, it is reasonable to assume the dynamics \\[\\frac{d\\vec{P}}{d\\tau}=(\\mathbf{W}-\\mathbf{D})\\vec{P}(\\tau)=\\mathbf{L}\\vec{P}(\\tau) \\tag{30}\\] with \\(\\mathbf{D}=(\\delta_{ij}D_{i})\\), \\(\\mathbf{L}=(L_{ij})=(W_{ij}-\\delta_{ij}D_{i})\\), and \\(\\vec{P}(\\tau)=(P_{i}(\\tau))\\). The symbol \\(\\delta_{ij}\\) represents the Kronecker delta, i.e. it is 1 for \\(i=j\\) and otherwise 0. When no better information is available, we may assume that the spreading rate \\(W_{ij}\\) is proportional to the strength \\(|M_{ij}|\\) of the direct influence of factor \\(j\\) on factor \\(i\\). With a constant proportionality factor \\(c\\) this means \\[W_{ij}\\approx c|M_{ij}|\\,. \\tag{31}\\]
The formal solution of equation (30) for a time-independent matrix \\(\\mathbf{L}\\) is given by \\[\\vec{P}(\\tau)=\\exp(\\mathbf{L}\\tau)\\vec{P}(0)=\\sum_{k=0}^{\\infty}\\frac{\\tau^{k}}{k!}\\mathbf{L}^{k}\\vec{P}(0)=\\mathbf{B}(\\tau)\\vec{P}(0)\\,. \\tag{32}\\] That is, \\(\\mathbf{B}(\\tau)\\) describes the spreading of an event in the causality network (interaction network) in the course of time \\(\\tau\\), while \\(\\vec{P}(0)\\) reflects the initial impact of a catastrophic event. A random series of catastrophic events can be described by adding a random variable \\(\\vec{\\xi}(\\tau)\\) to the right-hand side of Eq. (30). When we assume \\[D_{i}=\\sum_{j}W_{ji}\\,, \\tag{33}\\] equation (30) is related to the Liouville representation of the discrete master equation. In this case, we can apply all the solution methods developed for it. This includes the so-called path integral solution [23], which allows one to calculate the occurrence probability of specific spreading paths. This has some interesting implications. For example, the danger that the impact on sector \\(i_{0}\\) affects the sectors \\(i_{1},i_{2},\\ldots,i_{n}\\) in the indicated order is quantified by \\[P(i_{0}\\to i_{1}\\to\\cdots\\to i_{n})=\\frac{|P_{i_{0}}(0)|}{D_{i_{n}}}\\prod_{l=0}^{n-1}\\frac{W_{i_{l+1},i_{l}}}{D_{i_{l}}}\\approx c^{n}\\frac{|P_{i_{0}}(0)|}{D_{i_{n}}}\\prod_{l=0}^{n-1}\\frac{|M_{i_{l+1},i_{l}}|}{D_{i_{l}}}\\,. \\tag{34}\\] Moreover, the average time at which this series of events has occurred can be calculated as \\[T(i_{0}\\to i_{1}\\to\\cdots\\to i_{n})=\\sum_{l=0}^{n}\\frac{1}{D_{i_{l}}}\\,, \\tag{35}\\] and the variance of this time is determined by \\[\\Theta(i_{0}\\to i_{1}\\to\\cdots\\to i_{n})=\\sum_{l=0}^{n}\\frac{1}{(D_{i_{l}})^{2}}\\,. \\tag{36}\\] That is, Eq. (30) not only allows one to assess the likelihood of certain series of events rather accurately, but also their approximate occurrence times. In other words, we have a detailed picture of the potential catastrophic scenarios and of their time evolution, which allows for a specific preparation and disaster management. In the following, we do not want to restrict ourselves to the case (33). If \\[D_{i}<\\sum_{j}W_{ji} \\tag{37}\\] for all \\(i\\), the damping is weak and the solutions \\(P_{i}(\\tau)\\) are expected to grow more or less exponentially in the course of time, which describes a scenario where control is lost and the catastrophe spreads all over the system. In many cases, we will have \\[D_{i}>\\sum_{j}W_{ji} \\tag{38}\\] for all \\(i\\), i.e. the impact of the catastrophe on the system decays in the course of time, and \\(\\lim_{\\tau\\to\\infty}P_{i}(\\tau)=0\\). This determines how strong the damping effects, i.e. the means counteracting the catastrophe, have to be chosen. In terms of Sec. 7, this concerns the specification of the parameters \\(V_{ik}\\). Finally, it may also happen that \\(D_{i}>\\sum_{j}W_{ji}\\) for some factors \\(i\\), but \\(D_{i}<\\sum_{j}W_{ji}\\) for others. In such situations, everything depends on the initial impact \\(\\vec{P}(0)\\) and on the matrix \\({\\bf B}(\\tau)\\). However, in all these cases, Eqs. (34) to (36) remain valid.
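Equations (34)-(36) are simple to evaluate for any hypothesized sequence of affected sectors. A small sketch (our illustration, assuming Python/NumPy, the rates of Eq. (31) with \\(c=1\\), and the damping choice (33)):

```python
# Sketch of Eqs. (34)-(36): probability, mean time, and time variance of a
# spreading path i0 -> i1 -> ... -> in, given spreading rates W and dampings D.
import numpy as np

def path_measures(W, D, path, P0):
    prob = abs(P0) / D[path[-1]]                 # prefactor of Eq. (34)
    for a, b in zip(path[:-1], path[1:]):
        prob *= W[b, a] / D[a]                   # step factors of Eq. (34)
    T = sum(1.0 / D[i] for i in path)            # Eq. (35)
    Var = sum(1.0 / D[i] ** 2 for i in path)     # Eq. (36)
    return prob, T, Var

# Example with W = |M| from Eq. (7) (i.e. c = 1) and D_i = sum_j W_ji, Eq. (33):
M = np.array([[0, -2, 2, 0, -1], [-2, 0, 1, 2, 1], [-1, 0, 0, 2, 0],
              [-1, 0, 2, 0, 1], [-1, 0, 1, 2, 0]], dtype=float)
W = np.abs(M)
D = W.sum(axis=0)
# infections -> economy -> health system -> infections:
print(path_measures(W, D, [0, 3, 1, 0], P0=1.0))
```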
## 5 An Excitable Media Model of Disaster Management

The damping effects \\(D_{i}\\) are, to a large extent, related to the forces counteracting the catastrophe. Therefore, we will now develop a dynamical model for these, while our previous considerations assumed some more or less constant value of \\(D_{i}\\). Let us denote by \\(N_{k}\\) the quantity of human forces (e.g. police, fire fighters, or military) ready for action, or the quantity of materials (e.g. technical or medical equipment) ready for use to fight the catastrophe. The index \\(k\\) distinguishes different kinds of forces or required materials. We will assume the following equation: \\[\\frac{dN_{k}}{d\\tau}=\\frac{R_{k}(\\tau)}{T_{k}^{R}}\\pm\\lambda_{k}N_{k}^{\\pm}\\mathrm{e}^{-\\lambda_{k}\\tau}-\\sum_{i}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)\\,. \\tag{39}\\] The first term on the right-hand side describes the quantity \\(R_{k}\\) of (human) forces and materials which were exhausted or not usable, but become available again after an average time period of \\(T_{k}^{R}\\). The second term delineates reserve forces of quantity \\(N_{k}^{\\pm}\\), which are activated from the "standby mode" at a rate \\(\\lambda_{k}\\) after the occurrence of a catastrophe (plus sign), while they are removed after the recovery from the disaster (minus sign). In most cases, \\(N_{k}^{-}\\leq N_{k}^{+}\\), due to possible fatalities. The third term describes the activation of the forces \\(k\\) to fight the problems with factor or sector \\(i\\). For simplicity, it is here assumed to be proportional to the strength \\(|P_{i}|\\) of the catastrophic impact on factor \\(i\\), with proportionality factors \\(V_{ik}\\) which reflect the priorities in disaster management and the speed with which the forces or materials \\(k\\) become available for \\(i\\). The exponent \\(l\\) allows one to distinguish different cases: When the impact of the catastrophe on factor \\(i\\) is known, we may assume \\(l=0\\). However, when the active forces are assumed to order more forces, an exponent \\(l>0\\) can make sense as well. The quantities \\(A_{k}\\) of forces in action or materials in use change in time according to the equation \\[\\frac{dA_{k}}{d\\tau}=\\sum_{i}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)-\\frac{A_{k}(\\tau)}{T_{k}^{A}}-\\sum_{i}|P_{i}|\\nu_{ik}A_{k}(\\tau)\\,. \\tag{40}\\] The first term on the right-hand side is due to the available forces \\(k\\), which are activated for fighting the catastrophe, while the second term describes a reduction of the active forces, as they become exhausted or damaged and require rest or repair after an average time period of \\(T_{k}^{A}\\). The third term describes unrecoverable losses such as fatalities or unrecoverable damage of materials, which are assumed to occur with a rate \\(|P_{i}|\\nu_{ik}\\) proportional to the catastrophic impact \\(|P_{i}|\\). The quantity of exhausted or damaged forces is described by the following differential equation: \\[\\frac{dR_{k}}{d\\tau}=\\frac{A_{k}(\\tau)}{T_{k}^{A}}-\\frac{R_{k}(\\tau)}{T_{k}^{R}}\\,. \\tag{41}\\] Herein, \\(1/T_{k}^{A}\\) is the rate at which the forces \\(k\\) become exhausted or damaged, while \\(1/T_{k}^{R}\\) is the recovery or repair rate. The above model already contains many effects which are typically relevant in practical situations. We should note that it is related to models of excitable media developed to describe chemical waves, the propagation of electrical pulses in heart tissue, La Ola waves in human crowds in stadia [20], or the spreading of forest fires [4]. These models typically contain three different states: an excitable one, an active one, and a refractory one.
In our model of disaster management, refractory states are described by the variables \\(R_{k}\\), active states by the variables \\(A_{k}\\), and excitable states by the variables \\(N_{k}\\). Due to this analogy, we expect to find certain pattern formation phenomena for our model of disaster management. In a forthcoming paper, this aspect shall be investigated in more detail.

## 6 Some Analytical Results

In the following, we will try to get an idea of the possible behavior of the excitable media model of disaster management suggested in Sec. 5. The stationary state of this model is, for \\(\\nu_{ik}=0\\), given by \\[\\frac{R_{k}}{T_{k}^{R}}=\\frac{A_{k}}{T_{k}^{A}}=\\sum_{i}|P_{i}|V_{ik}N_{k}{A_{k}}^{l}=\\mbox{const}\\,. \\tag{42}\\] In order to investigate the sensitivity with respect to small perturbations, we will carry out a linear stability analysis of the simplified model with one sector \\(i\\), \\(|P_{i}|\\approx\\mbox{const.}\\), \\(\\nu_{ik}=0\\), and one kind \\(k\\) of forces. Dropping the subscripts and defining \\(P=|P_{i}|\\), the resulting coupled set of differential equations is: \\[\\frac{dN}{d\\tau} = \\frac{R}{T^{R}}-PVNA^{l}\\,, \\tag{43}\\] \\[\\frac{dA}{d\\tau} = PVNA^{l}-\\frac{A}{T^{A}}\\,, \\tag{44}\\] \\[\\frac{dR}{d\\tau} = \\frac{A}{T^{A}}-\\frac{R}{T^{R}}\\,. \\tag{45}\\] Its stationary solution is given by \\(N(\\tau)=N_{0}\\), \\(R(\\tau)=R_{0}\\), and \\(A(\\tau)=A_{0}\\) with \\[\\frac{R_{0}}{T^{R}}=\\frac{A_{0}}{T^{A}}=PVN_{0}{A_{0}}^{l}\\,. \\tag{46}\\] It is stable with respect to disturbances if the real parts of all eigenvalues \\(\\lambda\\) are non-positive. These eigenvalues can be calculated in the usual way. Assuming \\[N(\\tau)=N_{0}+\\delta N\\;\\mbox{e}^{\\lambda\\tau}\\,,\\quad A(\\tau)=A_{0}+\\delta A\\;\\mbox{e}^{\\lambda\\tau}\\,,\\quad\\mbox{and}\\quad R(\\tau)=R_{0}+\\delta R\\;\\mbox{e}^{\\lambda\\tau}\\,, \\tag{47}\\] we find the following eigenvalue problem for the amplitudes \\(\\delta N\\), \\(\\delta A\\), and \\(\\delta R\\) of the deviations from the stationary values \\(N_{0}\\), \\(A_{0}\\), and \\(R_{0}\\): \\[\\lambda\\left(\\begin{array}{c}\\delta N\\\\ \\delta A\\\\ \\delta R\\end{array}\\right)=\\left(\\begin{array}{ccc}-PVA_{0}{}^{l}&-PVN_{0}lA_{0}{}^{l-1}&1/T^{R}\\\\ PVA_{0}{}^{l}&PVN_{0}lA_{0}{}^{l-1}-1/T^{A}&0\\\\ 0&1/T^{A}&-1/T^{R}\\end{array}\\right)\\left(\\begin{array}{c}\\delta N\\\\ \\delta A\\\\ \\delta R\\end{array}\\right)\\,. \\tag{48}\\] The eigenvalues \\(\\lambda\\) are the solutions of the characteristic equation \\[(-PVA_{0}{}^{l}-\\lambda)\\left(PVN_{0}lA_{0}{}^{l-1}-\\frac{1}{T^{A}}-\\lambda\\right)\\left(-\\frac{1}{T^{R}}-\\lambda\\right)+\\frac{PVA_{0}{}^{l}}{T^{R}T^{A}}+P^{2}V^{2}N_{0}lA_{0}{}^{2l-1}\\left(-\\frac{1}{T^{R}}-\\lambda\\right)=0\\,. \\tag{49}\\] The three solutions are \\[\\lambda_{1/2} = \\frac{PVA_{0}{}^{l-1}(lN_{0}-A_{0})}{2}-\\frac{1}{2T^{A}}-\\frac{1}{2T^{R}} \\pm \\sqrt{\\frac{1}{4}\\left[PVA_{0}{}^{l-1}(lN_{0}-A_{0})+\\frac{1}{T^{R}}-\\frac{1}{T^{A}}\\right]^{2}-\\frac{PVA_{0}{}^{l}}{T^{A}}} \\tag{50}\\] and \\(\\lambda_{3}=0\\). Taking into account Eq. (46), which implies \\(PVA_{0}{}^{l-1}=1/(N_{0}T^{A})\\), this becomes \\[\\lambda_{1/2}=\\frac{1}{2T^{A}}\\left(l-1-\\frac{A_{0}}{N_{0}}\\right)-\\frac{1}{2T^{R}}\\pm\\sqrt{\\left[\\frac{1}{2T^{A}}\\left(l-1-\\frac{A_{0}}{N_{0}}\\right)+\\frac{1}{2T^{R}}\\right]^{2}-\\frac{A_{0}}{N_{0}(T^{A})^{2}}}\\,. \\tag{51}\\]
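This closed-form result can be cross-checked numerically against the Jacobian in Eq. (48). A minimal sketch (our illustration; the parameter values are arbitrary but satisfy the stationarity condition (46)):

```python
# Numerical check of Eq. (51) against the eigenvalues of the Jacobian (48).
import numpy as np

P, V, l, TA, TR = 1.0, 1.0, 1.0, 2.0, 5.0
A0 = 1.0
N0 = 1.0 / (P * V * A0**(l - 1) * TA)    # stationarity (46): P*V*A0**(l-1) = 1/(N0*TA)

J = np.array([[-P*V*A0**l, -P*V*N0*l*A0**(l-1),         1/TR],
              [ P*V*A0**l,  P*V*N0*l*A0**(l-1) - 1/TA,  0.0 ],
              [ 0.0,        1/TA,                      -1/TR]])

a = (l - 1 - A0/N0) / (2*TA) - 1/(2*TR)
b = ((l - 1 - A0/N0) / (2*TA) + 1/(2*TR))**2 - A0 / (N0 * TA**2)
print(a + np.emath.sqrt(b), a - np.emath.sqrt(b))  # Eq. (51): here -0.6 +/- 0.58i
print(np.linalg.eigvals(J))                        # the same pair plus lambda_3 = 0
```

For these values the eigenvalues are complex with negative real parts, i.e. the damped-oscillation regime of condition (53) below.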
A detailed analysis of this expression shows the following: The system is unstable with respect to perturbations when the real part of one of the above solutions becomes positive, which is the case for \\[l-1-\\frac{A_{0}}{N_{0}}>\\min\\left(\\frac{A_{0}}{N_{0}}\\frac{T^{R}}{T^{A}}\\;,\\;\\frac{T^{A}}{T^{R}}\\right)\\,. \\tag{52}\\] Otherwise (apart from the case of marginal stability resulting for the equality sign), perturbations are damped, but one can distinguish two subcases: For \\[-\\frac{T^{A}}{T^{R}}-2\\sqrt{\\frac{A_{0}}{N_{0}}}<l-1-\\frac{A_{0}}{N_{0}}<\\min\\left(\\frac{A_{0}}{N_{0}}\\frac{T^{R}}{T^{A}}\\;,\\;\\frac{T^{A}}{T^{R}}\\;,\\;-\\frac{T^{A}}{T^{R}}+2\\sqrt{\\frac{A_{0}}{N_{0}}}\\right)\\,, \\tag{53}\\] the resulting solution is complex, corresponding to damped oscillations, while the system behaves overdamped in the remaining case, where perturbations fade away without any oscillations. For disaster management, the linearly unstable case and the case of damped oscillations are both unfavourable. Therefore, \\(l\\) should be small enough. Otherwise, if active forces recruit other forces, the resulting "autocatalytic effect" may cause instabilities or overreactions in the supply with forces and materials. This effect is most likely for disasters which nobody was prepared for, where the recruiting mechanism plays the most significant role. It may explain the suboptimal distribution of forces observed in these situations [24].

## 7 Connection with Supply Networks and Production Systems

Finally, we have to specify the influence of disaster management activities on the damping \\(D_{i}\\) of the impact \\(P_{i}\\), which a catastrophic event has on sector \\(i\\). Let us assume that we have \\(K\\) different kinds of forces, materials or technical equipment. With \\(k\\in K\\), we will indicate that the forces or materials \\(k\\) can substitute for each other, i.e. \\(K\\) summarizes equivalent forces or materials. On the other hand, certain actions require the simultaneous presence of different complementary kinds of forces and materials. Let us assume that the quantities simultaneously required to reduce the problems with factor \\(i\\) are represented by the coefficients \\(c_{iK}\\). The units of \\(c_{iK}\\) shall be chosen in a way that the following equation holds: \\[D_{i}(\\tau)=(1-L_{i})\\min_{K}\\left\\{\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau)A_{k}{}^{l}(\\tau)}{c_{iK}}\\right\\}\\,, \\tag{54}\\] where \\(V_{ik}N_{k}(\\tau)A_{k}{}^{l}(\\tau)\\) is the rate of activating forces \\(k\\) to mitigate the situation of factor \\(i\\). This equation reflects that, if only one of the required forces or materials is missing, no successful action can be taken. Moreover, there may be a loss \\(L_{i}\\) of efficiency, e.g. due to queueing or limited capacities (\\(0\\leq L_{i}\\leq 1\\)). The formula is analogous to that for production systems, where a product cannot be finished as long as some required part or worker is missing, and where finite storage capacities may cause losses [21]. Therefore, this formula delineates the inefficiencies in disaster management which occur when forces and materials are distributed in the wrong way. It has sometimes been reported that too many forces were located at some places, while they were missing at others [24]. Here, models designed to optimize supply networks could help to improve the efficiency of disaster management [22].
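A sketch of how Eq. (54) can be evaluated in practice (our illustration; the grouping of substitutable forces into complementary groups \\(K\\) and all concrete numbers are invented). The generalized form (55)-(57) discussed next replaces the minimum by the interpolating function \\(G_{q}\\):

```python
# Sketch of Eq. (54): damping of sector i is limited by the scarcest of the
# complementary resource groups K; forces within a group are substitutable.
def damping_rate(P_i, V_i, N, A, l, groups, c_i, L_i=0.0):
    rates = []
    for K, c_iK in zip(groups, c_i):
        activation = sum(abs(P_i) * V_i[k] * N[k] * A[k]**l for k in K)
        rates.append(activation / c_iK)
    return (1.0 - L_i) * min(rates)   # one missing requirement blocks the action

# Two complementary groups: {fire fighters, soldiers} (substitutable) and {pumps}.
groups = [(0, 1), (2,)]
print(damping_rate(P_i=1.0, V_i=[0.3, 0.2, 0.5], N=[10, 20, 2],
                   A=[1, 1, 1], l=0, groups=groups, c_i=[1.0, 1.0]))   # -> 1.0
```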
Note that, for disaster management, a slight generalization of formula (54) is in order, as improvisation may cope with a lack of certain materials or forces. It is reasonable to assume a generalized function \\(G_{q}\\) with \\[D_{i}(\\tau)=(1-L_{i})G_{q}\\left(\\left\\{\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\right\\}\\right) \\tag{55}\\] and \\[\\min_{K}\\left\\{\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\right\\}\\leq G_{q}\\left(\\left\\{\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\right\\}\\right)\\leq\\sum_{K}\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\,. \\tag{56}\\] Herein, the minimum reflects the worst case, while the sum over \\(K\\) describes the best case (if we neglect non-linearities, which may sometimes arise due to synergy effects). The specification \\[G_{q}\\left(\\left\\{\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\right\\}\\right)=\\left[\\sum_{K}\\left(\\frac{\\sum_{k\\in K}|P_{i}|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)}{c_{iK}}\\right)^{q}\\right]^{1/q} \\tag{57}\\] describes both extreme cases. The sum over \\(K\\) corresponds to \\(q=1\\), while the minimum results for \\(q\\to-\\infty\\). Hence, a variation of the parameter \\(q\\) allows one to investigate different possible scenarios lying between the best case and the worst case. In order to model synergy effects between different forces, one would have to add nonlinear terms, e.g. bilinear ones. However, this would introduce a large number of additional parameters, which are even harder to estimate than the first-order effects included in our model. Note that we already have non-linearities in our model, namely the products \\(|P_{i}(\\tau)|N_{k}(\\tau){A_{k}}^{l}(\\tau)\\) and the function \\(G_{q}\\). The framework of supply networks also allows one to move from the semi-quantitative description of disaster management in Secs. 2 to 4 to a fully quantitative one, if the required data are available (while we work with assumption (31) otherwise): One simple case of the supply network model proposed in Refs. [21, 22, 25, 26] corresponds to the dynamic input-output model \\[\\frac{dN_{i}}{d\\tau}=\\sum_{j}(\\delta_{ij}-c_{ij})Q_{j}(\\tau) \\tag{58}\\] with \\[\\frac{dQ_{j}}{d\\tau}=\\frac{V_{j}(N_{j})-Q_{j}(\\tau)}{T_{j}}\\,, \\tag{59}\\] where \\(N_{i}\\) denotes the inventory (stock level) of factor or product \\(i\\), \\(V_{j}(N_{j})\\) the desired and \\(Q_{j}\\geq 0\\) the actual throughput of sector \\(j\\), \\(T_{j}\\) the adaptation time, and \\(c_{ij}\\geq 0\\) the quantity of factor \\(i\\) needed per throughput cycle. (For details see Ref. [22].) One could also say, \\(c_{ij}\\) are the entries of the input-output matrix measured in economics, and \\(c_{ij}Q_{j}\\) is the flow of the quantity generated by factor \\(i\\) to factor \\(j\\). In the limit of short adaptation times \\(T_{j}\\approx 0\\), the above equations reduce to \\[\\frac{dN_{i}}{d\\tau}=\\sum_{j}(\\delta_{ij}-c_{ij})V_{j}(N_{j})\\,. \\tag{60}\\] Let us assume that the stationary state of this supply system is given by \\(N_{i}(\\tau)=N_{i}^{0}\\). Moreover, let us denote the deviations from the stationary state by \\(\\delta N_{i}(\\tau)=N_{i}(\\tau)-N_{i}^{0}\\). With \\[V_{j}(N_{j})\\approx V_{j}(N_{j}^{0})+\\frac{dV_{j}(N_{j}^{0})}{dN_{j}}\\,\\delta N_{j}=A_{j}-B_{j}\\,\\delta N_{j}\\,, \\tag{61}\\] the linearized version of Eq. (60) reads
\\[\\frac{d\\,\\delta N_{i}}{d\\tau}=\\sum_{j}(W_{ij}-B_{j}\\delta_{ij})\\,\\delta N_{j}(\\tau)\\,, \\tag{62}\\] where \\(A_{j}=V_{j}(N_{j}^{0})\\), \\(B_{j}=-dV_{j}(N_{j}^{0})/dN_{j}>0\\), and \\(W_{ij}=c_{ij}B_{j}\\). Here, we have used that, for the stationary solution \\(N_{j}^{0}\\), \\[\\sum_{j}(\\delta_{ij}-c_{ij})(A_{j}-B_{j}N_{j}^{0})=0\\,. \\tag{63}\\] In Eq. (62), we have \\(B_{i}=\\sum_{j}W_{ji}\\) because of \\(\\sum_{j}c_{ji}=1\\). Taking into account the additional contribution (55), we finally obtain the set of linear equations \\[\\frac{d\\vec{P}}{d\\tau}=({\\bf W}-{\\bf D})\\vec{P}(\\tau)={\\bf L}\\vec{P}(\\tau) \\tag{64}\\] with \\({\\bf W}=(W_{ij})\\), \\({\\bf D}=(\\delta_{ij}(B_{i}+D_{i}))\\), \\({\\bf L}={\\bf W}-{\\bf D}\\), and \\(\\vec{P}(\\tau)=(P_{i}(\\tau))=(\\delta N_{i}(\\tau))\\). Although \\(P_{j}\\) can become negative due to some catastrophic impact, Eqs. (35) and (36) for the average occurrence times and their variance still remain meaningful when \\(D_{i}\\) is replaced by \\((B_{i}+D_{i})\\). Hence, they can be used to estimate the time of impact on other factors or sectors in the supply network. One particularly important aspect of supply networks is their sensitivity or robustness with respect to perturbations. It is, for example, known that supply chains may suffer from the so-called bullwhip effect, i.e. small temporal variations in the demand may cause large variations in the supply. This instability leads both to undesirable delays in delivery at some places and large stock levels at others [21, 22, 27]. In disaster management, this effect can have serious consequences. However, anticipation is known to efficiently stabilize the dynamics of supply networks [25, 26]. As the formulas from Sec. 4 can be used to estimate the approximate time at which certain factors are likely to be affected, they can help to optimize the supply chain management, in particular to stabilize the supply of forces and materials in time. It is certainly reasonable to have forces available in time to fight the spreading of the disaster to other sectors, rather than sending them all to the places which are already devastated. The philosophy is to reach an anticipative disaster management rather than a merely responsive one. To model anticipation, the term \\(\\sum_{i}|P_{i}(\\tau)|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)\\) has to be replaced by \\[\\sum_{i}|P_{i}(\\tau+\\Delta\\tau)|V_{ik}N_{k}(\\tau){A_{k}}^{l}(\\tau)\\,, \\tag{65}\\] where \\(\\Delta\\tau\\) denotes the anticipation time horizon. Even more interesting is the robustness of supply networks with respect to structural changes, e.g. when some supplier fails to work or to deliver. This may be investigated by changing the coefficients \\(V_{ik}\\) and \\(c_{iK}\\), which characterize the supply network. The influence of the topology of the supply network on its robustness, reliability, and dynamics is presently under investigation [26]. It has, for example, been noticed that supply ladders are more robust than linear supply chains or supply hierarchies (see Fig. 2). It is not surprising that the redundancy of supply ladders, i.e. the availability of alternative delivery channels, stabilizes the system compared to a linear supply chain. We note, however, that hierarchical systems are very common in disaster management, and better alternatives are expected to be found in a research project that we presently pursue.
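The bullwhip-type behavior mentioned above can be illustrated by integrating Eqs. (58) and (59) directly. The following sketch (our illustration with invented parameters; it uses a closed ring of sectors so that \\(\\sum_{j}c_{ji}=1\\) holds) shows that sluggish adaptation makes the stock oscillations increasingly persistent, while \\(T_{j}\\to 0\\) approaches the strongly damped limit (60):

```python
# Sketch of the input-output dynamics, Eqs. (58)-(59), on a ring of sectors.
import numpy as np

n, T, dt, steps = 4, 4.0, 0.01, 4000
c = np.roll(np.eye(n), 1, axis=1)   # c[i, i+1 mod n] = 1: sector i+1 consumes product i
Aj, Bj, N0 = 1.0, 0.5, 10.0         # linearized desired throughput, cf. Eq. (61)

N = np.full(n, N0); N[0] -= 2.0     # perturb one stock level
Q = np.full(n, Aj)
for _ in range(steps):
    V = np.maximum(0.0, Aj - Bj * (N - N0))   # desired throughput V_j(N_j)
    N = N + dt * (np.eye(n) - c) @ Q          # Eq. (58)
    Q = Q + dt * (V - Q) / T                  # Eq. (59)
# With T = 4 the stock levels over- and undershoot many times before settling
# (bullwhip-like); smaller T damps the oscillations out much more quickly.
```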
Some insights regarding the robustness of networks have already been gained for the world wide web and other networks [14]. However, it is questionable whether the results for small-world, scale-free or random networks can directly be transferred to disaster management. Further research in this direction would be very helpful.

## 8 Summary and Outlook

In the past, physics has made significant contributions to the understanding of catastrophes. This concerns, for example, the statistics of extreme events and avalanche effects. In this paper, we have tried to indicate how physics could contribute to disaster management [28]. We have sketched a rather general, semi-quantitative approach for the assessment of interaction networks which can serve for decision support, as it allows one to compare potential optimization measures including their side effects. The method takes into account feedback loops and can be generalized to a dynamical model, which is suitable to estimate likely sequences of events and the times at which they are expected to materialize. We have also described the dynamics of disaster management in a way similar to excitable media, distinguishing an excitable state ("ready to go"), an active state, and a refractory (exhausted) state requiring recovery. As successful actions need the simultaneous presence and/or action of several specialized forces and particular equipment, there is also a direct connection with the management of supply networks. We have pointed out that the management of disasters and supply chains can be considerably improved by anticipation, based on the formulas in this paper. Moreover, the robustness [29] crucially depends on the structure of the supply network. In an on-going study, we investigate the optimal network structure to achieve robustness with respect to dynamical and structural perturbations [26]. The statistical physics of networks and graphs [14] is expected to make significant contributions to this. Some relevant aspects for the optimization of organizations and work groups have already been studied [30]. Our proposed approach connects to several methods and fields from statistical physics, such as the master equation [16, 23], excitable media [20], and the dynamics of transport (supply) processes [22]. It is also in the tradition of system dynamics [19], which has, with some success, been used to anticipate future problems of society [31]. In such kinds of studies, it is reasonable to carry out a sensitivity analysis [32] and to investigate the impact of random effects [16], which do, of course, play a significant role for the dynamics of catastrophes.

Figure 2: Illustration of different supply networks: (a) linear supply chain, (b) “supply ladder”, and (c) hierarchical supply tree. The supply ladder is particularly robust because of its redundant links and nodes.

Apart from stochastic methods, one may in the future also apply elements of fuzzy logic [33] in order to describe the vague knowledge and soft facts on which disaster management is often based. Insufficient, inconsistent, and uncertain information is one of the typical complications of disaster management, which makes it difficult to assess alternatives and to take the best decision, in particular under often very tense time constraints. In the future, information theory [34] is expected to make some valuable contributions to the design of decision support systems which can integrate inconsistent information and handle incomplete information [35].
In summary, statistical physics offers various promising concepts to develop and improve methods of disaster management. We think that the theory of self-organization [36] is particularly promising for this, having in mind principles such as synchronization [37], distributed control [38], and optimal self-organization [39]. It is expected that an application of these principles would lead to a more flexible, efficient, and robust disaster management compared to the present centralized or hierarchical concepts.

## Acknowledgments

This work has been inspired by interviews with several central persons involved in the management of the disastrous floods in Germany during August 2002. We want to express our particular thanks to the Mayor of Dresden, Ingolf Rossberg, to the director of the South-East Branch of the German Railway Net (DB Netz), Ralf Rothe, to the director of the Dam Administration of the State of Saxony, Hans-Jurgen Glasebach, to the director of the Traffic Alliance Oberelbe (VVO), Knut Ringat, to the managing director of the Dresdner Transport Services (DVB), Frank Muller-Eberstein, to the Chief of the Fire Fighter Brigade of Pirna, Peter Kammel, and many others.

## References

* [1] S. J. Guastello, Chaos, Catastrophe, and Human Affairs: Applications of Nonlinear Dynamics to Work, Organizations, and Social Evolution, Erlbaum, Mahwah, 1995; I. Asimov, A Choice of Catastrophes: The Disasters That Threaten Our World, Simon & Schuster, New York, 1979.
* [2] M. Paolini and G. Vacis, The Story of Vajont, Bordighera, Boca Raton, FL, 2000.
* [3] P. Piot, M. Bartos, P. D. Ghys, N. Walker, and B. Schwartlander, Nature 410 (2001) 968-973.
* [4] M. A. Cochrane, Nature 421 (2003) 913-919; B. Drossel and F. Schwabl, Phys. Rev. Lett. 69 (1992) 1629-1632; K. Christensen, H. Flyvbjerg, and Z. Olami, Phys. Rev. Lett. 71 (1993) 2737-2740; V. Loreto, L. Pietronero, A. Vespignani, and S. Zapperi, Phys. Rev. Lett. 75 (1995) 465-468; B. Drossel, Phys. Rev. Lett. 76 (1996) 936-939.
* [5] D. Sornette and A. Helmstetter, Phys. Rev. Lett. 89 (2002) 158501; P. Bak, K. Christensen, L. Danon, and T. Scanlon, Phys. Rev. Lett. 88 (2002) 178501; S. Lise and M. Paczuski, Phys. Rev. Lett. 88 (2002) 228301; S. Hergarten and H. J. Neugebauer, Phys. Rev. Lett. 88 (2002) 238501.
* [6] P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. Lett. 59 (1987) 381-384; P. Bak, C. Tang, and K. Wiesenfeld, Phys. Rev. A 38 (1988) 364-374.
* [7] J. H. Christensen and O. B. Christensen, Nature 421 (2003) 805-806; T. N. Palmer and J. Raisanen, Nature 415 (2002) 512-514; O. Peters, C. Hertlein, and K. Christensen, Phys. Rev. Lett. 88 (2002) 018701; A. D. Angelopoulos, V. N. Paunov, V. N. Burganos, and A. C. Payatakes, Phys. Rev. E 57 (1998) 3237-3245; M. Blunt and P. King, Phys. Rev. A 42 (1990) 4780-4787; M. J. King and H. Scher, Phys. Rev. A 41 (1990) 874-884; M. Cieplak and M. O. Robbins, Phys. Rev. Lett. 60 (1988) 2042-2045.
* [8] B. Tadic, Phys. Rev. E 57 (1998) 4375-4381; A. K. Turner and R. L. Schuster (eds.), Landslides: Investigation and Mitigation, National Research Council, Transportation Research Board, Special Report 247, 1996; R. Casale et al. (eds.), Flood and Land Slides: Integrated Risk Assessment, Springer, Berlin, 1999.
* [9] T. Matsuoka, T. Yoshioka, J. Oda, H. Tanaka, Y. Kuwagata, H. Sugimoto, and T. Sugimoto, Public Health 114 (2000) 249-253; J. Dieterich, V. Cayol, and P. Okubo, Nature 408 (2000) 457-460; D. M. Pyle, Nature 393 (1998) 415-417.
* [10] B. T. Grenfell, O. N. Bjornstad, and J.
Kappey, Nature 414 (2001) 716-723; J. A. N. Filipe and C. A. Gilligan, Phys. Rev. E 67 (2003) 021906; R. Huerta and L. S. Tsimring, Phys. Rev. E 66 (2002) 056115; M. Boguna and R. Pastor-Satorras, Phys. Rev. E 66 (2002) 047104; D. Volchenkov, L. Volchenkova, and Ph. Blanchard, Phys. Rev. E 66 (2002) 046137; M. E. J. Newman, S. Forrest, and J. Balthrop, Phys. Rev. E 66 (2002) 035101; M. E. J. Newman, Phys. Rev. E 66 (2002) 016128; M. Girvan, D. S. Callaway, M. E. J. Newman, and S. H. Strogatz, Phys. Rev. E 65 (2002) 031915; R. M. May and A. L. Lloyd, Phys. Rev. E 64 (2001) 066112; H. C. Tuckwell, L. Toubiana, and J.-F. Vibert, Phys. Rev. E 64 (2001) 041918; R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86 (2001) 3200-3203; R. Pastor-Satorras and A. Vespignani, Phys. Rev. E 63 (2001) 066117; C. Moore and M. E. J. Newman, Phys. Rev. E 61 (2000) 5678-5682; M. J. Keeling, M. E. J. Woolhouse, R. M. May, G. Davies, and B. T. Grenfell, Nature 421 (2003) 136-142. * [11] E. C. Zeeman (ed.), Catastrophe Theory, Addison-Wesley, London, 1977; V. I. Arnold, Catastrophe Theory, Springer, Berlin, 1986; R. Gilmore and W. R. Knorr, Catastrophe Theory for Scientists and Engineers, Dover, New York,1993; T. Poston and I. Stewart, Catastrophe Theory and Its Applications, Dover, New York, 1996; V. I. Arnold, V. S. Afrajmovich, Y. S. Il'Yashenko, and L. P. Shilnikov, Bifurcation Theory and Catastrophe Theory, Springer, Berlin, 1999; C. Conti and S. Trillo, Phys. Rev. E 64 (2001) 036617; J. A. Gaite, Phys. Rev. A 41 (1990) 5320-5324; R. Gilmore, Phys. Rev. A 20 (1979) 2510-2515. * [12] H. E. Stanley, Introduction to Phase Transitions and Critical Phenomena, Oxford University, Oxford, 1971; S. K. Ma, Modern Theory of Critical Phenomena, Benjamin, New York, 1976; P. C. Hohenberg and B. I. Halperin, Rev. Mod. Phys. 49 (1977) 435-479; C. Domb and M. S. Green (eds.), Phase Transitions and Critical Phenomena, Vols. 1-6, Academic Press, New York, 1972-1976; C. Domb and J. L. Lebowitz (eds.), Phase Transitions and Critical Phenomena, Vols. 7-19, Academic Press, New York, 1983-2000. * [13] D. Stauffer and A. Aharony, Introduction to Percolation Theory, Taylor & Francis, London, 1994; A. Rodrigues and D. Tondeur (eds.), Percolation Processes: Theory and Applications, Kluwer Acadamic, Dordrecht, 1981; G. Grimmett, Percolation, Springer, Berlin, 1999. * [14] R. Albert and A.-L. Barabasi, Rev. Mod. Phys. 74 (2002) 47-97; R. Albert, H. Jeong, and A.-L. Barabasi, Nature 406 (2000) 378-382; J. E. Cohen and P. Horowitz, Nature 352 (1991) 699-701. * [15] M. Falk, Laws of Small Numbers: Extreme and Rare Events, Birkhauser, Basel, 1994; J. Nott, Extreme Events: Reconstruction from Natural Records and Hazard Risk Assessment, Cambridge University, Cambridge, in print. * [16] N. G. van Kampen, Stochastic Processes in Physics and Chemistry, North-Holland, Amsterdam, 1981; C. W. Gardiner, Handbook of Stochastic Methods, Springer, Berlin, 1985; H. Risken, The Fokker-Planck Equation, Springer, New York, 1989. * [17] W. Horsthemke and R. Lefever, Noise-Induced Transitions, Springer, Berlin, 1984; I. Prigogine, in: Evolution and Consciousness. Human Systems in Transition, edited by E. Jantsch and C. H. Waddington, Addison-Wesley, Reading, MA, 1976; D. Helbing, I. J. Farkas, and T. Vicsek, Phys. Rev. Lett. 84 (2000) 1240-1243. * [18] G. Woo, The Mathematics of Natural Catastrophes, World Scientific, Singapore, 1999; A. Bunde, J. Kropp, and H. J. Schellnhuber (eds.) 
The Science of Disasters: Climate Disruptions, Heart Attacks and Market Crashes, Springer, Berlin, 2002.
* [19] E. Zwicker, Simulation und Analyse dynamischer Systeme in den Wirtschafts- und Sozialwissenschaften, deGruyter, Berlin, 1981; C. M. Close, D. K. Frederick, and J. C. Newell, Modeling and Analysis of Dynamic Systems, Wiley, New York, 2001; D. G. Luenberger, Introduction to Dynamic Systems: Theory, Models, and Applications, Wiley, New York, 1979; A. Katok and B. Hasselblatt, Introduction to the Modern Theory of Dynamical Systems, Cambridge University, Cambridge, 1997.
* [20] M. Markus and B. Hess, Nature 347 (1990) 56-58; E. Meron, Phys. Rep. 218 (1992) 1-66; I. Farkas, D. Helbing, and T. Vicsek, Nature 419 (2002) 131-132.
* [21] D. Helbing, preprint [http://arXiv.org/abs/cond-mat/0301204](http://arXiv.org/abs/cond-mat/0301204).
* [22] D. Helbing, in: Nonlinear Dynamics of Production Systems, edited by G. Radons and R. Neugebauer, Wiley, New York, 2003, in print.
* [23] D. Helbing, Phys. Lett. A 212 (1994) 130-137; D. Helbing and R. Molini, Phys. Lett. A 212 (1995) 130-137.
* [24] Bericht der Unabhängigen Kommission der Sächsischen Staatsregierung, Flutkatastrophe 2002, see www.sachsen.de/de/bf/hochwasser/programme/download/Kirchbach_Bericht.pdf.
* [25] T. Nagatani and D. Helbing, Phys. Rev. E (2003), submitted.
* [26] D. Helbing, T. Platkowski, and P. Seba, preprint (2003).
* [27] E. Mosekilde and E. R. Larsen, System Dynamics Review 4(1/2) (1988) 131-147; C. Daganzo, A Theory of Supply Chains, Springer, New York, 2003; J. D. Sterman, Business Dynamics, McGraw-Hill, Boston, 2000.
* [28] W. Zelinsky and L. A. Kosinski, The Emergency Evacuation of Cities, Rowman & Littlefield, Savage, 1991; H. T. Christen and P. M. Maniscalco, The EMS Incident Management System: Operations for Mass Casualty and High Impact Incidents, Prentice Hall, Upper Saddle River, 1998; K. N. Myers, Contingency Planning for Disasters: Protecting Vital Facilities and Critical Operations, Wiley, New York, 1999; W. L. Waugh, Living with Hazards, Dealing with Disasters: An Introduction to Emergency Management, M. E. Sharpe, New York, 2000; D. Alexander, Principles of Emergency Planning and Management, Oxford University Press, New York, 2002; P. M. Maniscalco and H. T. Christen, Understanding Terrorism and Managing the Consequences, Prentice Hall, Upper Saddle River, 2001; P. A. Erickson, Emergency Response Planning: For Corporate and Municipal Managers, Academic Press, Harcourt, 1999; D. A. Moss, When All Else Fails: Government As the Ultimate Risk Manager, Harvard University Press, Cambridge, Mass, 2002; R. W. Greene, Confronting Catastrophe: A GIS Handbook, ESRI, Redlands, Cal, 2002; D. R. Godschalk (ed.), Natural Hazard Mitigation: Recasting Disaster Policy and Planning, Island Press, Washington D.C., 1999; G. El Mahdy, Disaster Management in Telecommunications, Broadcasting and Computer Systems, Wiley, Chichester, New York, 2001; S. A. Marston, Terminal Disasters: Computer Applications in Emergency Management, Institute of Behavioral Science, University of Colorado, Boulder, CO, 1986; R. Shaw and L. Walley, Disaster Management, Butterworth-Heinemann, Amsterdam, Boston, MA, 2002; G. D. Haddow and J. A. Bullock, Introduction to Emergency Management, Butterworth-Heinemann, Amsterdam, Boston, MA, 2004; E. Huls and H.-J. Oestern (eds.), Die ICE-Katastrophe von Eschede. Erfahrungen und Lehren. Eine interdisziplinäre Analyse, Springer, Berlin, 1999.
* [29] M. E. J. Newman, M. Girvan, and J. D. Farmer, Phys. Rev.
Lett. 89 (2002) 028301; M. G. Shnirman and E. M. Blanter, Phys. Rev. E 60 (1999) 5111-5120; E. M. Blanter and M. G. Shnirman, Phys. Rev. E 55 (1997) 6397-6403; * [30] S. H. Clearwater and B. A. Huberman, Science 254 (1991) 1181-1183; B. A. Huberman and T. Hogg, Computational and Mathematical Organization Theory 1 (1995) 73-92; B. A. Huberman and C. H. Loch, Journal of Organizational Computing 6 (1996) 109-130; C. Loch, B. A. Huberman, and S. Stout, Journal of Economic Behavior and Organization 43(1) (2000) 35-55; B. A. Huberman, Computational and Mathematical Organization Theory 7 (2001) 145-153. * [31] G. O. Barney (ed.), Global 2000: The Report to the President: Entering the Twenty-First Century, Seven Locks Press, Cabin John, MD, 1992; The Club of Rome, see www.clubofrome.org/ * [32] H. Theil, Economic Forecasts and Policy, North Holland Publishing Co., Amsterdam, 1961. * [33] L. A. Zadeh, Inform. Contr. 8 (1965) 338-353; R. E. Bellman, and L. A. Zadeh, Management Sci. 17 (1970) 141-164. * [34] H. Haken, Information and Self-Organization, Springer, Berlin, 1988; G. Deco and B. Schurmann, Information Dynamics: Foundations and Applications, Springer, Berlin, 2000; K. Kornwachs and K. Jacoby (eds.) Information. New Questions to a Multidisciplinary Concept, Akademie, Berlin, 1996. * [35] D. Helbing, M. Schonhof, and D. Kern, New Journal of Physics 4 (2002) 33.1-33.16. * [36] H. Haken, Synergetics, Springer, Berlin, 1977; G. Nicolis and I. Prigogine, Self-Organization in Nonequilibrium Systems. From Dissipative Structures to Order through Fluctuations, Wiley, New York, 1977; W. Weidlich and G. Haag, Concepts and Models of a Quantitative Sociology. The Dynamics of Interacting Populations, Springer, Berlin, 1983; J. M. Pasteels and J. L. Deneubourg (eds.), From Individual to Collective Behavior in Social Insects, Birkhauser, Basel, 1987; R. Feistel and W. Ebeling, Evolution of Complex Systems. Self-Organization, Entropy and Development, Kluwer, Dordrecht, 1989; W. Weidlich, Physics Reports 204 (1991) 1-163; S. Kai (ed.), Pattern Formation in Complex Dissipative Systems, World Scientific, Singapore, 1992; D. L. DeAngelis and L. J. Gross (eds.), Individual-Based Models and Approaches in Ecology: Populations, Communities, and Ecosystems, Chapman and Hall, New York, 1992; D. Helbing, Quantitative Sociodynamics, Kluwer Academic, Dordrecht, 1995; W. Weidlich, Sociodynamics. A Systematic Approach to Mathematical Modelling in the Social Sciences, Harwood Academic, Amsterdam, 2000. * [37] A. Pikovsky, M. Rosenblum, and J. Kurths, Synchronization. A universal concept in nonlinear sciences., Cambridge University Press, 2001; S. Boccaletti, J. Kurths, G. Osipov, D. L. Valladares, and C. H. Zhou, Phys. Rep. 366 (2002) 1-101. * [38] D. Wolpert, K. Wheeler, and K. Tumer, Europhys. Lett. 49 (2000) 708-714; D. H. Wolpert and K. Tumer, in: Handbook of Agent Technology, ed J. M. Bradshaw, AAAI Press/MIT Press, 1999; H. M. Botee and E. Bonabeau, Adv. Compl. Syst. 1 (1998) 149-159; D. H. Wolpert, S. Kirshner, C. J. Merz, and K. Tumer, Adaptivity in agent-based routing for data networks, ACM Press, New York, 2000; C. R. Kube and E. Bonabeau, Robotics and Autonomous Systems 30 (2000) 85-101; P. Molnar and J. Starke, IEEE Transactions on Systems, Men and Cybernetics B 31(3) (2001) 433-436. * [39] D. Helbing and T. Vicsek, New Journal of Physics 1 (1999) 13.1-13.17.
In this paper we present a versatile method for the investigation of interaction networks and show how to use it to assess the effects of indirect interactions and feedback loops. The method allows one to evaluate the impact of optimization measures or failures on the system. Here, we apply it to the investigation of catastrophes, in particular to the temporal development of disasters (catastrophe dynamics). The mathematical methods are related to the master equation, which allows the application of well-known solution methods. We also indicate connections of disaster management with excitable media and supply networks. This facilitates studying the effects of measures taken by the emergency management or the local operation units. With a fictitious, but more or less realistic example of a spreading epidemic disease or a wave of influenza, we illustrate how this method can, in principle, provide decision support to the emergency management during such a disaster. Similar considerations may help to assess measures to fight the SARS epidemic, although immunization is presently not possible.

keywords: Master equation, interaction network, excitable media, supply chain management, robustness of graphs, causality network, catastrophe dynamics, disaster preparedness
# The Bursting Behavior of 4U 1728-34: Parameters of a Neutron Star and Geometry of a NS-disk system

Nickolai Shaposhnikov1, Lev Titarchuk12 and Frank Haberl3

Footnote 1: affiliation: George Mason University, Center for Earth Observing and Space Research, Fairfax, VA 22030; [email protected]; [email protected]

Footnote 2: affiliation: NASA/ Goddard Space Flight Center, Greenbelt MD 20771, USA; [email protected]

Footnote 3: affiliation: Max-Planck-Institut für extraterrestrische Physik, Giessenbachstrasse, 85748 Garching, Germany, [email protected]

###### accretion, accretion disks--stars: fundamental parameters--stars: individual (4U 1728-34)--X-ray: bursts

## 1 Introduction

A low mass X-ray binary (LMXB) system consists of a neutron star (NS) which accretes matter through Roche lobe overflow from an evolved low-mass secondary star. 4U 1728-34 has been recognized as a classical LMXB because it exhibits a wide range of observational characteristics which are attributed to LMXBs [see e.g. Lewin, van Paradijs & Taam (1993)]. In particular, it exhibits regular thermonuclear explosions of accreted matter on the NS surface [Type I X-ray bursts; see Strohmayer & Bildsten (2003) for the latest review]. High-frequency quasi-periodic oscillations (kHz QPOs) were revealed in the persistent emission [Ford & van der Klis (1998); Mendez & van der Klis (1999)], and a 363 Hz oscillation was found in the burst emission from 4U 1728-34 [Strohmayer et al. (1998), Franco (2001), van Straaten et al. (2001), hereafter VS01] using Fourier analysis of the high time-resolution RXTE data. So far, no optical counterpart for this X-ray source has been found. 4U 1728-34 is believed to be located several kiloparsecs from the Earth, but no accurate estimate of the distance to the source is currently available. The theory of X-ray spectral formation during the expansion and contraction stages of the bursts was developed in Titarchuk (1994) and Shaposhnikov & Titarchuk (2002), hereafter T94 and ST02 respectively. This theory was first applied to EXOSAT data in Haberl & Titarchuk (1995), hereafter HT95, for the LMXBs 4U 1705-44 and 4U 1820-30. In Titarchuk & Shaposhnikov (2002), hereafter TS02, three bursts from Cyg X-2 were analyzed. In this work we employ the methodology developed in TS02 to analyze a set of 26 bursts from 4U 1728-34, previously analyzed in VS01 in a search for burst oscillations. Compared with Cyg X-2 (TS02), the 4U 1728-34 burst data have the advantage of high-quality counting statistics as well as a larger number of burst events. A brief description of the data used in the analysis is given in § 2. We present the model and the results of its application to the burst data of 4U 1728-34 in § 3. Specifically, we obtain the dependence of the NS mass on the radius as error contours, calculated for a set of distances to the system taken from a reasonable interval. In § 4 we offer an evolution scenario for the NS - accretion disk geometry, which can explain the existing controversy between the Eddington limit for the peak flux and the flux behavior during bursts with radial expansion (Strohmayer & Bildsten, 2003). In § 4 we also present estimates for the inclination angle of the system. We discuss our results and come to conclusions in § 5.

## 2 Observations

We analyzed the data collected by the Proportional Counter Array (PCA; Jahoda et al., 1996), the main instrument on board _RXTE_. A generous amount (\\(>1100\\) ks) of _RXTE_ observational time has been devoted to 4U 1728-34.
More than 70 bursts were detected. For our analysis we selected a subset of bursts based on statistical equivalence and homogeneity of the PCA data configuration, namely, when all five detectors of the PCA were operational and a detailed spectral analysis on a subsecond time scale was possible. The selected events occurred during three periods: 15 February - 1 March 1996 (Proposal ID 10073), 19 September - 1 October 1997 (Proposal ID 20083), and 28 February - 10 June 1999 (Proposal ID 40027). A total of 26 Type I bursts were selected. Due to its exotic nature we excluded the second burst detected on 26 September 1997 (Observation ID 20083-01-04-00, burst 19 according to the VS01 numbering).

## 3 Data analysis and results

We extract spectral slices from the Burst Catcher Mode data for consecutive 0.125 s time intervals for each burst. We obtain the spectrum of the persistent emission using a 300-500 second time interval prior to a particular burst, and we input the resulting spectrum as background to isolate the burst radiation component. We use a fixed hydrogen column of \\(N_{H}=1.6\\times 10^{22}\\) cm\\({}^{-2}\\), provided by HEASARC4, to model Galactic absorption. Dead-time corrections are applied to all spectra. We fit the burst emission component using a blackbody model. This is justified because the X-ray burst spectrum deviates from the blackbody-like shape only in the soft part, \\(\\lesssim 1\\) keV (T94 and ST02). The quality of the fits is quite good for all spectra except for the particular contraction episodes when the luminosity is very close to the Eddington luminosity and the photospheric radius changes rapidly along with the outgoing spectrum shape. We calculate the model flux between 0.01 and 100 keV. Errors for parameter estimates from the spectral fits are calculated for the 68% (1\\(\\sigma\\)) confidence level. For the interpretation of the spectral fit results we utilize the theoretical model for the color temperature of the spectra from the burst cooling phase. The underlying formalism is developed in T94 and TS02. Here we present the final formula for the color temperature \\(kT_{\\infty}\\) (see Eqs. 7-8 in TS02)

Footnote 4: [http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl](http://heasarc.gsfc.nasa.gov/cgi-bin/Tools/w3nh/w3nh.pl)

\\[kT_{\\infty}=2.1\\,T_{h}\\{lm[(2-Y_{He})(z+1)^{3}r_{6}^{2}]^{-1}\\}^{1/4}{\\rm keV} \\tag{1}\\] where \\(m\\) is the NS mass in units of the solar mass, \\(r_{6}\\) is the NS radius in units of 10 km, \\(Y_{He}\\) is the helium abundance, and \\(l=L/L_{Edd}\\) is the dimensionless luminosity in units of the Eddington luminosity. \\(T_{h}\\) is the color (hardening) factor, which depends on \\(l\\) and \\(Y_{He}\\) (TS02). The parameters of the model are \\(m\\), \\(r_{6}\\), \\(Y_{He}\\) and \\(d_{10}\\), the distance to the object in units of 10 kpc. The dimensionless luminosity (Eddington ratio) is expressed by \\[l=0.476\\,\\xi_{b}\\,d_{10}^{2}F_{8}(2-Y_{He})(z+1)/m, \\tag{2}\\] where \\(\\xi_{b}\\) is an anisotropy factor and \\(F_{8}=F/(10^{-8}\\) erg cm\\({}^{-2}\\) s\\({}^{-1})\\) is the observed bolometric flux. Equations (1) and (2) describe the functional dependence of the observed \\(kT_{\\infty}\\) on the observed flux \\(F_{8}\\) for the set of input parameters \\(m\\), \\(r_{6}\\), \\(Y_{He}\\) and \\(d_{10}\\). In the next section we put forth the NS-accretion disk geometry scenario and infer the behavior of \\(\\xi_{b}\\) during a burst with radial expansion.
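In numerical form, Eqs. (1) and (2) define a family of model curves \\(kT_{\\infty}(F_{8})\\). A sketch (our illustration, not the fitting code used in the analysis; the hardening factor \\(T_{h}(l,Y_{He})\\) must be supplied from the TS02 calculations, and the Schwarzschild form of \\(1+z\\) is our assumption):

```python
# Sketch of Eqs. (1)-(2): model color temperature kT_inf [keV] vs. flux F_8.
import numpy as np

def color_temperature(F8, m, r6, Y_He, d10, xi_b, T_h):
    # 1+z for a Schwarzschild metric with R_NS = 10*r6 km (2GM_sun/c^2 = 2.95 km)
    zp1 = 1.0 / np.sqrt(1.0 - 0.295 * m / r6)
    l = 0.476 * xi_b * d10**2 * F8 * (2.0 - Y_He) * zp1 / m          # Eq. (2)
    Th = T_h(l, Y_He)                  # hardening factor, tabulated in TS02
    return 2.1 * Th * (l * m / ((2.0 - Y_He) * zp1**3 * r6**2))**0.25  # Eq. (1)
```

In the occultation scenario described next, \\(\\xi_{b}\\) switches between 1 and \\(\\xi_{b}^{*}\\) depending on the flux level.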
In this proposed scenario, the transition from the expansion stage to the decay stage is described graphically in the upper panel of Figure 1. At the beginning of the decay stage, immediately after the expansion stage ends, the entire star is exposed to the observer. No correction due to the system geometry is needed, i.e. \\(\\xi_{b}=1\\) in this case. Then, at some moment \\(t=t^{*}\\), the accretion disk returns to the star surface and a certain part of the NS is obscured by the disk for the observer. The anisotropy factor \\(\\xi_{b}^{*}\\) that quantitatively takes this occultation effect into account is greater than unity in this case. In terms of the functional dependence of \\(kT_{\\infty}\\) upon \\(F_{8}\\), the flux domain consists of three intervals. For \\(F_{8}>\\xi_{b}^{*}F_{8}^{*}\\) the entire star is open and \\(\\xi_{b}=1\\). The flux level \\(\\xi_{b}^{*}F_{8}^{*}\\) from the NS surface is related to the star-disk position at which the lower NS hemisphere starts to get covered by the disk. Through the rest of the decay stage the temperature should be calculated using formulae (1) and (2) with \\(\\xi_{b}=\\xi_{b}^{*}>1\\). Thus the model formulated in this way acquires two more parameters: the anisotropy factor during occultation, \\(\\xi_{b}^{*}\\), and the dimensionless flux \\(F_{8}^{*}\\) at the moment when the occultation occurs. Our analysis shows that constant values of these parameters for all bursts with radial expansion are consistent with the observations. We use \\(\\xi_{b}^{*}=1.2\\), which is estimated using the expansion stage evolution (see §4). The best-fit value \\(F_{8}^{*}=5.7\\) does not significantly depend on the distance either. We combine fluxes and temperatures for all 26 bursts and fit the model to the entire data set in the range \\(1.0<F_{8}<9.5\\), which approximately corresponds to \\(0.1<l<0.9\\). We use the lower limit to exclude the data points with large errors. We put the upper limit on \\(l\\) because of the restricted validity of the model for \\(l\\sim 1\\), where expansion of the atmosphere can occur. We use a set of values for the distance between 4 and 5 kpc. We find that our analytical model gives a statistically acceptable fit only when \\(Y_{He}>0.9\\). In fact, for lower \\(Y_{He}\\) the slope of the model color temperature dependence on \\(F_{8}\\) is steeper than that dictated by the data (see Fig. 2). The model with a hydrogen-rich gas composition fails to describe the data, particularly because of the strong dependence of the hardening factor on the Eddington ratio \\(l\\). A given change in flux represents a larger change in the Eddington ratio in an \\(H\\)-atmosphere than in an \\(He\\)-atmosphere because the value of \\(L_{Edd}\\) is smaller for an \\(H\\)-atmosphere. We consider this fact a strong argument that the NS atmosphere in 4U 1728-34 is helium dominated, and we assume \\(Y_{He}=1.0\\) throughout the subsequent analysis. In Figure 2 we present the data and the best-fit model for the case of \\(d_{10}=0.45\\), for which we obtain \\(M_{NS}=1.25^{+0.06}_{-0.04}\\,M_{\\odot}\\) and \\(R_{NS}=9.00^{+0.17}_{-0.28}\\) km. The best-fit values of the NS mass and radius and the error contours for the 68%, 90% and 99% confidence levels are obtained for each fit; they are presented in Table 1 and Figure 3.
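Conceptually, the fit is a least-squares comparison of the measured \\((F_{8},kT_{\\infty})\\) pairs with Eqs. (1)-(2). The sketch below, which reuses `color_temperature` from the previous snippet, shows one hedged way to organize it: the piecewise anisotropy factor encodes the occultation scenario with the quoted values \\(\\xi_{b}^{*}=1.2\\) and \\(F_{8}^{*}=5.7\\), and a brute-force grid scan stands in for the actual contour computation.

```python
import numpy as np

def anisotropy_factor(f8, f8_star=5.7, xi_star=1.2):
    """Piecewise xi_b(F_8): the whole star is open above xi_star*F8_star,
    the lower hemisphere is occulted (xi_b = xi_star) below it."""
    return np.where(f8 > xi_star * f8_star, 1.0, xi_star)

def chi2(m, r6, f8, kt_obs, kt_err, y_he=1.0, d10=0.45):
    """Chi-square of the color-temperature model against the burst data."""
    kt_model = color_temperature(f8, m, r6, y_he, d10,
                                 xi_b=anisotropy_factor(f8))
    return np.sum(((kt_obs - kt_model) / kt_err) ** 2)

def grid_fit(f8, kt_obs, kt_err):
    """Coarse (m, r6) grid scan; joint two-parameter contours like those in
    Fig. 3 correspond to chi2 <= chi2_min + {2.30, 4.61, 9.21}."""
    masses = np.linspace(1.0, 1.8, 81)
    radii6 = np.linspace(0.7, 1.1, 41)
    table = np.array([[chi2(m, r, f8, kt_obs, kt_err)
                       for r in radii6] for m in masses])
    i, j = np.unravel_index(np.argmin(table), table.shape)
    return masses[i], radii6[j], table
```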
The number of degrees of freedom is 1418, and thus acceptable fits must satisfy the condition \\(\\chi^{2}_{red}=\\chi^{2}/1418\\leq 1.0\\). This condition is not satisfied for distances higher than 5 kpc, for which \\(\\chi^{2}_{red}\\) grows very rapidly. This fact suggests that the probability for the NS mass to be higher than 1.6 \\(M_{\\odot}\\) is very low. The dashed line in Figure 3 presents the dependence of the inner disk radius \\(R_{in}\\) on the NS mass \\(m\\), derived for 4U 1728-34 using the transition layer model (TLM) for QPOs detected in the persistent state from LMXBs (Titarchuk & Osherovich, 1999; Li et al., 1999): \\(R_{in}=9\\times m^{1/3}\\) km. Our values of the NS radius are in good agreement with the TLM. Based both on this fact and on the statistical behavior of the model we conclude _that 4.5 - 5.0 kpc is the most probable interval for the distance to the source, which translates into the ranges 8.7-9.7 km and 1.2-1.6 \\(M_{\\odot}\\) for \\(R_{NS}\\) and \\(M_{NS}\\) respectively._

## 4 Dynamic evolution of the system geometry during the expansion stage

We investigate the temporal evolution of the burst atmosphere photospheric radius for each individual burst in a manner similar to TS02 and HT95. Choosing particular values of the distance and NS mass (obtained from the model fit) we find the radius [i.e. solve equation (7) of TS02] for each spectral slice. The evolution of the NS photospheric radius during the burst event is shown in Figure 1 for the case of \\(d_{10}=0.45\\). We present radius (diamonds) and observed flux (empty circles) values versus time for burst 9, according to the VS01 numbering convention (observation ID 10073-01-06-00). The data were rebinned with 0.25 second time resolution for presentation purposes. Filled circles represent the red-shift-corrected flux for each data point. This red-shift recalculated flux is the flux which would be detected by an observer situated on the NS photosphere. Restoration of the red-shift corrected flux reveals distinctive features which are barely seen in the observed flux behavior. After the initial rise of the burst, the flux levels off and stays constant for more than a second while the radius increases gradually. After the radius reaches its maximum, a second rise in the flux occurs. The flux reaches its maximum value when the radius begins to fall quickly, in contrast to the initial flux plateau. After that, the flux decays exponentially, indicating the end of the expansion stage, and the radius levels off. The above analysis clearly indicates that the flux emitted in the direction of the Earth (when measured locally at the NS surface) is not constant throughout the expansion stage. This behavior of the burst peak flux was found to be common for many bursters [see VS01, Galloway et al. (2003)]. If the system geometry remains unchanged, the mentioned flux behavior is in contradiction with the Eddington limit for the radiation power from a stellar atmosphere. We argue that the expanded burst envelope interacts strongly with the inner accretion disk. The evolution of the system geometry through the burst event is displayed in the upper part of Figure 1. Before the burst, the accretion disk extends down to the surface of the NS (stage a), _covering the lower part of the NS, which is then not exposed to the Earth observer_. Then the burst starts and the atmosphere expands (stage b). At this moment, the inner part of the disk is swept away by the burst radiation pressure in excess of the Eddington flux. This stage corresponds to the initial plateau of the red-shift corrected flux versus time.
After the photosphere begins to contract, the second rise in the red-shift corrected flux starts, effectively indicating that _the lower NS hemisphere appears from behind the accretion disk_. Indeed, the free fall velocity is much higher than the radial propagation velocity component in the disk. After the touchdown of the burst envelope, the NS-disk configuration corresponds to case (c). We assume that the red-shift corrected flux obtained for geometry (b) is the critical (upper limit) hemisphere flux. Obviously, the red-shift corrections depend on the NS mass and radius. Using the standard disk theory (Shakura & Sunyaev, 1973) one can estimate the expansion stage duration required for sufficient disk material evacuation as

\\[{\\cal T}_{Exp}\\approx 10^{-6}m/(\\alpha\\dot{m}^{2}\\varepsilon)\\ \\ s, \\tag{3}\\]

where \\(\\alpha\\) is the efficiency of the radial momentum transfer, \\(\\dot{m}\\) is the mass accretion rate in units of the critical mass flux value and \\(\\varepsilon\\approx 0.03-0.05\\) is the fraction of the burst flux transformed into the potential energy of the ambient gas (see e.g. ST02). For burst sources the persistent mass accretion rate in the disk is believed to be \\(\\dot{m}=0.1-1\\) [see e.g. Lewin, van Paradijs & Taam (1993)]. Values of \\(\\alpha\\sim 0.1\\) are widely used in the astrophysical community for LMXBs. Under these assumptions it takes only a small fraction of a second (\\({\\cal T}_{Exp}\\ll 0.1\\) s) to push the inner disk edge out, while the observed expansion episodes of strong bursts from 4U 1728-34 usually last more than a second. This simple estimate suggests that the expanded NS atmosphere effectively pushes the accretion disk outward. As long as the total NS luminosity during the expansion stage (the Eddington luminosity) is constant, the observed radiation flux is higher in geometry (c) than in (a) and (b). Assuming that the NS radiates at the Eddington limit, the ratio of fluxes detected from the direction at inclination angle \\(i\\) from the normal to the accretion disk in geometries (b) and (c) is

\\[\\tilde{F}_{b}/\\tilde{F}_{c}=H(i)/H(0),\\ \\ \\ \\ H(i)=\\int_{i-\\pi/2}^{\\pi/2}\\cos \\omega d\\omega\\int_{-\\pi/2}^{\\pi/2}I(\\mu)\\cos^{2}\\psi d\\psi, \\tag{4}\\]

where \\(\\omega\\) and \\(\\psi\\) are starcentric coordinates, \\(\\mu=\\cos\\omega\\cos\\psi\\) and \\(I(\\mu)\\) describes the angular intensity distribution law (see Sobolev 1975 for details). The tilde denotes the fact that the value of the flux was corrected for redshift. For the burst presented in Figure 1, we have \\(\\tilde{F}_{b}\\simeq 1.3\\times 10^{-7}\\) erg cm\\({}^{-2}\\) s\\({}^{-1}\\) and \\(\\tilde{F}_{c}\\simeq 1.55\\times 10^{-7}\\) erg cm\\({}^{-2}\\) s\\({}^{-1}\\). Assuming \\(I(\\mu)=\\) const (the Lambert law), we obtain \\(\\tilde{F}_{b}/\\tilde{F}_{c}=(1+\\cos i)/2\\), which leads to an estimate of the inclination angle \\(i\\sim 50^{\\circ}\\). This result is also close to that for the Chandrasekhar angular distribution, \\(I=I_{0}(1+2.06\\mu)\\). We can apply the value of \\(\\tilde{F}_{c}\\) for a quantitative assessment of the distance to the source. The peak of the red-shift corrected flux \\(\\tilde{F}_{c}\\) corresponds to the state when the burst atmosphere subsides on the NS surface. The anisotropy factor is estimated as \\(\\xi_{b}=\\tilde{F}_{c}/\\tilde{F}_{b}\\simeq 1.2\\). We use this value as the anisotropy factor \\(\\xi_{b}^{*}\\) in the a-geometry.
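Both numerical estimates of this section are one-liners; the sketch below reproduces them under the stated assumptions (the Lambert-law limit of Eq. (4), and the fiducial values of \\(\\alpha\\), \\(\\dot{m}\\) and \\(\\varepsilon\\) quoted for Eq. (3)).

```python
import math

def inclination_lambert(f_b, f_c):
    """Invert F_b/F_c = (1 + cos i)/2, the Lambert-law limit of Eq. (4)."""
    return math.degrees(math.acos(2.0 * f_b / f_c - 1.0))

def t_exp(m=1.25, alpha=0.1, mdot=0.5, eps=0.04):
    """Disk evacuation time of Eq. (3), in seconds."""
    return 1e-6 * m / (alpha * mdot**2 * eps)

print(inclination_lambert(1.3e-7, 1.55e-7))  # ~47 deg, consistent with i ~ 50 deg
print(t_exp())                               # ~1e-3 s << the ~1 s expansion episodes
```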
## 5 Discussion and Conclusions

In this _Letter_ we offer a sophisticated analysis of the burst expansion events. Accounting for the general relativistic (GR) effects reveals the dynamics and geometry of the NS-disk system and consistently explains the controversy between theory and the behavior observed during the expansion episodes of X-ray bursts. As a direct outcome of the proposed geometric scenario we obtain the distance and estimate the inclination angle of the system. The derived mass-radius relation depends on the assumed anisotropy. For example, without taking anisotropy into account, i.e. \\(\\xi_{b}=1\\) (the conventional approach), we obtain \\(M/M_{\\odot}=1.29\\) and \\(r=7.8\\) km for \\(d=4.5\\) kpc, while the accurate anisotropy corrections implemented in §3 result in \\(M/M_{\\odot}=1.25\\) and \\(r=9.0\\) km. The statistical behavior of our model clearly rules out values of the helium abundance lower than 0.9. We could not find a set of model parameters, including the distance, that would give an acceptable value of \\(\\chi^{2}\\) for \\(Y_{He}<0.9\\). Lower values of the helium abundance give a much steeper temperature-versus-flux functional dependence than is dictated by the data. Furthermore, in earlier observations of 4U 1728-34 Basinska et al. (1984) found that helium was probably the main source of nuclear fuel for the bursts and that any contribution from hydrogen was small. Our data analysis confirms the results of Basinska et al. and gives another argument for a helium-rich atmosphere in 4U 1728-34. Our results also favor the soft EOSs [Baym & Pethick (1979)]. We conclude that (1) the application of our analytical model to the data leads to the determination of the NS mass and radius as a function of the distance to the system; for the range of allowed distances, 4.5-5.0 kpc, we obtain rather narrow constraints on the NS radius, in the 8.7-9.7 km range, and the wider interval 1.2-1.6 \\(M_{\\odot}\\) for the NS mass (see also the discussion in TS02 and Strohmayer & Bildsten 2003); (2) we put forth a consistent evolution scenario for the NS - accretion disk geometry, which explains the variation of the peak flux during the radial expansion stage; (3) our geometrical model enables us to estimate the inclination angle of the system, \\(i\\sim 50^{\\circ}\\), with respect to the Earth observer. We acknowledge the fruitful and constructive discussion with the referee.

## References

* () Baym, G., & Pethick, C. 1979, Ann. Rev. Astr. Ap., 17, 415
* () Basinska, E. M., et al. 1984, ApJ, 281, 337
* () Di Salvo, T., Iaria, R., Burderi, L., & Robba, N. R. 2000, ApJ, 542, 1034
* () Ford, E. C., & van der Klis, M. 1998, ApJ, 506, L39
* () Franco, L. M. 2001, ApJ, 554, 340
* () Galloway, D. K., et al. 2003, ApJ, 590, 999
* () Haberl, F., & Titarchuk, L. 1995, A&A, 299, 414 (HT95)
* () Inogamov, N. A., & Sunyaev, R. A. 1999, Astron. Lett., 25, 269
* () Jahoda, K., et al. 1996, Proc. SPIE, 2808, 59
* () Lewin, W. H. G., van Paradijs, J., & Taam, R. E. 1993, Space Sci. Rev., 62, 223
* () Li, X., et al. 1999, Phys. Rev. Lett., 83, 3776
* () Mendez, M., & van der Klis, M. 1999, ApJ, 517, L51
* () Shakura, N. I., & Sunyaev, R. A. 1973, A&A, 24, 337
* () Shaposhnikov, N., & Titarchuk, L. 2002, ApJ, 567, 1077 (ST02)
* () Sobolev, V. V. 1975, Light Scattering in Atmospheres (Oxford: Pergamon)
* () Strohmayer, T. E., Zhang, W., Swank, J. H., & Lapidus, I. I. 1998, ApJ, 503, L147
* () Strohmayer, T. E., & Bildsten, L. 2003, in "X-ray Binaries", in press, astro-ph/0301544
* () Titarchuk, L. 1994, ApJ, 429, 330 (T94)
* () Titarchuk, L., & Osherovich, V. 1999, ApJ, 518, L95
* () Titarchuk, L., & Shaposhnikov, N. 2002, ApJ, 570, L25 (TS02)
* () van Straaten, S., van der Klis, M., Kuulkers, E., & Mendez, M. 2001, ApJ, 551, 2 (VS01)
Figure 1: Geometry evolution of the burst through the radial expansion. White circles show the observed bolometric flux. Filled circles show the GR-corrected flux. Photospheric radii inferred for the pure helium atmosphere and a distance of 4.5 kpc are shown by diamonds. The upper panel displays a cartoon diagram of the different states of the NS-accretion disk system. Dashed arrows point to the different stages of the burst.

Figure 2: Color blackbody temperature of the burst spectra for 26 bursts from 4U 1728-34 versus flux. The solid curve presents the analytical model fit with fixed \\(d_{10}=0.45\\) and \\(Y_{He}=1.0\\). The dashed line presents the same model with \\(Y_{He}=0\\).

Figure 3: Mass-radius contours obtained by the model fitting. The mass-radius relation for the soft EOS (Baym & Pethick, 1979) is presented by the solid line. The dotted curve shows the dependence of the accretion disk inner edge \\(R_{in}\\) obtained using the transition layer model (TLM).
We analyze a set of Type I X-ray bursts from the low mass X-ray binary 4U 1728-34, observed with the _Rossi X-ray Timing Explorer_ (RXTE). We infer the dependence of the neutron star (NS) mass and radius on the assumed distance to the system using an analytical model of X-ray burst spectral formation. The model behavior clearly indicates that the burster atmosphere is helium-dominated. Our results strongly favor a soft equation of state (EOS) of the NS in 4U 1728-34. We find that the distance to the source should be within the 4.5-5.0 kpc range. We obtain rather narrow constraints on the NS radius, in the 8.7-9.7 km range, and the interval 1.2-1.6 \\(M_{\\odot}\\) for the NS mass for this particular distance range. We uncover the temporal behavior of the red-shift corrected burst flux for the radial expansion episodes and we put forth a dynamical evolution scenario for the NS-accretion disk geometry, during which an expanded envelope affects the accretion disk and increases the area of the neutron star exposed to the Earth observer. In the framework of this scenario we provide a new method for the estimation of the inclination angle, which leads to a value of \\(\\sim 50^{\\circ}\\) for 4U 1728-34.
# Strangeness Production in Nuclear Matter and Expansion Dynamics

V.D. Toneev\\({}^{a,b}\\), E.G. Nikonov\\({}^{a,b}\\), B. Friman\\({}^{a}\\), W. Norenberg\\({}^{a}\\), K. Redlich\\({}^{c,d}\\)

\\({}^{a}\\) Gesellschaft fur Schwerionenforschung GSI, D-64291 Darmstadt, Germany

\\({}^{b}\\) Joint Institute for Nuclear Research, 141980 Dubna, Moscow Region, Russia

\\({}^{c}\\) Fakultat fur Physik, Universitat Bielefeld, D-33501 Bielefeld, Germany

\\({}^{d}\\) Institute of Theoretical Physics, University of Wroclaw, PL-50204 Wroclaw, Poland

## 1 Introduction

The quest for the deconfinement transition, the phase transition from a confined hadronic phase to a deconfined quark-gluon phase (the so-called quark-gluon plasma, QGP), remains a major challenge in strong interaction physics [1]. Over the past two decades a lot of effort has gone into the exploration of this transition and its possible manifestations in relativistic heavy ion collisions, in neutron stars as well as in the early universe. Relativistic heavy ion collisions offer a unique opportunity to reach states with temperatures and energy densities exceeding the critical values, \\(T_{c}\\sim 170~{}MeV\\) and \\(\\varepsilon_{c}\\sim 0.6~{}GeV/{\\rm fm}^{3}\\), characteristic of the deconfinement phase transition [2]. Thus, it is likely that color degrees of freedom play an important role already at SPS and RHIC energies [3]. Various signals for the formation of a quark-gluon plasma in such collisions have been discussed and probed in experiments [1, 4]. Enhanced production of strangeness relative to proton-proton and proton-nucleus collisions was one of the conjectured signals of quark-gluon plasma formation in heavy ion collisions [5]. The original idea behind the strangeness enhancement is that strange and antistrange quarks are easily created in a quark-gluon plasma, while in the hadronic phase strangeness production is suppressed. The dominant reaction in the plasma is \\(gg\\to s\\bar{s}\\). Furthermore, since the strange quark mass is not larger than \\(T_{c}\\), one expects the strange degrees of freedom to equilibrate in the quark-gluon plasma. Although a heavy ion collision at high energies is a highly non-equilibrium process, the hadron yields (including strange particles) measured in the energy range from SIS to RHIC [6, 7, 8, 9, 10] are remarkably well described in the thermal model assuming chemical equilibrium at freeze-out. This indicates that collective effects play an important role in the production of strangeness. On the other hand, elaborate microscopic transport models do not provide a quantitative explanation of the excitation functions for strange particles in this energy range. In the hadron string dynamics model [11] the \\(K^{+}/\\pi^{+}\\) ratio comes out too small around AGS energies, while in RQMD [12] the yield is overestimated at SIS and too small at SPS energies. The aim of this paper is to explore global effects of strangeness production in hot and dense nuclear matter within a collective approach. Our starting point is an equation of state (EoS) with a deconfinement phase transition. Since strangeness is conserved on the time scales relevant for heavy ion collisions, a strangeness chemical potential is introduced.
We examine various phenomenological models for the equation of state, which differ in the order of the deconfinement phase transition: a first order transition (the two-phase bag model), a crossover-type transition (the statistical mixed-phase model) and no phase transition (pure hadronic models). The consequences of strangeness separation and of the softening of the equation of state are discussed. Furthermore, the manifestation of the order of the deconfinement phase transition in the expansion dynamics and the bulk strangeness production is studied. The predictions obtained with the different equations of state are related to experimental excitation functions for relative strange particle abundances.

## 2 Modelling the equation of state of strongly interacting matter

The EoS of strongly interacting matter can in general be obtained by first-principles calculations within lattice gauge theory [13]. The thermodynamics and the order of the phase transition in QCD are rather well established for two and three light quark flavours in lattice calculations. However, the physically relevant situation of two light (u,d) and a heavy (s) quark is still not well described within the lattice approach. In particular, the existence of a phase transition and its order in 2+1 flavour QCD are not yet known. In addition, most of the lattice calculations are performed for vanishing net baryon number density. Only recently have first results on the EoS with non-zero baryon chemical potential been obtained on the lattice [14]. However, these studies have so far been performed with large quark masses, which distort the physical EoS. Thus, lattice results can still not be used directly in physical applications. Lacking lattice QCD results for the EoS at finite baryon density \\(n_{B}\\) with physically relevant values of the quark masses, a common approach is to construct a phenomenological equation of state for strongly interacting matter. This EoS should be constrained by existing lattice results and should also reproduce the two-phase structure of QCD. Here we construct different models for QCD thermodynamics and study their physical implications, with particular emphasis on strangeness production and evolution in heavy ion collisions. A recent analysis of the lattice EoS [15, 16] shows that in the low temperature phase, hadrons and resonances are the relevant degrees of freedom. The hadron resonance gas, with a mass spectrum modified to account for the unphysical values of the quark masses used in the lattice calculations, was shown to reproduce the bulk thermodynamic properties of QCD obtained on the lattice with different numbers of quark flavours, as well as at finite and vanishing net baryon density [15, 16]. Lattice calculations show that at very large temperatures the thermodynamical observables approach the Stefan-Boltzmann limit of an ideal gas of quarks and gluons, both at finite and at vanishing net baryon density. The remaining \\(\\sim 20\\%\\) discrepancy at \\(T>2T_{C}\\) is understood in terms of systematic contributions from a self-consistent implementation of quasiparticle masses in HTL-resummed perturbative QCD [17]. To describe the thermodynamics near the phase transition, additional model assumptions are necessary [18, 19]. From the above discussion it is clear that the most straightforward model for the EoS would be a non-interacting hadron resonance gas in the low temperature phase and an ideal quark-gluon plasma in the color-deconfined phase.
These phases are matched at the phase transition boundary by means of the Gibbs phase equilibrium conditions. By construction, this approach yields a first order phase transition. Such an EoS with strange degrees of freedom has frequently been used in the literature [20, 21, 22, 23, 24, 25] and is also a standard input in hydrodynamic simulations of heavy ion collisions [26, 27]. However, in order to obtain a reasonable phase diagram one has to include short-range repulsive interactions between the hadronic constituents. In general this can be realized by introducing short-range repulsion in a thermodynamically consistent approach [28, 29, 30, 31]. We note that according to the Gibbs phase rule [32], the number of thermodynamic degrees of freedom that may be varied without destroying the equilibrium of a mixture of \\(r\\) phases with \\(n_{c}\\) conserved charges is \\({\\cal N}=n_{c}+2-r\\). For the hadron-quark deconfinement transition under consideration, \\(r=2\\). If the baryon number is the only conserved quantity, \\(n_{c}=1\\) and \\({\\cal N}=1\\). Thus, the phase boundary is one-dimensional, i.e. a line. The Maxwell construction for a first order phase transition corresponds just to this case, \\(r=2\\) and \\(n_{c}=1\\). When both the baryon number and strangeness are conserved (\\(n_{c}=2\\)), one has \\({\\cal N}=2\\) and therefore the phase boundary is in general a surface. In such a system, a standard Maxwell construction is not possible [33]. To account for the uncertainties in the order of the phase transition in (2+1)-flavour QCD, and also for the deviation of the equation of state from an ideal gas near the deconfinement transition, we employ the EoS of the mixed phase model [28, 29]. In this model it is assumed that unbound quarks and gluons may coexist with hadrons, forming a homogeneous mixture. The model is thermodynamically consistent and reproduces the lattice EoS obtained in the pure gauge theory as well as in two-flavour QCD. Furthermore, the order of the phase transition in the mixed phase model depends on the strength of the interaction between the phases. In this approach we can study the influence of the order of the phase transition on strangeness production and on the evolution of heavy ion collisions. In the following we first discuss the basic thermodynamical properties of these different models of the EoS and indicate the relevant differences in their predictions.

### Two-phase bag model

In the two-phase (2P) model [34], the deconfinement phase transition is determined by matching the EoS of a relativistic gas of hadrons and resonances, with repulsive interactions at short distances, to that of an ideal gas of quarks and gluons. The change in vacuum energy in the plasma phase is parameterized by a bag constant \\(B\\). We work in the grand canonical ensemble and account for all hadrons with mass \\(m_{j}<1.6\\ GeV\\), including the strange particles and resonances with strangeness \\(s_{j}=\\pm 1,\\pm 2,\\pm 3\\). The density of particle species \\(j\\) is then

\\[n_{j}(T,\\mu_{j})\\equiv n_{j}(T,\\mu_{B},\\mu_{S})=v\\ n_{j}^{id}(T,\\mu_{B},\\mu_{S})=\\frac{v\\ g_{j}}{2\\pi^{2}}\\int_{0}^{\\infty}dk\\ k^{2}\\ f_{j}(k,T,\\mu_{B},\\mu_{S})\\,, \\tag{1}\\]

where

\\[f_{j}(k,T,\\mu_{B},\\mu_{S})=\\left[\\exp\\left(\\frac{\\sqrt{k^{2}+m_{j}^{2}}-b_{j}\\mu_{B}-s_{j}\\mu_{S}}{T}\\right)\\pm 1\\right]^{-1} \\tag{2}\\]

is the momentum distribution function for fermions (\\(+\\)) and bosons (\\(-\\)), while \\(g_{j}\\) is the spin-isospin degeneracy factor.
The chemical potential \\(\\mu_{j}\\) is related to the baryon (\\(\\mu_{B}\\)) and strangeness (\\(\\mu_{S}\\)) chemical potentials by

\\[\\mu_{j}=b_{j}\\ \\mu_{B}+s_{j}\\ \\mu_{S}\\,, \\tag{3}\\]

where \\(b_{j}\\) and \\(s_{j}\\) are the baryon number and strangeness of the particle. The quantity \\(n_{j}^{id}\\) corresponds to the number density of an ideal point-like hadron gas (IdHG). The factor

\\[v\\equiv v(T,\\mu_{B},\\mu_{S})=1/[1+\\sum_{j}\\ v_{0j}\\ n_{j}^{id}(T,\\mu_{B},\\mu_{S})] \\tag{4}\\]

reduces the volume available for hadrons due to their short-range repulsion, determined by the eigenvolume \\(v_{0j}=(1/2)(4\\pi/3)(2r_{0j})^{3}\\)[32]. We choose the effective interaction radius \\(r_{0j}\\sim 0.5\\) fm for all hadrons. Following (1), the baryon and strangeness densities in the hadronic phase can be expressed as

\\[n_{B}^{H} = \\sum_{j\\in h}b_{j}\\ n_{j}(T,\\mu_{B},\\mu_{S})\\, \\tag{5}\\]

\\[n_{S}^{H} = \\sum_{j\\in h}s_{j}\\ n_{j}(T,\\mu_{B},\\mu_{S}) \\tag{6}\\]

where the sum is taken over all hadrons and resonances. Similarly, the energy density of species \\(j\\) is given by

\\[\\varepsilon_{j}(T,\\mu_{B},\\mu_{S}) = v\\ \\varepsilon_{j}^{id}(T,\\mu_{B},\\mu_{S}) = \\frac{v\\ g_{j}}{2\\pi^{2}}\\int_{0}^{\\infty}dk\\ k^{2}\\sqrt{k^{2}+m_{j}^{2}}\\ f_{j}(k,T,\\mu_{B},\\mu_{S})\\. \\tag{7}\\]

In early studies [34, 35], the excluded volume correction \\(v\\) was implemented in the same way for all thermodynamic quantities of the hadron gas, including the pressure

\\[p^{H}(T,\\mu_{B},\\mu_{S})=\\sum_{j\\in h}p_{j}(T,\\mu_{B},\\mu_{S})\\,, \\tag{8}\\]

where the partial pressures are given by

\\[p_{j}(T,\\mu_{B},\\mu_{S}) = v\\ p_{j}^{id}(T,\\mu_{B},\\mu_{S}) = \\frac{v\\ g_{j}}{6\\pi^{2}}\\int_{0}^{\\infty}dk\\ \\frac{k^{4}}{\\sqrt{k^{2}+m_{j}^{2}}}\\ f_{j}(k,T,\\mu_{B},\\mu_{S})\\. \\tag{9}\\]

However, this expression for the pressure is not thermodynamically consistent with the charge densities (5-6) or the energy density (7). In Ref. [36] it was shown that the excluded volume corrections can be implemented in a thermodynamically consistent way. In this approach the pressure is given by that of an ideal gas with modified chemical potentials,

\\[p^{H}(T,\\mu_{B},\\mu_{S})=\\sum_{j\\in h}p_{j}^{id}(T,\\tilde{\\mu_{j}}) \\tag{10}\\]

where

\\[\\tilde{\\mu_{j}}=b_{j}\\ \\mu_{B}+s_{j}\\ \\mu_{S}-v_{0j}\\ p^{H}(T,\\mu_{B},\\mu_{S}). \\tag{11}\\]

The remaining thermodynamic quantities are obtained with the excluded volume correction given above by taking the corresponding derivatives of the pressure. Thus, in this approach all fundamental thermodynamic relations are fulfilled [36]. We shall refer to Eqs. (8,9) and Eqs. (10,11) as the two-phase thermodynamically inconsistent (2PIN) and consistent (2PC) model, respectively. Note that such an equation of state may violate causality at high densities, because an extended rigid body is incompatible with the basic principles of relativity.
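As an illustration of Eqs. (9)-(11): the consistent pressure is defined implicitly, since \\(p^{H}\\) enters the shifted chemical potentials (11), and can be obtained by fixed-point iteration. The sketch below works in natural units (MeV, \\(\\hbar=c=1\\)); the hadron list and eigenvolumes are caller-supplied inputs, and the scheme is a minimal illustration rather than the production-level evaluation behind the figures.

```python
import math
from scipy.integrate import quad

def p_ideal(T, mu, m, g, stat):
    """Ideal-gas partial pressure, Eq. (9) with v = 1 (units: MeV^4).
    stat = +1 for fermions, -1 for bosons."""
    def integrand(k):
        e = math.sqrt(k * k + m * m)
        x = min((e - mu) / T, 700.0)          # guard against exp overflow
        return k**4 / e / (math.exp(x) + stat)
    kmax = 30.0 * T + 10.0 * m                # integrand is negligible beyond
    return g / (6.0 * math.pi**2) * quad(integrand, 0.0, kmax)[0]

def pressure_2pc(T, mu_B, mu_S, hadrons, tol=1e-8, itmax=200):
    """Excluded-volume pressure of Eqs. (10)-(11) by fixed-point iteration.
    hadrons: iterable of (m, g, b, s, v0, stat), with v0 in MeV^-3
    (1 fm^3 = (1/197.327)^3 MeV^-3). Damping could be added if the
    simple iteration oscillates."""
    p = 0.0
    for _ in range(itmax):
        p_new = sum(p_ideal(T, b * mu_B + s * mu_S - v0 * p, m, g, stat)
                    for (m, g, b, s, v0, stat) in hadrons)
        if abs(p_new - p) < tol * max(p_new, 1.0):
            return p_new
        p = p_new
    return p
```

A toy list containing, e.g., nucleons (\\(m=939\\) MeV, \\(g=4\\), \\(b=1\\), \\(s=0\\)) and kaons should already reproduce the qualitative suppression of \\(p^{H}\\) by the eigenvolume term.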
The QGP phase is described as a gas of non-interacting point-like quarks, antiquarks and gluons. The non-perturbative effects associated with confinement are described by the constant vacuum energy \\(B\\). The pressure in the plasma is then given by

\\[p^{Q}(T,\\mu_{B},\\mu_{S})=p_{g}(T)+\\sum_{j\\in q}p_{j}^{id}(T,\\mu_{B},\\mu_{S})-B\\, \\tag{12}\\]

where the gluon pressure is

\\[p_{g}(T)=\\frac{g_{g}\\pi^{2}}{90}T^{4}\\hskip 28.452756pt(g_{g}=16) \\tag{13}\\]

and the quark pressure is obtained from Eq. (9) for \\(u,d,s\\) quarks and antiquarks. We use the quark masses \\(m_{u}=m_{d}=5\\) MeV and \\(m_{s}=150\\) MeV and the bag constant \\(B=(235\\) MeV\\()^{4}\\), which yields a transition temperature \\(T_{c}\\approx 160\\) MeV in agreement with lattice calculations at \\(n_{B}=0\\)[13]. The energy density of the plasma phase is

\\[\\varepsilon^{Q}(T,\\mu_{B},\\mu_{S})=\\varepsilon_{g}(T)+\\sum_{j\\in q}\\varepsilon _{j}^{id}(T,\\mu_{B},\\mu_{S})+B\\, \\tag{14}\\]

where the gluon contribution is given by

\\[\\varepsilon_{g}(T)=3\\ p_{g}(T)=\\frac{g_{g}\\pi^{2}}{30}T^{4} \\tag{15}\\]

and that of quark species \\(j\\) is obtained from Eq. (7) with \\(v=1\\). Analogously to Eqs. (5) and (6), the densities of the conserved charges in the QGP phase are:

\\[n_{B}^{Q} = \\sum_{j\\in q}b_{j}\\ n_{j}^{id}(T,\\mu_{B},\\mu_{S})\\, \\tag{16}\\]

\\[n_{S}^{Q} = \\sum_{j\\in q}s_{j}\\ n_{j}^{id}(T,\\mu_{B},\\mu_{S}). \\tag{17}\\]

The equilibrium between the plasma and the hadronic phase is determined by the Gibbs conditions for thermal (\\(T^{Q}=T^{H}\\)), mechanical (\\(p^{Q}=p^{H}\\)) and chemical (\\(\\mu_{B}^{Q}=\\mu_{B}^{H},\\ \\mu_{S}^{Q}=\\mu_{S}^{H}\\)) equilibrium. At a given temperature \\(T\\) and baryon chemical potential \\(\\mu_{B}\\) the strange chemical potential \\(\\mu_{S}\\) is obtained by requiring that the net strangeness of the total system vanishes. Thus, for the total baryon density \\(n_{B}\\) the phase equilibrium requires that:

\\[p^{H}(T,\\mu_{B},\\mu_{S}) = p^{Q}(T,\\mu_{B},\\mu_{S})\\, \\tag{18}\\]

\\[n_{B} = \\alpha\\ n_{B}^{Q}(T,\\mu_{B},\\mu_{S})+(1-\\alpha)\\ n_{B}^{H}(T,\\mu_{B},\\mu_{S})\\, \\tag{19}\\]

\\[0 = \\alpha\\ n_{S}^{Q}(T,\\mu_{B},\\mu_{S})+(1-\\alpha)\\ n_{S}^{H}(T,\\mu_{B},\\mu_{S})\\, \\tag{20}\\]

where \\(\\alpha=V_{Q}/V\\) is the fraction of the volume occupied by the plasma phase. The boundaries of the coexistence region are found by putting \\(\\alpha=0\\) (the hadron phase boundary) and \\(\\alpha=1\\) (the plasma boundary). As mentioned above, the Maxwell construction is not appropriate in a system where both baryon number and strangeness are conserved. To illustrate this, we first analyze an approximate form of the strangeness conservation equation (20). We retain only the main terms and drop those with \\(|s_{j}|>1\\):

\\[\\alpha\\ (n_{s}-n_{\\bar{s}})=(1-\\alpha)\\ (n_{K}+n_{\\bar{\\Lambda}}+n_{\\bar{ \\Sigma}}-n_{\\bar{K}}-n_{\\Lambda}-n_{\\Sigma})\\,. \\tag{21}\\]

In the Boltzmann approximation the densities may be computed analytically,

\\[n_{j}^{id}\\approx n_{j}^{B}=g_{j}\\,(\\frac{T^{3}}{2\\pi^{2}})\\,(\\frac{m_{j}}{T})^{2}\\,K_{2}(\\frac{m_{j}}{T})\\,\\exp(\\frac{\\mu_{j}}{T})\\equiv g_{j}\\ (\\frac{T^{3}}{2\\pi^{2}})\\,W_{j}\\,\\exp(\\frac{\\mu_{j}}{T})\\,, \\tag{22}\\]

and the strangeness chemical potential is obtained as [36]:

\\[\\mu_{S}=\\frac{T}{2}\\ln\\frac{3\\alpha W_{s}+v(1-\\alpha)\\left(W_{K}\\ e^{-\\frac{\\mu_{B}}{3T}}+(W_{\\Lambda}+3W_{\\Sigma})e^{\\frac{2\\mu_{B}}{3T}}\\right)}{3\\alpha W_{s}+v(1-\\alpha)\\left(W_{K}\\ e^{\\frac{\\mu_{B}}{3T}}+(W_{\\Lambda}+3W_{\\Sigma})e^{-\\frac{2\\mu_{B}}{3T}}\\right)}+\\frac{\\mu_{B}}{3}. \\tag{23}\\]

It is seen that at the _plasma boundary_ (\\(\\alpha=1\\)) \\(\\mu_{S}=\\mu_{B}/3\\), while \\(\\mu_{S}\\neq\\mu_{B}/3\\) at the _hadron boundary_ (\\(\\alpha=0\\)). This implies that not only \\(\\mu_{S}\\) but also \\(\\mu_{B}\\) and the pressure change along isotherms in the coexistence region. Hence, the standard Maxwell construction, which interpolates the densities linearly between the pure phases, is not adequate.
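Equation (23) can be evaluated directly; the sketch below does so with the modified Bessel function \\(K_{2}\\). The particle masses are standard values in MeV, and the excluded-volume factor \\(v\\) is treated as an external input; both are illustrative assumptions.

```python
import math
from scipy.special import kn

def w_factor(m, T):
    """W_j = (m/T)^2 K_2(m/T), the Boltzmann weight of Eq. (22)."""
    return (m / T) ** 2 * kn(2, m / T)

def mu_s(T, mu_B, alpha, v=1.0, m_s=150.0, m_K=494.0,
         m_Lam=1116.0, m_Sig=1193.0):
    """Strangeness-neutrality potential mu_S of Eq. (23), all in MeV."""
    ws, wk = w_factor(m_s, T), w_factor(m_K, T)
    wy = w_factor(m_Lam, T) + 3.0 * w_factor(m_Sig, T)
    num = 3*alpha*ws + v*(1 - alpha)*(wk*math.exp(-mu_B/(3*T))
                                      + wy*math.exp(2*mu_B/(3*T)))
    den = 3*alpha*ws + v*(1 - alpha)*(wk*math.exp(mu_B/(3*T))
                                      + wy*math.exp(-2*mu_B/(3*T)))
    return 0.5 * T * math.log(num / den) + mu_B / 3.0

print(mu_s(120.0, 600.0, alpha=1.0))  # plasma boundary: exactly mu_B/3 = 200
print(mu_s(120.0, 600.0, alpha=0.0))  # hadron boundary: deviates from mu_B/3
```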
The equations for phase equilibrium (18-20) must be solved to obtain \\(\\mu_{S}\\) and \\(\\mu_{B}\\) at every point in the coexistence region. When two phases coexist, the system is in general not homogeneous, as the phases occupy separate domains in space. We do not explicitly account for such a domain structure, nor for a possible surface energy contribution to the equation of state. The only consequence of the phase separation in these calculations is that the interactions between particles in the plasma and hadronic phase are excluded. This is different in the statistical mixed phase model discussed in the next section. The solution of the Gibbs conditions (18-20) is shown in Fig. 1 for the plasma and hadron phase pressure versus \\(\\mu_{B}^{4}\\) at fixed \\(T=80\\) MeV and \\(\\mu_{S}=\\mu_{B}/3\\). The crossing of the quark and hadronic pressure corresponds to the transition point at the plasma boundary. In this special case the condition \\(\\mu_{S}=\\mu_{B}/3\\) guarantees strangeness neutrality. In general, however, for \\(\\alpha\\neq 1\\), \\(\\mu_{S}\\) must be chosen such that the strangeness of the total system of quarks and hadrons vanishes. This requires an iterative solution of the equations (18-20). Away from the transition point, the system is in the phase with the higher pressure \\(p\\) (lower free energy). Fig. 1 also shows that there is no deconfinement transition if the hadronic phase is described as a gas of point-like particles [34]. The situation is not improved by including more resonances. On the contrary, the larger the set of hadronic resonances, the higher the pressure at a given baryon chemical potential. However, the inclusion of repulsive interactions between hadrons leads to a reduction of the hadron pressure \\(p^{H}\\) at fixed baryon chemical potential. Consequently, a short-range repulsion between hadrons stabilizes the quark-gluon plasma at high densities. The resulting phase boundaries in the \\(T\\)-\\(\\mu\\) plane are shown in Fig. 2. The difference in \\(\\mu_{B}\\) at the phase boundaries described by Eqs. (18-20) is small, while for the strange chemical potential \\(\\mu_{S}\\) it is more noticeable. It is natural to expect that in the high temperature plasma \\(\\mu_{S}\\approx\\mu_{B}/3\\). On the other hand, in the hadronic phase and at low temperatures, where strangeness is carried mostly by kaons and \\(\\Lambda\\)-hyperons, the strange chemical potential is roughly approximated by \\(\\mu_{S}\\approx 0.5\\) (\\(\\mu_{B}+m_{K}-m_{\\Lambda}\\)) \\(\\approx 550\\) MeV. Both these expectations are in agreement with our numerical results. Nevertheless, also in the high temperature hadronic phase the strange chemical potential exhibits an approximately linear dependence on the baryon chemical potential. In Figs. 2 and 3, the resulting phase diagrams are shown in the \\(T\\)-\\(\\mu_{B}\\) and \\(T\\)-\\(\\mu_{S}\\) as well as the \\(T\\)-\\(n_{B}\\) planes. The role of thermodynamical consistency is particularly evident in the \\(T\\)-\\(n_{B}\\) plane. As seen in Fig. 3, the baryon density \\(n_{B}\\) at the plasma boundary is increased, while it is slightly decreased on the hadron side, in the 2PC model as compared with the 2PIN approach. Consequently, the coexistence region in the 2PC model widens, extending to \\(\\sim(3.5\\div 10)n_{0}\\).
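The iterative solution can be organized, for instance, as a two-dimensional root search at fixed \\(T\\) and plasma fraction \\(\\alpha\\), with the thermodynamic functions supplied by whatever implementation of Eqs. (10)-(17) is at hand (e.g. the sketches above). This is an illustrative scheme, not the exact numerical procedure used for the figures.

```python
from scipy.optimize import fsolve

def gibbs_point(T, alpha, p_H, p_Q, nB_H, nB_Q, nS_H, nS_Q,
                guess=(900.0, 300.0)):
    """Solve Eqs. (18) and (20) for (mu_B, mu_S) at fixed T and plasma
    fraction alpha; Eq. (19) then yields the total baryon density."""
    def equations(x):
        mu_B, mu_S = x
        return [p_H(T, mu_B, mu_S) - p_Q(T, mu_B, mu_S),       # Eq. (18)
                alpha * nS_Q(T, mu_B, mu_S)
                + (1.0 - alpha) * nS_H(T, mu_B, mu_S)]         # Eq. (20)
    mu_B, mu_S = fsolve(equations, guess)
    n_B = alpha * nB_Q(T, mu_B, mu_S) + (1.0 - alpha) * nB_H(T, mu_B, mu_S)
    # Sweeping alpha over [0, 1] maps out the coexistence region.
    return mu_B, mu_S, n_B
```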
Figure 1: Pressure versus baryon chemical potential for fixed \\(T=80~{}MeV\\) and for \\(\\mu_{S}=\\mu_{B}/3\\). The thin line is the hadronic and the thick line the quark phase in the 2PC model. The dash-dotted line (1) and the dashed line are ideal gas model results without and with repulsion in the 2PIN model, respectively. Line (2) is obtained as line (1) but with fewer hadronic resonances. Line (3) is calculated within a mean-field approximation of the Zimanyi model [37] (see text).

Figure 2: The phase diagram in the \\(T\\)-\\(\\mu_{B}\\) (marked by B) and in the \\(T\\)-\\(\\mu_{S}\\) (marked by S) plane for the 2PC and 2PIN models. The plasma and hadron boundaries are shown by full and dashed lines, respectively. The dotted lines are the approximate results obtained with \\(\\mu_{S}=0\\) and with \\(\\mu_{S}\\) from Eq. (23).

Figure 3: The phase diagram in the (\\(T\\)-\\(n_{B}\\)) plane for the 2PC (full lines) and 2PIN (dashed lines) models.

Thermodynamical properties and the differences between the two-phase bag models are shown in Figs. 4 and 5. In both cases the baryon and strange chemical potentials are continuous when crossing the phase boundaries. This guarantees that the system is chemically stable. Demanding the conservation of strangeness in each phase separately [20] would result in a discontinuity in \\(\\mu_{S}\\). In contrast to the case with only one conserved charge, the chemical potentials are not necessarily constant within the Gibbs coexistence region. Depending on the values of \\(\\mu\\) at the hadronic and plasma boundaries (see Fig. 2), the chemical potentials (in particular \\(\\mu_{S}\\)) can be either increasing or decreasing functions of \\(n_{B}\\). Although this change is not large, it influences the strangeness separation in the phase coexistence region. The energy density is seen in Figs. 4 and 5 to be a monotonously increasing function of \\(n_{B}\\) in both models. The pressure is also continuous within the 2PC model and is higher than in the 2PIN approach. In addition, in the latter model the pressure suffers a jump at the boundary of the hadronic phase, which increases with decreasing temperature. Such an EoS would lead to a mechanical instability of the hydrodynamic flow. As seen in Figs. 4 and 5, the changes in pressure across the coexistence region are quite small. Consequently, the system expands very slowly. This is a specific feature expected for systems with a first order phase transition. We stress that there are at least two problems which show up when the EoS discussed above is employed in hydrodynamic calculations. First, as shown in [36, 38], causality is violated at densities \\(n_{B}\\gtrsim 3.5n_{0}\\). Second, the ideal gas model with an excluded volume correction does not reproduce the saturation properties of nuclear matter. An attempt to combine the excluded volume correction with a mean field treatment of the hadronic interactions resulted in an incompressibility parameter which is too large, \\(K\\geq 550~{}MeV\\)[36].

Figure 4: Dependence of different thermodynamical quantities on the baryon density within the 2PC model. The results are shown for two different temperatures. The hadron and plasma phase boundaries are shown by dotted lines.

Figure 5: The same as in Fig. 4 but for the 2PIN model.

### Statistical mixed phase model

The mixed phase (MP) model [28, 29, 39] is a phenomenological model of the EoS with a deconfinement phase transition of QCD which shows a satisfactory agreement with the lattice data.
The underlying assumption of the MP model is that unbound quarks and gluons _may coexist_ with hadrons, forming a _spatially homogeneous_ quark/gluon-hadron phase which we call a generalized Gibbs mixed phase. Since the mean distance between hadrons and quarks/gluons in the mixed phase may be of the same order as that between hadrons, the interactions between all these constituents (unbound quarks/gluons and hadrons) play an important role. The strength of these interactions defines the order of the phase transition. To find the free energy within the MP model [28, 29], the following effective Hamiltonian, expressed in terms of quasiparticles interacting with a density-dependent mean field, is used:

\\[H=\\sum_{i}\\sum_{\\sigma}\\int d{\\bf r}\\ \\psi_{i}^{+}({\\bf r},\\sigma)\\ \\left(\\ \\sqrt{-\\nabla^{2}+m_{i}^{2}}+U_{i}(\\rho)\\ \\right)\\ \\psi_{i}({\\bf r},\\sigma)-C(\\rho)V. \\tag{24}\\]

Here \\(\\psi_{i}({\\bf r},\\sigma)\\) denotes a field operator for the quasiparticle species \\(i\\) characterized by the mass \\(m_{i}\\) (the current masses for quarks and gluons and the free hadron masses are used here). The index \\(\\sigma\\) accounts for spin, isospin and color degrees of freedom. Furthermore, \\(U_{i}\\) is the mean field acting on particles of type \\(i\\), \\(C(\\rho)\\) is a potential energy term, which is needed to avoid double counting of the interaction, and \\(V\\) is the volume of the system. By requiring thermodynamical consistency [28, 29, 30, 31] one finds constraints on the parameters in the Hamiltonian. The constraints follow from [28, 29]

\\[\\langle\\frac{\\partial H}{\\partial T}\\rangle\\,=\\,0\\,,\\quad\\langle\\frac{\\partial H}{\\partial\\rho_{i}}\\rangle\\,=\\,0\\ \\, \\tag{25}\\]

where \\(\\langle\\ldots\\rangle\\) denotes the statistical average. For the Hamiltonian (24), these conditions reduce to

\\[\\sum_{i}\\rho_{i}\\frac{\\partial U_{i}}{\\partial\\rho_{j}}\\ -\\ \\frac{\\partial C}{\\partial\\rho_{j}}\\ =\\ 0\\ \\,\\quad\\sum_{i}\\ \\rho_{i}\\frac{\\partial U_{i}}{\\partial T}\\ -\\ \\frac{\\partial C}{\\partial T}\\ =\\ 0\\,, \\tag{26}\\]

which, as shown in [28, 29], imply that \\(U_{i}(\\rho)\\) and \\(C(\\rho)\\) do not explicitly depend on temperature. We model color confinement by assuming the following density dependence for the mean-field potential of quarks and gluons:

\\[U_{q}(\\rho)=U_{g}(\\rho)=\\frac{A}{\\rho^{\\gamma}}\\ ;\\quad\\gamma>0 \\tag{27}\\]

where

\\[\\rho=\\rho_{q}+\\rho_{g}+\\sum_{j}\\rho_{j}=\\rho_{q}+\\rho_{g}+\\sum_{j}\\nu_{j}\\ n_{j} \\tag{28}\\]

is the total number density of quarks and gluons in the local rest frame; \\(\\rho_{q}\\) and \\(\\rho_{g}\\) are the number densities of unbound (deconfined) quarks and gluons (\\(\\rho_{pl}\\equiv\\rho_{q}+\\rho_{g}\\)), while \\(n_{j}\\) is the number density of hadrons of type \\(j\\) containing \\(\\nu_{j}\\) valence quarks. The presence of the total number density \\(\\rho\\) in (27) implies interactions between all components of the generalized Gibbs mixed phase. The potential (27) exhibits two important limits of QCD. For \\(\\rho\\to 0\\), the interaction potential approaches infinity, _i.e._ an infinite amount of energy is necessary to create an isolated quark or gluon. This obviously simulates confinement of colored objects. In the opposite limit of large energy density, \\(\\rho\\rightarrow\\infty\\), we have \\(U_{g}\\to 0\\), which is consistent with asymptotic freedom.
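A minimal sketch of the confining mean field (27); the values of \\(A\\) and \\(\\gamma\\) below are placeholders (in the MP model they are fixed by fits to lattice thermodynamics), chosen only to exhibit the two limits just discussed.

```python
def u_plasma(rho, A=1.0, gamma=0.62):
    """Confining quark/gluon mean field, Eq. (27): U = A / rho**gamma.
    A and gamma are placeholder values, not the fitted MP-model parameters."""
    return A / rho ** gamma

# The two QCD limits: U -> infinity as rho -> 0 (confinement),
# U -> 0 as rho -> infinity (asymptotic freedom).
for rho in (1e-3, 1.0, 1e3):
    print(rho, u_plasma(rho))
```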
In the description of the hadron components, the MP model accounts not only for hadron-hadron but also for quark/gluon-hadron interactions. The mean field acting on the hadron species \\(j\\) in the MP model has two terms:

\\[U_{j}=U_{j}^{(h)}+U_{j}^{(pl)}. \\tag{29}\\]

In the limit where there are no unbound quarks and gluons, \\(U_{j}^{(pl)}=0\\), i.e. \\(U_{j}=U_{j}^{(h)}\\). This happens at low densities, where the colored degrees of freedom are confined in hadrons. Due to the constraints (26), the second term in Eq. (29) may be written as [28]:

\\[U_{j}^{(pl)}\\ =\\ \\frac{\\nu_{j}\\,A}{\\rho^{\\gamma}}\\left(1-(1-w_{pl})^{-\\gamma}\\right)\\, \\tag{30}\\]

where \\(w_{pl}=\\rho_{pl}/\\rho\\) is the fraction of quark-gluon plasma in the mixed phase. Thus, if \\(U_{q}\\) and \\(U_{g}\\) are known, the thermodynamic consistency conditions (26) allow us to unambiguously determine the correction term \\(C(\\rho)\\) in Eq. (24). The hadronic potential \\(U_{j}^{(h)}\\) is described by a non-linear mean-field model [37]

\\[U_{j}^{(h)}\\ =g_{r,j}\\;\\varphi_{1}(x)+g_{a,j}\\;\\varphi_{2}(y)\\;, \\tag{31}\\]

where \\(g_{r,j}>0\\) and \\(g_{a,j}<0\\) are repulsive and attractive coupling constants, respectively. Thermodynamic consistency implies that the functions \\(\\varphi_{1}(x)\\) and \\(\\varphi_{2}(y)\\) depend only on the particle densities. In Ref. [37] these functions are chosen such that

\\[b_{1}\\varphi_{1}=x,\\quad-b_{1}(\\varphi_{2}+b_{2}\\varphi_{2}^{3})=y \\tag{32}\\]

where

\\[x=\\sum_{i}g_{r,i}\\;\\rho_{i},\\quad y=\\sum_{i}g_{a,i}\\;\\rho_{i}\\;,\\]

and \\(b_{1}\\) and \\(b_{2}\\) are free parameters. In [37], considering a mixture of nucleons and \\(\\Delta\\)'s, the model parameters were fixed so as to reproduce the saturation properties of nuclear matter and the ratio of the \\(\\Delta\\) to nucleon coupling constants. We generalize this approach by including all hadrons in our model and assuming that the coupling constants scale with the number of constituent quarks:

\\[U_{j}^{(h)}=\\nu_{j}\\left(\\widetilde{\\varphi}_{1}(\\rho-\\rho_{pl})+\\widetilde{\\varphi}_{2}(\\rho-\\rho_{pl})\\,\\right)\\,, \\tag{33}\\]

where \\(\\widetilde{\\varphi}_{1}\\) and \\(\\widetilde{\\varphi}_{2}\\) satisfy the equations

\\[c_{1}\\widetilde{\\varphi}_{1}=\\rho-\\rho_{pl},\\quad-c_{2}\\widetilde{\\varphi}_{2}-c_{3}\\widetilde{\\varphi}_{2}^{3}=\\rho-\\rho_{pl} \\tag{34}\\]

with \\(\\rho-\\rho_{pl}=\\sum_{j}\\nu_{j}\\rho_{j}\\). The parameters in Eq. (34) are given by [28]

\\[c_{1}=\\frac{b_{1}}{(g_{r,j}/\\nu_{j})^{2}},\\quad c_{2}=\\frac{b_{1}}{(g_{a,j}/\\nu_{j})^{2}},\\quad c_{3}=\\frac{b_{1}b_{2}}{(g_{a,j}/\\nu_{j})^{4}}\\]

and are fixed by requiring that the properties of the ground state (\\(T=0\\), \\(n_{B}=n_{0}\\approx 0.17\\;fm^{-3}\\)) of nuclear matter are reproduced: a binding energy per nucleon of \\(-16\\;MeV\\), an incompressibility of \\(210\\;MeV\\) and vanishing pressure. We also consider an extension of the Zimanyi model [37], the interacting hadron gas (InHG) model, which has no phase transition. The \\(\\mu_{B}\\)-dependence of the pressure in this model is illustrated in Fig. 1.
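A hedged sketch of the hadronic mean field of Eqs. (33)-(34): the attractive branch requires solving a cubic equation, whose single real root is selected below. The coefficients \\(c_{1}\\), \\(c_{2}\\), \\(c_{3}\\) are inputs fixed by the saturation conditions just listed; no specific values are assumed here.

```python
import numpy as np

def phi_1(rho_h, c1):
    """Repulsive field from Eq. (34): c1 * phi_1 = rho_h."""
    return rho_h / c1

def phi_2(rho_h, c2, c3):
    """Attractive field from Eq. (34): -c2*phi_2 - c3*phi_2**3 = rho_h.
    For c2, c3 > 0 the cubic has a single real (negative) root."""
    roots = np.roots([c3, 0.0, c2, rho_h])    # c3 x^3 + c2 x + rho_h = 0
    return roots[np.isreal(roots)].real[0]

def u_hadron(nu_j, rho, rho_pl, c1, c2, c3):
    """Hadronic potential of Eq. (33), scaling with the number nu_j of
    constituent quarks; rho_h = rho - rho_pl is the hadronic quark density."""
    rho_h = rho - rho_pl
    return nu_j * (phi_1(rho_h, c1) + phi_2(rho_h, c2, c3))
```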
Both the two-phase and the MP models have a _softest point_ in the EoS, i.e. a minimum [26] of the function \\(p(\\varepsilon)/\\varepsilon\\). A particular feature of the MP model is that even for \\(n_{B}=0\\) the softest point is not very pronounced and is located at a relatively low energy density, \\(\\varepsilon_{SP}\\approx 0.45\\) GeV/fm\\({}^{3}\\), which is consistent with the lattice result [40]. In the MP model, the softest point is gradually washed out with increasing baryon density and vanishes completely for \\(n_{B}\\gtrsim 0.5\\ n_{0}\\). This is, however, not the case in the 2P models, where one finds a pronounced softest point at a large energy density, \\(\\varepsilon_{SP}\\approx 1.5\\) GeV/fm\\({}^{3}\\), which depends only weakly on the baryon density \\(n_{B}\\) (see Fig. 7). Finally, in the InHG model as well as in the relativistic ideal hadron gas there is obviously no softest point in the EoS. The differences in the thermodynamical properties of the above models will also be reflected in the expansion dynamics of a thermal fireball created in heavy ion collisions. The effect of these differences on strangeness production and evolution will be explored in the following sections.

Figure 6: Temperature dependence of the energy density and pressure at vanishing total baryon density. Full and dashed lines are the MP and 2PC model results, respectively. The inset shows the reduced heat capacity.

## 3 Strangeness production

### Strangeness content in equilibrium

The conservation of strangeness in the coexistence region of quarks and hadrons implies that the total numbers of strange and antistrange quarks are equal. However, the \\(s\\)-\\(\\bar{s}\\) content in the individual phases may differ from zero. The strangeness content of the quarks in the mixed or plasma phase is characterized by two ratios: \\(\\rho_{s}/\\rho_{\\bar{s}}\\) and \\(D_{s}=(\\rho_{s}+\\rho_{\\bar{s}})/\\rho_{pl}\\) (see Eq. (28)). The second ratio gives the strangeness fraction in the plasma. In Fig. 8 the ratio \\(\\rho_{s}/\\rho_{\\bar{s}}\\) is shown as a function of \\(\\mu_{B}\\) for a fixed plasma fraction \\(\\alpha\\). For \\(\\alpha\\sim 1\\) the ratio \\(\\rho_{s}/\\rho_{\\bar{s}}\\approx 1\\) for almost all values of \\(\\mu_{B}\\). However, if \\(\\alpha\\ll 1\\), that is, when the volume in the mixed phase is mostly occupied by hadrons, the separation of strange and antistrange quarks is clearly seen in Fig. 8. This is mainly because the hadronic component of the mixed phase is dominated by the kaons, while the hyperons are suppressed due to their large masses. This strangeness excess carried by the kaons is compensated by the creation of \\(s\\)-quarks in the plasma.

Figure 7: The ratio of the pressure (\\(p\\)) to the energy density (\\(\\epsilon\\)) as a function of \\(\\epsilon\\). The results are for different values of the total baryon density (\\(n_{B}\\)) and for three models of the EoS.

The results in Fig. 8 are in qualitative agreement with Ref. [22], where the 2PIN model without higher mass resonances was employed. The contribution of higher mass resonances results in an increase of \\(\\rho_{s}/\\rho_{\\bar{s}}\\) for \\((\\mu_{B}/3)_{H}\\approx 400-500\\ MeV\\). In Fig. 9 the strangeness composition of an equilibrium system is compared for the two different models. In the bag model EoS and at high temperature (\\(T\\sim 140\\ MeV\\)) the \\(\\rho_{s}/\\rho_{\\bar{s}}\\) ratio decreases when the baryon density inside the Gibbs mixed phase approaches the plasma boundary. However, for moderate temperatures (\\(T\\sim 80\\ MeV\\)), the ratio is \\(\\rho_{s}/\\rho_{\\bar{s}}<1\\) and it increases with \\(n_{B}\\). The above behavior is a direct implication of the simultaneous conservation of strangeness and the baryon number.
If these conservation laws are decoupled [22], this behavior at low temperatures is not seen. In the MP model \\(\\rho_{s}/\\rho_{\\bar{s}}>1\\) for all values of the baryon density. For a fixed temperature the \\(\\rho_{s}/\\rho_{\\bar{s}}\\) ratio is seen in Fig. 9 to gradually decrease with increasing density. Its values are noticeably higher than in the 2P model. In both models, however, the strangeness separation effect is stronger when the system is closer to the hadronic boundary, i.e. where there is a small admixture of quarks. For the 2P model this corresponds to the existence of a small blob of plasma, while in the MP model it corresponds to a homogeneous admixture of unbound quarks and gluons with a small concentration.

Figure 8: Ratio of strange to antistrange quark densities in a quark-gluon plasma calculated along the hadronic boundary. The results are for the 2PC model calculated with different values of the volume fraction (\\(\\alpha\\)) occupied by a quark-gluon plasma.

Above the hadronic phase boundary, the \\(n_{B}\\)-dependence of \\(D_{s}\\) in the 2PC model is similar to that in the MP model. The strangeness fraction in the MP model is the largest below the hadronic boundary and maximal in baryon free matter. In Fig. 9 we note a jump in \\(D_{S}\\) which corresponds to a jump in the strange particle multiplicity when crossing the phase boundary; a similar jump is observed in the baryon number. From the above discussion it is clear that the strangeness content and its distribution in the transition region from the quark-gluon plasma to the hadronic phase is strongly model dependent. It is affected by the order of the phase transition and the strength and the form of the interactions between the constituents. These differences are particularly evident at moderate values of the temperature and baryon density. This is just the region which is traversed by an expanding system created in heavy ion collisions on its way towards chemical freeze-out. Thus, one could expect that the order of the phase transition and the particular strangeness dynamics could manifest themselves in observables in heavy ion collisions.

Figure 9: The \\(\\rho_{s}/\\rho_{\\bar{s}}\\) ratio for the quark component and the strangeness fraction (\\(D_{s}\\)) of unbound quarks as a function of baryon density. The results are shown for different temperatures and for two EoS. The plasma boundary is marked by arrows. Note the factor \\(1/5\\) in the MP model at \\(T=80\\) MeV.

### Strangeness evolution in expansion dynamics

To study the possible influence of the EoS on observables in heavy ion collisions, we have to describe the space-time evolution of the thermal medium that is created in the initial state. This is conveniently done within a hydrodynamical model. The EoS is an input for constructing the energy-momentum tensor, which is needed in the hydrodynamical equations. To solve the hydrodynamical equations for a given experimental set-up one needs to specify the initial conditions. The initial volume, entropy and baryon number densities in the collisions are modelled within the QGSM transport code [41]. The predictions of this model are consistent with the results obtained within the RQMD and UrQMD transport codes. We assume that, in the center of mass frame, the initial state is a cylinder of radius \\(R=5\\)\\(fm\\) and Lorentz contracted length \\(L=2R/\\gamma_{c.m.}\\).
This initial state corresponds to the time when the centers of the colliding nuclei have just passed the point of full overlap.2 We neglect the transverse expansion and assume that the hydrodynamical evolution of the fireball is described by a one-dimensional isentropic expansion of the scaling type in the longitudinal direction. In this approximation the entropy and baryon densities decrease in inverse proportion to the expansion time. The values of all other thermodynamic quantities are obtained from the EoS at each temporal step (see, for example, [42]).

Footnote 2: A detailed description of the procedure to fix the initial conditions in heavy ion collisions can be found in [28, 29, 30, 31].

In Fig. 10 we show the fireball evolution trajectories for central \\(Au\\)-\\(Au\\) collisions in the \\(T\\)-\\(\\mu_{B}\\) plane for different collision energies and for different EoS. The chemical freeze-out parameters obtained [8, 9, 43, 44] within the statistical model at different collision energies are also shown in this figure. Clearly, the chemical freeze-out parameters from SIS up to RHIC are well described by the universal condition of fixed energy/particle, \\(\\left\\langle E_{had}\\right\\rangle/\\left\\langle N_{had}\\right\\rangle\\simeq 1\\) GeV [43, 45].

Figure 10: A compilation of the chemical freeze-out parameters from Refs. [9, 43, 44] obtained with the hadron resonance gas partition function at different beam energies (filled dots, squares and triangles). The smooth dashed curve is the universal freeze-out curve of fixed \\(\\left\\langle E_{had}\\right\\rangle/\\left\\langle N_{had}\\right\\rangle\\simeq 1\\)\\(GeV\\) from Ref. [43]. Also shown are the dynamical trajectories for central \\(Au\\)-\\(Au\\) collisions calculated within the different models (the interacting hadron gas model (InHG), the mixed-phase model (MP), the ideal hadron gas model (IdHG) and the thermodynamically consistent two-phase model (2PC)). The empty circles near the end of each trajectory correspond to the freeze-out condition of fixed energy density, \\(\\varepsilon_{f}\\simeq 0.135\\)\\(GeV/fm^{3}\\).

The dynamical trajectories show a strong dependence on the properties of the EoS. In the MP model there is a turning point seen in all trajectories, i.e. the point where \\(\\partial T/\\partial\\mu_{B}\\) changes sign. The existence of such a point is a general feature of the MP model and is directly related to the appearance of two limiting regimes. (i) At high temperatures and in the ultra-relativistic limit, \\(m_{q}\\to 0\\), the thermodynamic potential \\(\\Omega=-Vp\\) can be obtained analytically from Eqs. (9) and (12):

\\[\\Omega=-V\\ (a_{1}T^{4}+a_{2}T^{2}\\mu_{B}^{2}+a_{3}\\mu_{B}^{4}). \\tag{38}\\]

The entropy per baryon,

\\[\\frac{s}{n_{B}}=\\frac{\\partial\\Omega/\\partial T}{\\partial\\Omega/\\partial\\mu_{B}}=\\frac{2a_{1}+a_{2}\\ (\\frac{\\mu_{B}}{T})^{2}}{a_{2}\\ (\\frac{\\mu_{B}}{T})+a_{3}\\ (\\frac{\\mu_{B}}{T})^{3}}\\,, \\tag{39}\\]

is conserved along trajectories defined by \\(\\mu_{B}/T=\\it const\\). Thus, in the high temperature limit, an isentropic expansion is characterized by a linear relation between \\(T\\) and \\(\\mu_{B}\\). (ii) At intermediate temperatures, the system can be approximated by a Boltzmann gas (22) of non-relativistic nucleons.
In this case the entropy in the dilute gas approximation is given by \\[S = -\\frac{g_{N}V}{(2\\pi)^{3}}\\int d^{3}p\\ \\left[f\\ln f+(1-f)\\ln(1-f)\\right] \\tag{40}\\] \\[\\approx N_{B}\\left[1-\\frac{\\int d^{3}p\\ f\\ln f}{\\int d^{3}p\\ f}\\right]\\] with the distribution function \\(f=\\exp[(\\mu_{B}-m_{B}-p^{2}/2m_{B})/T]\\). In this temperature range, conservation of the entropy per baryon implies that \\[\\frac{s}{n_{B}}=\\frac{5}{2}+\\frac{m_{B}-\\mu_{B}}{T}=\\it const. \\tag{41}\\] Thus, for intermediate temperatures, we again find a linear relation between \\(T\\) and \\(\\mu_{B}\\) but with a negative slope. The different behavior of \\(\\mu_{B}(T)\\) at high and intermediate temperatures implies that there is a turning point in the fireball expansion trajectories, as seen in Fig. 10. The dynamical trajectories calculated in the MP model pass quite close to the phenomenological freeze-out points. For all collision energies, the turning point is located on the universal freeze-out curve of fixed energy/particle. This fact has been noticed already in Ref. [46] for the MP model with two light quarks. The contribution of strange quarks and the requirement of strangeness conservation modify the dynamical expansion path of the fireball. This is particularly evident for \\(E_{lab}\\lesssim 10\\) AGeV, where neglecting the strange quarks gives rise to a visible shift of the turning point towards smaller \\(\\mu_{B}\\). In the parameter range below the phenomenological freeze-out curve the expansion paths in the MP, IdHG and 2PC models are quite similar. In the InHG model, however, there is a small shift toward larger values of \\(\\mu_{B}\\). This agreement indicates that in the final stage the expansion path depends only weakly on details of the equation of state. The dynamical path is, to a large extent, determined by the entropy/baryon and strangeness conservation which in the hadronic phase puts strong constraints on the particle composition of the fireball. In this case the space-time evolution and thermodynamics are governed by a gas of weakly interacting resonances, the effective degrees of freedom in the low temperature phase of QCD. This may be the reason behind the success of the non-interacting hadron resonance gas in the description of bulk observables in heavy ion collisions. The differences between the various equations of state in the evolution of the thermal fireball are clearly visible above the freeze-out curve. In contrast to the MP model, the IdHG turning points do not correlate with the freeze-out curve. There is also no softest point in the InHG and IdHG models. The dynamical trajectories within the 2P bag-models exhibit a characteristic reheating regime in the phase coexistence region. For this model, the expansion trajectory closely follows the phase boundary in this regime, as shown in Fig. 2. At SPS energies and above, the hadronic end of the intermediate coexistence region in the \\(T\\)-\\(\\mu_{B}\\) plane (the so-called "hottest hadronic point") is close to the phenomenological chemical freeze-out point. At lower energies there is no such correlation for the 2P models. For \\(E_{lab}\\lesssim 10\\) AGeV the initial state is in the phase coexistence region. The question of strangeness separation in heavy ion collisions addressed in Ref. [22] for a static system can be reanalyzed in our approach for a dynamically evolving fireball.
The results are shown in Fig. 11 for Au-Au collisions at different bombarding energies within the 2P and MP models. In both models \\(\\rho_{s}/\\rho_{\\bar{s}}>1\\), since there is no chance for the system to pass through a high density baryonic state where \\(\\rho_{s}/\\rho_{\\bar{s}}\\) could be less than unity. In the 2P model we find that strangeness is separated to a lesser degree at the exit point from the phase coexistence region than found in [22]. On the other hand, in the MP model the system evolves much longer and consequently a higher degree of strangeness separation is obtained. This effect is stronger at \\(E_{lab}=10\\;AGeV\\) than at \\(160\\;AGeV\\).

So far the differences between the various models for the expansion dynamics were discussed on the level of global thermodynamical quantities. It is of particular interest to explore physical observables that are directly measured in heavy ion collisions. In the following we consider strange particle multiplicity ratios to discuss the influence of the equation of state on particle yields. The predictions of the different models will be compared at _thermal freeze-out_ where the particle momentum distributions are frozen. We assume a shock-like freeze-out [47] where energy, the total baryonic and strangeness charges are conserved. The thermal freeze-out conditions are assumed to be determined by the fixed energy density \\(\\varepsilon_{f}\\approx 0.9n_{0}m_{N}=0.135\\ GeV/fm^{3}\\). Below this energy density the system consists of a free streaming gas of non-interacting particles. The thermal freeze-out points are shown in Fig. 10 by empty circles on each trajectory for all models and for all collision energies.

The excitation function of the relative yields of \\(K^{+}\\) mesons calculated within the MP model is shown in Fig. 12 as triangles. We also show in this figure the \\(4\\pi\\)-integrated data for the \\(K^{+}/\\pi^{+}\\) ratio obtained in heavy ion collisions at different beam energies.

Figure 11: Time evolution of the ratio of strange to anti-strange quark densities in the hadronic component. The MP and 2P model results are for central \\(Au\\)–\\(Au\\) collisions at different beam energies.

The shape of the kaon excitation function in the MP model is similar to that seen in the data. However, the absolute values are overestimated, especially for the low collision energies. We have to stress, however, that the models discussed here are still not quite suitable to be compared with data. First, the conservation of electric charge was not taken into account. The isospin asymmetry is particularly relevant at low collision energies (below AGS), where it can change the charged particle multiplicity ratios by up to 20%. Second, the hydrodynamical model applied here describes a longitudinally expanding fireball. This is, to a large extent, sufficient at RHIC energies; however, it may not be valid at AGS or SIS energies, where transverse expansion cannot be neglected. Furthermore, only part of the particle mass spectrum was included, with masses up to 1.6 GeV. At AGS and higher energies the contributions from heavier resonances increase the yields of lighter particles. Finally, the system may be out of chemical equilibrium at some stages during the evolution from chemical towards thermal equilibrium [48]. Nevertheless, all these effects cannot account for the observed discrepancy by a factor of five between the MP model and the data at low collision energies (Fig. 12).
However, the differences may be due to the grand canonical (GC) treatment of the strangeness conservation used in the calculations. In the GC ensemble strangeness is conserved on the average and is controlled by the strange chemical potential. Within the statistical approach, the use of the grand canonical ensemble for particle production can be justified only if the number of produced particles that carry a conserved charge is sufficiently large. In this case event-averaged multiplicities can also be treated in a grand canonical formulation. In this approach, the net value of a given charge (e.g. electric charge, baryon number, strangeness, charm, etc.) fluctuates from event to event. These fluctuations can be neglected (relative to the mean particle multiplicity) only if the particles carrying the charges in question are abundant. Here, the charge is indeed conserved on the average and a grand canonical treatment is adequate. However, in the opposite limit of low production yields (as is the case for strangeness production in low energy heavy-ion collisions) the particle number fluctuation can be of the same order as the event averaged value. In this case charge conservation has to be implemented exactly in each event [8, 49]. In statistical physics the exact conservation of quantum numbers requires a canonical (C) formulation of the partition function. The grand canonical \\(Z^{GC}\\) and canonical \\(Z^{C}_{S}\\) partition functions are connected by a cluster decomposition in the fugacity parameter (\\(\\lambda\\equiv\\exp(\\mu_{s}/T)\\)) \\[Z^{GC}(T,V,\\mu_{B},\\lambda)=\\sum_{s=-\\infty}^{s=\\infty}\\lambda^{s}\\ Z^{C}_{s}(T, V,\\mu_{B}). \\tag{42}\\] The relation (42) can be inverted and the canonical partition function with total strangeness \\(S=0\\) is obtained from \\[Z^{C}_{S=0}(T,V,\\mu_{B})=\\frac{1}{2\\pi}\\int_{0}^{2\\pi}d\\phi\\ Z^{GC}(T,V,\\mu_{B}, \\lambda\\to e^{i\\phi}). \\tag{43}\\] Neglecting the contributions from multistrange hyperons and assuming Boltzmann statistics, the density of kaons in the C ensemble is given by [50, 51] \\[n^{C}_{K}=n^{B}_{K}\\ \\frac{{\\cal S}_{1}}{\\sqrt{{\\cal S}_{1}{\\cal S}_{-1}}}\\ \\frac{I_{1}(x)}{I_{0}(x)}, \\tag{44}\\] where the argument of the Bessel function \\(I_{s}(x)\\) is \\[x\\equiv 2\\sqrt{{\\cal S}_{1}{\\cal S}_{-1}}, \\tag{45}\\] with \\[{\\cal S}_{s}=V_{c}\\sum_{j}n_{j}^{B}\\,.\\]

Figure 12: The \\(K^{+}/\\pi^{+}\\) ratio as a function of beam energy. The data points are from [44]. The lines are the MP model results obtained in the grand canonical as well as canonical formulation of strangeness conservation for different parameterizations of the volume parameter \\(V_{c}\\) (see text).

Here the particle density \\(n_{j}^{B}\\) for hadron species \\(j\\) is given by Eq. (22) with \\(\\mu_{j}=\\mu_{B}b_{j}\\). The sum is taken over all particles and resonances carrying strangeness \\(s\\). The volume \\(V_{c}\\) is a model parameter which is interpreted as the strangeness correlation volume.3 In the equilibrium statistical model a correlation volume \\(V\\equiv V_{1}\\simeq 1.9\\pi A_{part}/2\\) was found to reproduce the experimental multiplicity ratios for all measured particles. In our dynamical approach, \\(V_{c}\\) is assumed to be the initial volume of the collision fireball, \\(V_{c}=V_{0}(E_{lab})\\), and thus is energy dependent [52]. Footnote 3: For a more detailed discussion of the interpretation and the role of this parameter see e.g. Ref. [8].
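To make the size of this canonical reduction concrete, the following minimal numerical sketch (ours, not part of the original calculation; the sample values of \\({\\cal S}_{\\pm 1}\\) are hypothetical) evaluates the Bessel-function factor entering Eq. (44):

```python
# Minimal sketch of the canonical suppression factor F = I1(x)/I0(x) of Eq. (44),
# with x = 2*sqrt(S_{+1} S_{-1}); the sample values of S_{+-1} are hypothetical.
import numpy as np
from scipy.special import iv  # modified Bessel functions I_n(x)

def suppression_factor(s_plus: float, s_minus: float) -> float:
    x = 2.0 * np.sqrt(s_plus * s_minus)
    return iv(1, x) / iv(0, x)

for s1 in [0.05, 0.5, 5.0, 50.0]:  # hypothetical S_{+1} = S_{-1} = V_c * sum_j n_j
    print(f"S = {s1:6.2f}  ->  F = {suppression_factor(s1, s1):.3f}")
```

For \\(x\\ll 1\\) (rare strangeness production) the factor behaves as \\(F\\approx x/2\\), giving strong canonical suppression, while for \\(x\\gg 1\\) one finds \\(F\\to 1\\) and the grand canonical result is recovered.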
From Eqs. (44) and (22) it is clear that grand canonical and canonical results for the kaon density are related by the substitution [7]: \\[\\exp(\\mu_{s}/T)\\rightarrow\\frac{{\\cal S}_{1}}{\\sqrt{{\\cal S}_{1}{\\cal S}_{-1}} }\\ \\frac{I_{1}(x)}{I_{0}(x)}. \\tag{46}\\] Thus, the main difference between the C and GC results is contained in a reduction of the fugacity parameter by the factor \\(F\\equiv I_{1}(x)/I_{0}(x)\\), where the argument \\(x\\) is controlled by the correlation volume \\(V_{0}\\), which is decreasing with increasing collision energy. At lower collision energies this decrease of the volume is compensated by an increase of temperature, such that the suppression factor increases with \\(E_{lab}\\). However, for \\(E_{lab}>10\\) GeV there is only a moderate increase of the freeze-out temperature that is not sufficient to overcome the decrease of \\(V_{0}\\). Consequently, for \\(E_{lab}>10\\) GeV the suppression factor starts to decrease with energy. The amount of canonical suppression at fixed \\(E_{lab}\\) also depends strongly on the temperature, which in turn is determined by the energy density at freeze-out. In the equilibrium analysis of particle production at SIS [7] the energy density at chemical and thermal freeze-out was a factor of three lower than the value used in the present dynamical study, \\(\\varepsilon_{f}\\simeq 0.135\\) GeV/fm\\({}^{3}\\). Consequently, for \\(V_{c}=V_{0}\\) and for \\(1<E_{lab}<2\\) GeV the canonical suppression found in Ref. [7] was much stronger than that shown in Fig. 13. We have not tuned the parameters to reproduce previous results. In low energy heavy ion collisions the expansion trajectories and the freeze-out parameters will change once the transverse expansion is taken into account.

In Fig. 12 we show the effect of the canonical suppression on the excitation function calculated in the MP model with two different parameterizations of the correlation volume: \\(V_{c}=V_{0}\\) and \\(V_{c}=\\min(V_{0},V_{2})\\), where \\(V_{2}=V_{1}/5\\). As expected, there is a noticeable decrease of the \\(K^{+}\\) yield due to the exact treatment of strangeness conservation. The suppression of strangeness at energies beyond AGS is entirely due to the energy-dependent Lorentz contraction of the initial correlation volume. In Fig. 12 we also present the results of a calculation where the choice of \\(V_{c}=\\min(V_{0},V_{2})\\) is optimized to reproduce the \\(K^{+}/\\pi^{+}\\) data.

Figure 13: The beam energy dependence of the strangeness suppression factor for central \\(Au\\)–\\(Au\\) collisions. This factor is calculated within the MP model at freeze-out for different parameterizations of the correlation volume \\(V_{c}\\) (see text).

The above analysis of the \\(K^{+}\\) excitation function clearly shows that, due to associated strangeness production and the small production cross sections at low collision energies, one has to implement exact strangeness conservation. In the following we will implement this concept in all models and discuss the predictions for strangeness production and its energy dependence. In the calculations we use the correlation volume \\(V_{c}=\\min(V_{0},V_{2})\\). In Fig. 14 we show the relative excitation functions for different strange mesons and baryons for the four hydrodynamical models.

Figure 14: The ratios of \\(4\\pi\\)-integrated strange particle yields to pion yields for central \\(Au\\)–\\(Au\\) collisions as a function of beam energy. The compilation of experimental data is taken from [44, 53]. The calculated excitation functions are for different EoS with the canonical suppression factor.
The most striking result seen in this figure is that all models yield very similar results for the strangeness excitation functions. This is particularly true for the production of \\(K^{+}/\\pi^{+}\\) and \\(\\Lambda/\\pi^{+}\\), where the results of all models besides the InHG are hardly distinguishable. Some differences are seen at the level of the \\(K^{-}\\) excitation function, which are mainly due to the larger sensitivity of the \\(K^{-}/\\pi^{-}\\) ratio to the value of the temperature. It is interesting to note that all models show a maximum in the \\(\\Lambda/\\pi\\) excitation function for \\(10<E_{lab}<30\\) GeV. Such a maximum is found also in equilibrium models [54]. The relative strangeness content of the produced particles in heavy ion collisions is characterized by the Wroblewski factor [55] \\[\\lambda_{S}=\\frac{2<s\\bar{s}>}{<u\\bar{u}>+<d\\bar{d}>} \\tag{47}\\] where the quantities in angular brackets refer to the number of newly created quark-antiquark _pairs_.

Figure 15: The Wroblewski ratio \\(\\lambda_{S}\\) as a function of beam energy for central \\(Au\\)–\\(Au\\) collisions. The contributions of mesons and baryons are shown separately. The points at AGS energies are from [44].

The Wroblewski factor is shown in Fig. 15 for different collision energies. The separate contributions to \\(\\lambda_{S}\\) from strange mesons and baryons, as well as its overall value, are calculated within the MP, 2P and IdHG models. The results are compared with \\(\\lambda_{S}\\) obtained in an equilibrium model analysis of experimental data at AGS energies. There is a surprising agreement of all dynamical models on the relative strangeness content of the fireball at freeze-out. The results are also consistent with the equilibrium model [54]. However, the maximum of the Wroblewski factor seen in Fig. 15 is broader than that previously seen in the equilibrium canonical model [54]. In the dynamical models there is also a small shift in the position of this maximum towards lower energy.

## 4 Summary and conclusions

The main objective of this article was to explore the influence of the expansion dynamics, the equation of state and the nature of the deconfinement phase transition on strangeness production in heavy ion collisions. We have discussed and formulated different models for a phase transition in hot and dense QCD matter. The thermodynamical properties of these models and the role of the order of the phase transition, as well as of the interactions between the particles, have been analyzed. We have addressed the question of the Gibbs construction of the phase transition in the presence of two conserved charges and emphasized the problem of causality and thermodynamical consistency. The strangeness separation in the transition region from the quark-gluon plasma to the hadronic phase was also studied. The asymmetry in the relative concentration of strange and anti-strange quarks in the hadronic and quark-gluon components in the phase coexistence region was found in all models that exhibit a phase transition. However, the largest effect was observed in the mixed-phase model with a crossover-type deconfinement phase transition. The differences in the equilibrium thermodynamics of the models were studied on the dynamical level. We have shown that the hydrodynamical expansion trajectories of the fireball in the \\(T\\)-\\(\\mu_{B}\\) plane are very sensitive to the equation of state. We considered the effect of the different expansion paths on strangeness production.
Our detailed analysis shows that there is almost no sensitivity of strangeness observables to the equation of state or to the expansion trajectories. This was demonstrated for several strange particle excitation functions. To relate the model predictions to experimental data we have extended our study to a canonical formulation of strangeness conservation. We have discussed the phenomenological limitations of our dynamical models and the possible extensions needed to provide a quantitative description of the observed particle yields in heavy ion collisions. Exact strangeness conservation substantially reduces the strange particle yields in heavy ion collisions for \\(E_{lab}<10\\) GeV. For higher energies a moderate suppression is also found if the beam-energy dependence of the volume parameter \\(V_{c}\\) is taken into account. We have shown that the assumption that \\(V_{c}\\) is the volume of the initially produced Lorentz contracted fireball may lead to a negative slope in the energy dependence of the \\(K^{+}/\\pi^{+}\\) ratio. However, within the considered models, the almost singular behavior of the \\(K^{+}/\\pi^{+}\\) excitation function near \\(E_{lab}\\approx 20\\) GeV found recently by the NA49 collaboration [56] was not reproduced. Simplified hydrodynamics with the assumption of a shock-like particle freeze-out in heavy-ion collisions results in a very smooth behavior of the strange particle excitation functions.

## Acknowledgements

Stimulating discussions with Yu. Ivanov are gratefully acknowledged. We also thank J. Knoll, E. Kolomeitsev, A. Parvan, A. Shanenko and D. Voskresenski for useful comments. E.G.N. and V.D.T. acknowledge the hospitality of the Theory Group of GSI, where this work has been done. This work was supported in part by DFG (project 436 RUS 113/558/0-2) and RFBR (grant 03-02-04008). K.R. acknowledges the support of the Alexander von Humboldt Foundation (AvH) and of the Polish State Committee for Scientific Research (KBN) grant 2P03 (06925).

## References

* [1] For a review see e.g. H. Satz, _Rep. Prog. Phys._**63**, 1511 (2000); S.A. Bass, M. Gyulassy, H. Stocker, and W. Greiner, _J. Phys._**G25**, R1 (1999); E.V. Shuryak, _Phys. Rep._**115**, 151 (1984).
* [2] F. Karsch, E. Laermann, and A. Peikert, _Nucl. Phys._**B605**, 579 (2001).
* [3] See e.g. U. Heinz and S.M.H. Wong, _Nucl. Phys._**A715**, 649 (2003); I. Vitev and M. Gyulassy, _Phys. Rev. Lett._**89**, 252301 (2002); X.-N. Wang, nucl-th/0307036; E.L. Bratkovskaya, et al., nucl-th/0307098.
* [4] See e.g. Proceedings of Quark Matter 2002, _Nucl. Phys._**A715** (2003).
* [5] J. Rafelski, _Phys. Rep._**88**, 331 (1982); P. Koch, B. Muller and J. Rafelski, _Phys. Rep._**142**, 167 (1986).
* [6] P. Braun-Munzinger, I. Heppe and J. Stachel, _Phys. Lett._**B465**, 15 (1999).
* [7] J. Cleymans, H. Oeschler and K. Redlich, _Phys. Rev._**C59**, 1663 (1999); _Phys. Lett._**B485**, 27 (2000); H. Oeschler, _J. Phys._**G27**, 257 (2001).
* [8] P. Braun-Munzinger, K. Redlich and J. Stachel, nucl-th/0304013.
* [9] F. Becattini, et al., _Phys. Rev._**C64**, 024901 (2001); K. Redlich, _Nucl. Phys._**A698**, 94c (2002); P. Braun-Munzinger, D. Magestro, K. Redlich and J. Stachel, _Phys. Lett._**B518**, 41 (2001); W. Broniowski and W. Florkowski, _Phys. Rev._**C65**, 064905 (2002).
* [10] R. Averbeck, R. Holzmann, V. Metag and R.S. Simon, _Phys. Rev._**C67**, 024903 (2003).
* [11] W. Cassing, _Nucl. Phys._**A661**, 468c (1999).
* [12] J.C. Dunlop and C.A. Ogilvie, _Phys. Rev._**C61**, 031901 (1999); C.A. Ogilvie, nucl-ex/0104010.
* [13] F. Karsch, _Lect. Notes Phys._**583**, 202 (2002); E. Laermann and O. Philipsen, hep-ph/0303042.
* [14] Z. Fodor, S.D. Katz and K.K. Szabo, hep-lat/0208078; C.R. Allton, S. Ejiri, S.J. Hands, O. Kaczmarek, F. Karsch, E. Laermann and C. Schmidt, hep-lat/0305007, to appear in _Phys. Rev._**D** (2003).
* [15] F. Karsch, K. Redlich and A. Tawfik, hep-ph/0303108, to appear in _Eur. Phys. J._ (2003).
* [16] F. Karsch, K. Redlich and A. Tawfik, hep-ph/0306208, to appear in _Phys. Lett._**B** (2003).
* [17] J.P. Blaizot, E. Iancu, and A. Rebhan, _Phys. Rev._**D63**, 065003 (2001).
* [18] A. Peshier, B. Kampfer and G. Soff, _Phys. Rev._**C61**, 045203 (2000); _Phys. Rev._**D66**, 094003 (2002); J. Letessier and J. Rafelski, _Phys. Rev._**C67**, 031902 (2003); K.K. Szabo and A.I. Toth, _JHEP_**306**, 008 (2003).
* [19] A. Dumitru and R.D. Pisarski, _Phys. Lett._**B525**, 95 (2002); A. Dumitru and R.D. Pisarski, _Phys. Rev._**D66**, 096003 (2002); hep-ph/0204223.
* [20] K.S. Lee, M.J. Rhoades-Brown and U. Heinz, _Phys. Lett._**B174**, 123 (1986).
* [21] B. Lukacs, J. Zimanyi and N.L. Balazs, _Phys. Lett._**B183**, 27 (1987).
* [22] C. Greiner, P. Koch and H. Stocker, _Phys. Rev. Lett._**58**, 1825 (1987).
* [23] H.W. Barz, B.L. Friman, J. Knoll, and H. Schulz, _Phys. Rev._**D40**, 157 (1989).
* [24] J. Cleymans, J. Stalnacke, E. Suhonen, and G.M. Weber, _Z. Phys._**C53**, 317 (1992).
* [25] J. Cleymans, M.I. Gorenstein, J. Stalnacke, and E. Suhonen, _Phys. Scripta_**48**, 277 (1993).
* [26] C.M. Hung and E.V. Shuryak, _Phys. Rev. Lett._**75**, 4003 (1995); _Phys. Rev._**C57**, 1891 (1998).
* [27] S.A. Bass and A. Dumitru, _Phys. Rev._**C61**, 064909 (2000).
* [28] E.G. Nikonov, A.A. Shanenko, and V.D. Toneev, _Heavy Ion Phys._**8**, 89 (1998); Yad. Fiz. **62**, 1301 (1999) [translated as Physics of Atomic Nuclei, **62**, 1226 (1999)].
* [29] V.D. Toneev, E.G. Nikonov, and A.A. Shanenko, in _Nuclear Matter in Different Phases and Transitions_, eds. J.-P. Blaizot, X. Campi, and M. Ploszajczak, Kluwer Academic Publishers (1999), p.309.
* [30] M. I. Gorenstein and S. N. Yang, _Phys. Rev._**D52**, 5206 (1995); _ibid_, _J. Phys._**G21**, 1053 (1995).
* [31] T.S. Biro, A.A. Shanenko and V.D. Toneev, Yad. Fiz. **66**, 1015 (2003) [translated as Physics of Atomic Nuclei, **66**, 982 (2003)]; nucl-th/0102027.
* [32] L.D. Landau and E.M. Lifshitz, _Statistical Physics_, vol.5, Part 1, Pergamon Press, 1980.
* [33] N. Glendenning, _Phys. Rev._**D46**, 1274 (1992); _Phys. Rep._**342**, 393 (2001).
* [34] J. Cleymans, R.V. Gavai and E. Suhonen, _Phys. Rep._**130**, 217 (1986); J. Cleymans, K. Redlich, H. Satz and E. Suhonen, _Z. Phys._**C58**, 347 (1993).
* [35] U. Heinz, P.R. Subramanian, H. Stocker, and W. Greiner, _J. Phys._**G12**, 1237 (1986); J. Cleymans, K. Redlich, H. Satz, and E. Suhonen, _Z. Phys._**C33**, 151 (1986); E. Suhonen and S. Sohlo, _J. Phys._**G13**, 1487 (1987); H. Kouno and F. Takagi, _Z. Phys._**C42**, 209 (1989); J. Cleymans and E. Suhonen, _Z. Phys._**C37**, 51 (1987); J. Cleymans, H. Satz, E. Suhonen, and D.W. von Oertzen, _Phys. Lett._**B242**, 111 (1990).
* [36] D.H. Rischke, M.I. Gorenstein, H. Stocker, and W. Greiner, _Z. Phys._**C51**, 485 (1991).
* [37] J. Zimanyi et al., _Nucl. Phys._**A484**, 647 (1988).
* [38] N. Prasad, K.K. Singh and C.P. Singh, _Phys. Rev._**C62**, 037903 (2001).
* [39] E.G. Nikonov, A.A. Shanenko and V.D. Toneev, _Heavy Ion Phys._**4**, 333 (1996).
* [40] K. Redlich and H. Satz, _Phys. Rev._**D33**, 3747 (1986).
* [41] V.D. Toneev, N.S. Amelin, K.K. Gudima, and S.Yu. Sivoklokov, _Nucl. Phys._**A519**, 463c (1990); N.S. Amelin _et al._, _Phys. Rev._**C44**, 1541 (1991); N.S. Amelin _et al._, _Phys. Rev._**C47**, 2299 (1993).
* [42] P.R. Subramanian, H. Stocker and W. Greiner, _Phys. Lett._**B173**, 468 (1986).
* [43] J. Cleymans and K. Redlich, _Phys. Rev. Lett._**81**, 5284 (1998).
* [44] R. Stock, hep-ph/0204032; The NA49 Collaboration, _Phys. Rev._**C66**, 054902 (2002); nucl-ex/0205002; M. van Leeuwen for the NA49 Collaboration, _Nucl. Phys._**A715**, 161 (2003); nucl-ex/0208014.
* [45] J. Cleymans and K. Redlich, _Phys. Rev._**C60**, 054908 (1999).
* [46] V.D. Toneev, J. Cleymans, E.G. Nikonov, K. Redlich, and A.A. Shanenko, _J. Phys._**G27**, 827 (2001).
* [47] K.A. Bugaev, _Nucl. Phys._**A606**, 59 (1996).
* [48] R. Rapp and E.V. Shuryak, _Phys. Rev. Lett._**86**, 2980 (2001).
* [49] C.M. Ko, V. Koch, Z. Lin, K. Redlich, M. Stephanov, and X.N. Wang, _Phys. Rev. Lett._**86**, 5438 (2001).
* [50] R. Hagedorn and K. Redlich, _Z. Phys._**C27**, 541 (1985); K. Redlich and L. Turko, _Z. Phys._**C5**, 201 (1980).
* [51] J. Cleymans, K. Redlich and E. Suhonen, _Z. Phys._**C51**, 137 (1991).
* [52] We thank P. Braun-Munzinger for pointing out this issue.
* [53] K. Redlich, _Nucl. Phys._**A698**, 94 (2002); hep-ph/0105104.
* [54] P. Braun-Munzinger, J. Cleymans, H. Oeschler, and K. Redlich, _Nucl. Phys._**A697**, 902 (2002); hep-ph/0105104.
* [55] A. Wroblewski, _Acta Phys. Polon._**B16**, 379 (1985).
* [56] M. Gazdzicki, hep-ph/0305176.
Thermodynamical properties of hot and dense nuclear matter are analyzed and compared for different equations of state (EoS). It is argued that the softest point of the equation of state and the strangeness separation on the phase boundary can manifest themselves in observables. The influence of the EoS and the order of the phase transition on the expansion dynamics of nuclear matter and on the strangeness excitation functions is analyzed. It is shown that bulk properties of strangeness production in A-A collisions depend only weakly on the particular form of the EoS. The predictions of different models are related to experimental data on strangeness production.
Comment on \"Quantum waveguide array generator for performing Fourier transforms: Alternate route to quantum computing\" [Apl 79, 2823 (2001)] Daniel A. Lidar [email protected] Chemical Physics Theory Group, University of Toronto, 80 St. George Street, Toronto, Ontario M5S 3H6, Canada ###### The arguments leading to this conclusion are unfortunately based on an incorrect assumption: that _interference_ is sufficient to obtain a quantum speedup. The essence of the waveguide approach is quantum interference. Indeed, the authors claim: \"Given that quantum mechanics is primarily a wave mechanics concept, these examples based on electromagnetic and acoustic waves suggest that there should be a more natural approach to quantum signal processing than that found in the existing quantum computing literature.\" It is by now well appreciated that the exponential speedup offered by quantum computers in computing the QFT _is impossible without entanglement_[2]. Detailed discussions of this issue exist in the literature, e.g., [3]. Most recently, Jozsa and Linden proved that for any quantum algorithm operating on pure states, the presence of multi-partite entanglement, with a number of parties that increases unboundedly with input size, is necessary if the quantum algorithm is to provide an exponential speedup over classical computation (Theorem 1, [4]). Entanglement is a property that depends on the existence of a _tensor product_ Hilbert space. This implies that it is possible to _efficiently_ (i.e., with resources that scale polynomially in the number of qubits) construct _local_ (e.g., single- and two-qubit) operators, even though such operators are represented by exponentially large matrices. It is further understood that approaches to quantum computing that rely on interference alone, always incur some form of exponential overhead (in energy, resolution, or number of building blocks of the quantum circuit) [5; 6]. The waveguide approach of Akis & Ferry is no different: by relying on interference, without entanglement, the authors have eliminated a key ingredient of the quantum speedup. Their proposed devise is not equivalent to the standard qubit paradigm of quantum computing because it does not support a tensor-product Hilbert space. It is a multi-level quantum system, which has computational power equivalent to an experiment in classical wave mechanics. The exponential overhead they incur is in the size of their waveguide, as is immediately evident from Fig. 1(b) in their paper. Their waveguide has the shape of a binary tree; the distance between its nodes (the radiating elements) cannot be made arbitrarily small. Hence the overall size of the device must grow exponentially. This can certainly not qualify as a valid quantum computer. Support from the DARPA-QuIST program (managed by AFOSR under agreement No. F49620-01-1-0468) is gratefully acknowledged. ## References * (1) R. Akis and D.K. Ferry, Appl. Phys. Lett. **79**, 2823 (2001). * (2) Note that the exponential speedup of the QFT on a quantum computer is still unproven, i.e., a classical (probabilistic) algorithm that is as fast as the quantum one may still be discovered, although this seems unlikely. * (3) A. Ekert and R. Jozsa, Phil. Trans. Roy. Soc. (Lond.) **356**, 1769 (1998). * (4) R. Jozsa and N. Linden, eprint quant-ph/0201143. * (5) D.A. Meyer, P.G. Kwiat, R.J. Hughes, P.H. Bucksbaum, J. Ahn, and T.C. Weinacht, Science **289**, 1431 (2000). * (6) D.A. Meyer, Phys. Rev. Lett. **85**, 2014 (2000).
In their letter [1] Akis & Ferry propose a quantum waveguide array approach for performing quantum Fourier transforms (QFTs). The waveguide produces \\(2^{n}\\) waves at its output with controllable relative phases; \\(n\\) is the number of binary splits of the input wave. The interference pattern from these waves is recorded and implements a Fourier transform. The authors claim that their waveguide approach is \"a more practical means\" and an alternative to the \"qubit paradigm that currently dominates the field of quantum computing\" (double quotation marks are direct quotes from [1]). The main result claimed by the authors is an implementation of the QFT that is as efficient as that obtained using the standard paradigm. In their conclusions they say: \" it is unclear whether the promised speedup in certain computations arises from the quantum nature of the systems or from the highly parallel analog processing that is provided by the array of qubits. We have argued that it is the latter that is important, and that equal speedup is available using analog processing arrays whose operation is based on general wave principles.\"
# Action scales for quantum decoherence and their relation to structures in phase space

Daniel Alonso\\({}^{1}\\), S. Brouard\\({}^{2}\\), Jose P. Palao\\({}^{2}\\), R. Sala Mayato\\({}^{2}\\) \\({}^{1}\\) Departamento de Fisica Fundamental y Experimental, Electronica y Sistemas, Universidad de La Laguna, La Laguna 38203, Tenerife, Spain \\({}^{2}\\) Departamento de Fisica Fundamental II, Universidad de La Laguna, La Laguna 38203, Tenerife, Spain

###### pacs: 03.65.Yz, 03.65.Ta

## I Introduction

The superposition principle and the interference terms that it generates are the key components of the quantum formalism, and are responsible for the main differences between the quantum and classical world. The boundary between these two worlds and the mechanisms that prevent the interference terms from being apparent in the classical realm have been the subjects of many theoretical and experimental studies since the very beginning of the "quantum era". Significant advances in the analysis and experimentation on the interaction between mesoscopic and microscopic systems are pushing the boundary between the two worlds. An example is the study of measurement processes where the 'monitoring' apparatus is represented by a system with an increasingly larger number of degrees of freedom (more classical) and the analysis of the associated disappearance of the non-diagonal terms of the density operator of the microscopic system in some preferred matrix representation [1; 2]. The study of the effectiveness of a given system that plays the role of an environment or of a measurement apparatus to induce decoherence in another system is of fundamental and practical interest. For instance, the advances in the fields of quantum communication and quantum computation depend crucially on our ability to manipulate entanglement [3] and to control the capability of the environment or measurement devices to induce decoherence in our qubit (pointer) system [4; 5]. Many actual interactions between a two-level system \\(\\mathcal{S}\\), spanned by the pointer states \\(\\left|+\\right\\rangle\\) and \\(\\left|-\\right\\rangle\\), and a system \\(\\mathcal{E}\\) playing the role of the environment (for instance as a 'monitoring' apparatus), can be described by means of a coupling Hamiltonian of von Neumann's form [4]. In particular, we will use a generic term \\(\\hat{V}_{\\mathcal{SE}}=(\\left|+\\right\\rangle\\left\\langle+\\right|-\\left|- \\right\\rangle\\left\\langle-\\right|)\\)\\((\\mathbf{c_{q}}\\cdot\\mathbf{\\hat{q}}+\\mathbf{c_{p}}\\cdot\\mathbf{\\hat{p}})\\), where \\(\\mathbf{\\hat{q}}\\equiv(\\hat{q}_{1},\\ldots,\\hat{q}_{f})\\) and \\(\\mathbf{\\hat{p}}\\equiv(\\hat{p}_{1},\\ldots,\\hat{p}_{f})\\) are position and momentum operators for an environmental system with \\(f\\) degrees of freedom (\\([\\hat{q}_{j},\\hat{p}_{j}]=i\\hbar\\), \\(j=1,\\ldots,f\\)). The coefficients \\(\\mathbf{c_{q}}\\equiv(c_{q}^{(1)},\\ldots,c_{q}^{(f)})\\) and \\(\\mathbf{c_{p}}\\equiv(c_{p}^{(1)},\\ldots,c_{p}^{(f)})\\) characterise the strength of the coupling.
The reduced density operator describing the state of the system \\(\\mathcal{S}\\) after its coupling with the environment during a time interval \\(\\delta t\\) is given by \\[\\hat{\\rho}_{\\mathcal{S}}\\,=\\,\\left|\\alpha\\right|^{2}\\left|+\\right\\rangle\\left\\langle+\\right|+\\left|\\beta\\right|^{2}\\left|-\\right\\rangle\\left\\langle-\\right|+\\left(\\alpha\\beta^{*}\\left\\langle\\psi_{-}|\\psi_{+}\\right\\rangle\\left|+\\right\\rangle\\left\\langle-\\right|+\\text{H.c.}\\right)\\,, \\tag{1}\\] where \\(\\left|\\psi_{\\pm}\\right\\rangle\\equiv\\hat{D}(\\mp\\mathbf{c_{p}}\\delta t,\\mp \\mathbf{c_{q}}\\delta t)\\left|\\psi\\right\\rangle\\), \\(\\hat{D}(\\delta\\mathbf{q},\\delta\\mathbf{p})\\equiv\\exp\\{i(\\mathbf{\\hat{p}}\\cdot\\delta\\mathbf{q}+\\mathbf{\\hat{q}}\\cdot\\delta\\mathbf{p})/\\hbar\\}\\), \\(\\delta\\mathbf{q}\\) and \\(\\delta\\mathbf{p}\\) are displacement vectors in \\(f\\)-dimensional spaces, and H.c. denotes the Hermitian conjugate of the preceding term in the equation. The states of the environmental and two-level systems immediately prior to the interaction are \\(\\left|\\psi\\right\\rangle\\) and \\(\\left|\\chi\\right\\rangle\\equiv\\alpha\\left|+\\right\\rangle+\\beta\\left|-\\right\\rangle\\) respectively. We have assumed that the coupling strength is large enough so that the evolution induced by each system Hamiltonian (\\(\\hat{H}_{\\mathcal{S}}\\) and \\(\\hat{H}_{\\mathcal{E}}\\)) can be neglected during the interaction time \\(\\delta t\\)[4]. Despite the simplicity of the model considered, it contains the basic elements relevant to our discussion. Eq. (1) relates the value of the non-diagonal term of the reduced density matrix of \\(\\mathcal{S}\\) in the preferred basis \\(\\{\\left|+\\right\\rangle,\\left|-\\right\\rangle\\}\\) to the mean value of a displacement operator over the state \\(\\left|\\psi\\right\\rangle\\) of system \\(\\mathcal{E}\\), since \\(\\left\\langle\\psi_{-}|\\psi_{+}\\right\\rangle=\\left\\langle\\psi\\right|\\hat{D}(-2 \\mathbf{c}_{\\mathbf{p}}\\delta t,-2\\mathbf{c}_{\\mathbf{q}}\\delta t)\\left|\\psi\\right\\rangle\\). Therefore the capability of \\(\\mathcal{E}\\) to induce decoherence in \\(\\mathcal{S}\\) through the coupling term \\(\\hat{V}_{\\mathcal{SE}}\\) is characterised by \\[C_{\\psi}(\\delta\\mathbf{q},\\delta\\mathbf{p})\\equiv\\left\\langle\\psi|\\hat{D}( \\delta\\mathbf{q},\\delta\\mathbf{p})|\\psi\\right\\rangle\\,=\\,e^{i\\delta\\mathbf{q} \\cdot\\delta\\mathbf{p}/2\\hbar}\\int d^{f}\\!q\\,e^{i\\mathbf{q}\\cdot\\delta\\mathbf{p} /\\hbar}\\,\\psi^{*}(\\mathbf{q})\\,\\psi(\\mathbf{q}+\\delta\\mathbf{q})\\,, \\tag{2}\\] where \\(\\psi(\\mathbf{q})\\equiv\\left\\langle\\mathbf{q}|\\psi\\right\\rangle\\), \\(\\delta\\mathbf{q}=-2\\mathbf{c}_{\\mathbf{p}}\\delta t\\), \\(\\delta\\mathbf{p}=-2\\mathbf{c}_{\\mathbf{q}}\\delta t\\), and \\(d^{f}\\!q\\) (\\(d^{f}\\!p\\)) is the \\(f\\)-dimensional differential element of volume in positions (momenta). All integrals in this paper run over the entire available volume. Complete decoherence is reached whenever the two states \\(\\left|\\psi_{+}\\right\\rangle\\) and \\(\\left|\\psi_{-}\\right\\rangle\\) are orthogonal to each other; in other words, when \\(C_{\\psi}=0\\). At this point, it is important to characterise the scale for which displacements \\((\\delta\\mathbf{q},\\delta\\mathbf{p})\\) in phase space will produce a significant decay of this expectation value of \\(\\hat{D}\\).
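To illustrate how this scale question can be probed in practice, the following minimal sketch (our illustration, not part of the original analysis) evaluates Eq. (2) on a coordinate grid for a hypothetical one-dimensional Gaussian environmental state; the coordinate shift is rounded to a whole number of grid points:

```python
# Minimal numerical evaluation of Eq. (2) for f = 1 (hbar = 1, arbitrary units).
import numpy as np

hbar = 1.0
q = np.linspace(-20, 20, 4001)

# hypothetical environmental state: a normalized Gaussian with sigma_q = 1
sigma_q = 1.0
psi = np.exp(-q**2 / (4 * sigma_q**2))
psi /= np.sqrt(np.trapz(np.abs(psi)**2, q))

def overlap_C(psi, q, dq, dp, hbar=1.0):
    """C_psi(dq, dp) of Eq. (2); the shift dq is rounded to the grid spacing."""
    n = int(round(dq / (q[1] - q[0])))
    psi_shift = np.roll(psi, -n)  # psi(q + dq); wrap-around is negligible here
    integrand = np.exp(1j * q * dp / hbar) * np.conj(psi) * psi_shift
    return np.exp(1j * dq * dp / (2 * hbar)) * np.trapz(integrand, q)

for dp in [0.0, 0.25, 0.5, 1.0]:
    print(f"dp = {dp:4.2f}   |C|^2 = {abs(overlap_C(psi, q, 0.5, dp))**2:.4f}")
```

For this minimum-uncertainty state the overlap decays on displacement scales set by the widths \\(\\sigma_{q}\\) and \\(\\sigma_{p}=\\hbar/2\\sigma_{q}\\), anticipating the discussion that follows.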
The main subject of our interest is to find an action scale associated to the effectiveness of system \\(\\mathcal{E}\\) to induce decoherence in system \\(\\mathcal{S}\\), and to describe its dependence on the particular environmental state. This question has been previously studied by Zurek [2] by means of the Wigner phase space distribution associated to the state \\(\\left|\\psi\\right\\rangle\\)[6], \\[W_{\\psi}(\\mathbf{q},\\mathbf{p})\\,=\\,\\frac{1}{(2\\pi\\hbar)^{f}}\\int d^{f}\\!q^{ \\prime}\\,e^{i\\mathbf{q}^{\\prime}\\cdot\\mathbf{p}/\\hbar}\\,\\psi(\\mathbf{q}- \\mathbf{q}^{\\prime}/2)\\,\\psi^{*}(\\mathbf{q}+\\mathbf{q}^{\\prime}/2)\\,. \\tag{3}\\] In particular, Moyal's formula [7] \\[\\left|C_{\\psi}(\\delta\\mathbf{q},\\delta\\mathbf{p})\\right|^{2}\\,=\\,(2\\pi\\hbar)^ {f}\\,\\int\\,d^{f}\\!q\\,d^{f}\\!p\\,W_{\\psi}(\\mathbf{q},\\mathbf{p})\\,W_{\\psi}( \\mathbf{q}+\\delta\\mathbf{q},\\mathbf{p}+\\delta\\mathbf{p}) \\tag{4}\\] was used to analyse the behaviour of the overlap \\(|C_{\\psi}|^{2}\\) with \\(\\delta\\mathbf{q}\\) and \\(\\delta\\mathbf{p}\\). The choice of the Wigner phase space distribution was motivated by this simple expression for the scalar product between \\(\\left|\\psi_{+}\\right\\rangle\\) and \\(\\left|\\psi_{-}\\right\\rangle\\). In Ref. [2] Zurek showed that for a given time-dependent quantum chaotic system in one dimension (\\(f=1\\)) confined to a phase space volume characterised by the classical action \\(A\\), the Wigner distribution associated to the state develops in time a spotty random structure on the scale \\(\\hbar^{2}/A\\). Using Eq. (4) he argued that \\(|C_{\\psi}|^{2}\\approx 0\\) for phase space displacements on the scale of the smallest structure of the Wigner distribution \\(W_{\\psi}(q,p)\\). The basis for this result is twofold: (a) Displacements characterised by \\(\\delta q\\delta p\\approx\\hbar^{2}/A\\) produce a significant decrease on the value of the integral in Eq. (4) due to the destructive interference between \\(W_{\\psi}(q,p)\\) and \\(W_{\\psi}(q+\\delta q,p+\\delta p)\\), and (b) the random distribution of the patches in the structure appearing in the Wigner function associated to such system states prevents the presence of recurrences in the value of the overlap. Jordan and Srednicki [8] extended the analysis in Ref. [2] to systems with an arbitrary number of degrees of freedom by using \\[C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})\\,=\\,\\int\\,d^{\\prime}\\!q\\,d^{\\prime}\\!p\\, e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\,W_{\\psi}({\\bf q },{\\bf p})\\,. \\tag{5}\\] This equation establishes a relation between the small-scale (large-scale) structure of \\(W_{\\psi}\\) in the variables \\(({\\bf q},{\\bf p})\\) and the large-scale (small-scale) structure of \\(C_{\\psi}\\) in the variables \\((\\delta{\\bf p},\\delta{\\bf q})\\). Analysing a two-dimensional billiard and a gas of \\(N\\) hard spheres in a three-dimensional box (assuming the Berry-Voros conjecture [9] in both cases), they concluded that for systems with a small number of degrees of freedom, displacements \\(\\delta q_{i}\\approx L_{i}\\) and \\(\\delta p_{i}\\approx P_{i}\\) are needed to avoid oscillations in the overlap, where \\(L_{i}\\) and \\(P_{i}\\) are typical classical values of the position \\(q_{i}\\) and momentum \\(p_{i}\\) respectively (\\(i=1,\\ldots,f\\)). This means that displacements of the order of the size of the state support are needed to guarantee orthogonality in the general case.
However, for systems with a large number of degrees of freedom they found that the conclusions in Ref. [2] remain valid, supporting the idea that a larger number of degrees of freedom increases the effectiveness in causing decoherence. Some care must be taken when relating the results in Refs. [2] and [8] since in principle the Berry-Voros conjecture is not valid for the system analysed by Zurek in Ref. [2] and the dependence of the overlap with the displacement could have qualitatively different features. In this work we characterise the behaviour of \\(C_{\\psi}\\) using a quantity \\(\\Delta S\\), with units of action, associated to the displacement \\((\\delta{\\bf q},\\delta{\\bf p})\\). A formal series expansion of \\(\\hat{D}\\) will allow us to identify the scale in the action \\(\\Delta S(\\delta{\\bf q},\\delta{\\bf p})\\) for which the overlap decreases significantly for any quantum system, irrespective of the number of degrees of freedom. This scale is manifested in the size of the structures present in the distribution associated to the state in some phase space representations, but the two do not necessarily coincide. The paper is organised as follows. In Sec. II we define the characteristic action \\(\\Delta S\\) and determine the scale relevant for the decay of the overlap. In Sec. III we establish the relation between \\(C_{\\psi}\\) and the structure of the distribution associated to the state in an arbitrary phase space representation. The next two sections are devoted to studying in detail the dependence of \\(C_{\\psi}\\) on \\(\\Delta S\\) for states of particular quantum systems. Sec. IV considers a system with a time-dependent Hamiltonian whose classical counterpart exhibits chaos. In Sec. V we analyse the case of non-linear systems with a confining potential and discrete spectrum. In this case the main features of \\(C_{\\psi}\\) can be obtained from time average properties of the state evolution. We will focus on quantum systems with time-independent Hamiltonian for which analytical models are worked out by using the Berry-Voros conjecture [9]. Finally, in Sec. VI the main results of this work are discussed.

## II Characteristic action scales for the decay of \\(C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})\\)

A displacement operator \\(\\hat{D}(\\delta{\\bf q},\\delta{\\bf p})\\) acting on the state of an \\(f\\)-dimensional quantum system \\({\\cal E}\\), that describes an environment or a 'monitoring' apparatus, can be written as \\[\\hat{D}(\\delta{\\bf q},\\delta{\\bf p})\\,=\\,e^{i(\\hat{\\bf p}\\cdot\\delta{\\bf q}+ \\hat{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\equiv e^{i\\hat{S}(\\delta{\\bf q},\\delta{ \\bf p})/\\hbar}\\,. \\tag{6}\\] The main features of \\(|C_{\\psi}|^{2}\\) are therefore related to the fluctuation properties of the operator \\(\\hat{S}(\\delta{\\bf q},\\delta{\\bf p})\\), since the expectation value of \\(\\hat{D}\\) equals the characteristic function of \\(\\hat{S}\\) (see Eq. (2)). A formal expansion of \\(\\hat{D}\\) in terms of \\(\\hat{S}\\) gives \\[C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})\\,=\\,1+\\frac{i}{\\hbar}\\langle\\hat{S}\\rangle_{\\psi}- \\frac{1}{2\\hbar^{2}}\\langle\\hat{S}^{2}\\rangle_{\\psi}+O\\left(\\frac{s^{3}\\delta^ {3}}{\\hbar^{3}}\\right)\\,, \\tag{7}\\] and for the overlap, \\[|C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})|^{2}\\,=\\,1-\\frac{1}{\\hbar^{2}}\\Bigl{(} \\langle\\hat{S}^{2}\\rangle_{\\psi}-\\langle\\hat{S}\\rangle_{\\psi}^{2}\\Bigr{)}+O \\left(\\frac{s^{4}\\delta^{4}}{\\hbar^{4}}\\right)\\,. \\tag{8}\\] We denote by \\(\\delta^{n}\\) general products of \\(n\\) components of the vectors \\(\\delta{\\bf q}\\) and \\(\\delta{\\bf p}\\), and by \\(s^{n}\\) terms of the form \\(\\prod_{k=1}^{m}\\langle\\hat{O}_{k}\\rangle_{\\psi}\\), where \\(\\hat{O}_{k}\\) is the product of \\(g_{k}\\) operators \\(\\hat{q}\\) and \\(\\hat{p}\\), with the condition \\(\\sum_{k=1}^{m}g_{k}=n\\). The characteristic action \\[\\Delta S(\\delta{\\bf q},\\delta{\\bf p})\\equiv\\,\\sqrt{\\langle\\hat{S}^{2}\\rangle_ {\\psi}-\\langle\\hat{S}\\rangle_{\\psi}^{2}} \\tag{9}\\] controls the decay of the overlap for sufficiently small values of \\(\\delta{\\bf q}\\) and \\(\\delta{\\bf p}\\). Eq. (8) suggests that displacements \\((\\delta{\\bf q},\\delta{\\bf p})\\) for which \\(\\Delta S\\) is small compared to \\(\\hbar\\) do not generally lead to a significant decay of \\(|C_{\\psi}|^{2}\\). In other words, displacements leading to \\(\\Delta S\\) of the order of \\(\\hbar\\) or larger are needed for the states \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\) to be orthogonal. Therefore \\(\\Delta S\\approx\\hbar\\) establishes the scale for the action involved in displacements of the environmental state that could induce significant decoherence in system \\(\\cal S\\). For the case of Gaussian fluctuations of the operator \\(\\hat{S}\\), the only relevant fluctuation is \\(\\Delta S\\). In a more general situation higher order fluctuations may play a role in the particular features of the decay of \\(|C_{\\psi}|^{2}\\); nonetheless, the \\(\\Delta S\\)-action scale is generally expected to be a good measure for the decoherence process. The rest of the paper will provide additional arguments for this interpretation of the scale associated to the quantity \\(\\Delta S\\). To be more specific, let us write \\((\\Delta S)^{2}\\) in terms of \\((\\delta{\\bf q},\\delta{\\bf p})\\), \\[(\\Delta S)^{2} = \\sum_{i=1}^{f}\\sum_{j=1}^{f}\\left[(\\langle\\hat{q}_{i}\\hat{q}_{j} \\rangle_{\\psi}-\\langle\\hat{q}_{i}\\rangle_{\\psi}\\langle\\hat{q}_{j}\\rangle_{ \\psi})\\delta p_{i}\\delta p_{j}+(\\langle\\hat{p}_{i}\\hat{p}_{j}\\rangle_{\\psi}- \\langle\\hat{p}_{i}\\rangle_{\\psi}\\langle\\hat{p}_{j}\\rangle_{\\psi})\\delta q_{i} \\delta q_{j}\\right. \\tag{10}\\] \\[+ \\left.(\\langle\\hat{q}_{i}\\hat{p}_{j}\\rangle_{\\psi}-\\langle\\hat{q }_{i}\\rangle_{\\psi}\\langle\\hat{p}_{j}\\rangle_{\\psi})\\delta p_{i}\\delta q_{j}+ (\\langle\\hat{p}_{i}\\hat{q}_{j}\\rangle_{\\psi}-\\langle\\hat{p}_{i}\\rangle_{\\psi }\\langle\\hat{q}_{j}\\rangle_{\\psi})\\delta q_{i}\\delta p_{j}\\right],\\] or \\[(\\Delta S)^{2}\\,=\\,\\delta{\\bf p^{T}\\gamma}^{qq}\\delta{\\bf p}+\\delta{\\bf q^{T} \\gamma}^{pp}\\delta{\\bf q}+\\delta{\\bf p^{T}\\gamma}^{qp}\\delta{\\bf q}+\\delta{\\bf q ^{T}\\gamma}^{pq}\\delta{\\bf p}\\,, \\tag{11}\\] where we have introduced the matrices \\(\\mathbf{\\gamma}^{AB}_{ij}\\equiv\\langle\\hat{A}_{i}\\hat{B}_{j}\\rangle_{ \\psi}-\\langle\\hat{A}_{i}\\rangle_{\\psi}\\langle\\hat{B}_{j}\\rangle_{\\psi}\\), and \\({\\bf a^{T}}\\) denotes the transposed of the vector \\({\\bf a}\\).
To gain some insight into the meaning of this quantity, we will consider \\((\\Delta S)^{2}\\) for the one-dimensional case, \\[(\\Delta S)^{2}\\,=\\,(\\sigma_{q}\\delta p)^{2}+(\\sigma_{p}\\delta q)^{2}+(\\langle \\hat{q}\\hat{p}+\\hat{p}\\hat{q}\\rangle_{\\psi}\\,-\\,2\\langle\\hat{q}\\rangle_{\\psi} \\langle\\hat{p}\\rangle_{\\psi})\\delta q\\delta p\\,, \\tag{12}\\] where \\(\\sigma_{q}\\) and \\(\\sigma_{p}\\) are the root-mean-square deviations of \\(\\hat{q}\\) and \\(\\hat{p}\\) respectively. To continue with our discussion, a rotation in phase space is made, so that the term \\((\\langle\\hat{q}\\hat{p}+\\hat{p}\\hat{q}\\rangle_{\\psi}-2\\langle\\hat{q}\\rangle_{ \\psi}\\langle\\hat{p}\\rangle_{\\psi})\\) in the previous equation is zero, and \\(\\Delta S\\) is given, in terms of the new phase space variables, by \\[(\\Delta S)^{2}\\,=\\,(\\sigma_{\\tilde{q}}\\delta\\tilde{p})^{2}\\,+\\,(\\sigma_{ \\tilde{p}}\\delta\\tilde{q})^{2}\\,, \\tag{13}\\] where \\(\\sigma_{\\tilde{q}}\\) (\\(\\sigma_{\\tilde{p}}\\)) gives the support of the state in the variable \\(\\tilde{q}\\) (\\(\\tilde{p}\\)). A classical action \\(A\\equiv\\sigma_{\\tilde{q}}\\sigma_{\\tilde{p}}\\) can be associated to the state of the system. Eq. (13) implies that displacements such that \\(\\sigma_{\\tilde{q}}\\,\\delta\\tilde{p}\\approx\\hbar\\) or \\(\\sigma_{\\tilde{p}}\\,\\delta\\tilde{q}\\approx\\hbar\\) give \\(\\Delta S\\gtrsim\\hbar\\), and the main point of our analysis is that they also lead in general to a significant variation of \\(|C_{\\psi}|^{2}\\), irrespective of the value of the action \\(A\\). In this sense, values of the \\(\\Delta S\\)-action of the order of \\(\\hbar\\) or larger are always needed for this environmental system to induce decoherence. It is possible to define other relevant quantities with units of action. For instance, values of \\(\\Delta Z\\equiv\\delta\\tilde{q}\\delta\\tilde{p}\\) leading to a significant decrease of the overlap are related to the size of the structure of the distribution associated to the state in some particular phase space representations [2]. For the displacements discussed above, \\(\\Delta Z\\approx\\hbar^{2}/A\\), so that for \\(A>>\\hbar\\) the result that sub-Planck displacements on the \\(\\Delta Z\\)-action scale are relevant for the decoherence process induced by \\({\\cal E}\\) comes naturally. Coming back to the multi-dimensional case, when the dimension of the problem increases more terms will contribute to \\((\\Delta S)^{2}\\) in Eq. (10), and smaller displacements in each variable are needed to reach the threshold \\(\\Delta S\\approx\\hbar\\), leading to the result that a larger number of degrees of freedom will favour the decoherence process [8]. To illustrate the difference between \\(\\Delta Z\\)- and \\(\\Delta S\\)-action scales we consider a general Gaussian state in one dimension \\[\\psi(q)=\\left(\\frac{2z_{R}}{\\pi|z|^{2}}\\right)^{1/4}e^{ip_{0}q/\\hbar}e^{-(q-q_ {0})^{2}/z}, \\tag{14}\\] with \\(z\\equiv z_{R}+iz_{I}\\), \\(z_{R}=(\\hbar/\\sigma_{p})^{2}\\), and \\(z_{I}=z_{R}\\sqrt{4\\sigma_{q}^{2}\\sigma_{p}^{2}-\\hbar^{2}}/\\hbar\\).
Straightforward calculations lead to the exact expression \\[|C_{\\psi}(\\delta q,\\delta p)|^{2}\\,=\\,\\exp\\left[-(\\Delta S)^{2}/\\hbar^{2} \\right]\\,, \\tag{15}\\] where \\[(\\Delta S)^{2}=(\\sigma_{p}\\delta q)^{2}+(\\sigma_{q}\\delta p)^{2}+\\hbar\\sqrt{ \\left(\\frac{2\\delta p\\delta q\\sigma_{p}\\sigma_{q}}{\\hbar}\\right)^{2}-\\left( \\delta p\\delta q\\right)^{2}}, \\tag{16}\\] in terms of the first two moments of \\(\\hat{S}\\), as expected for a Gaussian wavefunction. Eq. (15) shows that values of \\(\\Delta S\\gtrsim\\hbar\\) are needed to obtain a significant decrease of the overlap \\(|C_{\\psi}|^{2}\\). If we now choose, for instance, particular values of the widths \\(\\sigma_{q}\\) and \\(\\sigma_{p}\\) so that the Gaussian state is much narrower in coordinate than in momentum space, say \\(\\sigma_{q}\\simeq\\sqrt{\\hbar}/10\\) and \\(\\sigma_{p}\\simeq 10\\sqrt{\\hbar}\\) (in arbitrary units), it is clear that a displacement \\((\\delta q,\\delta p)=(\\sqrt{\\hbar}/2,\\sqrt{\\hbar}/2)\\) will take the shifted Gaussian completely away from the initial one. The different actions associated to that same displacement are \\(\\Delta S\\approx 5\\hbar\\) and \\(\\Delta Z=\\hbar/4\\), corresponding to over-Planck and sub-Planck values respectively. ## III Sub-Planck structures in phase space distributions. The behaviour of the overlap \\(|C_{\\psi}|^{2}\\) with \\((\\delta{\\bf q},\\delta{\\bf p})\\) can be alternatively studied through the distribution associated to the state in different phase space representations. In this section we will derive the relation between the overlap and the action \\(\\Delta S\\) using a wide class of quantum quasi-probability distributions \\(F({\\bf q},{\\bf p};\\chi)\\)[10], the Wigner [6] and Husimi [11] functions being nothing but particular cases. The choice among the \\(F\\) functions associated to the same quantum state of a system, or, equivalently, the selection of a particular representation (given by function \\(\\chi\\)), is similar to the choice of a convenient set of coordinates [12; 13; 14]. Within this framework, the expectation value of any operator \\(\\hat{G}({\\bf\\hat{q}},{\\bf\\hat{p}})\\) is written as the phase space integral \\[\\langle\\hat{G}({\\bf\\hat{q}},{\\bf\\hat{p}})\\rangle_{\\psi}\\,=\\,\\int d^{\\prime}\\! \\!q\\,d^{\\prime}\\!p\\,\\,F_{\\psi}({\\bf q},{\\bf p};\\chi)\\,g({\\bf q},{\\bf p};\\chi)\\,, \\tag{17}\\] where \\(F_{\\psi}({\\bf q},{\\bf p};\\chi)\\) is obtained from the quantum state \\(|\\psi\\rangle\\) as \\[F_{\\psi}({\\bf q},{\\bf p};\\chi)=\\frac{1}{(2\\pi)^{2f}}\\int d^{\\prime}\\!\\!\\theta \\,d^{\\prime}\\!\\tau\\,d^{\\prime}\\!u\\,\\,\\chi({\\mathbf{\\theta}},{\\mathbf{\\tau}})\\!\\left<{ \\bf u}+\\frac{{\\mathbf{\\tau}}\\hbar}{2}\\right|\\!\\psi\\right>\\!\\left<\\psi\\!\\left|{\\bf u }-\\frac{{\\mathbf{\\tau}}\\hbar}{2}\\right>\\!\\!e^{-i[{\\mathbf{\\theta}}\\cdot({\\bf q}-{\\bf u })+{\\mathbf{\\tau}}\\cdot{\\bf p}]}\\,. \\tag{18}\\] The Wigner and Husimi functions, for instance, are obtained by replacing \\(\\chi({\\mathbf{\\theta}},{\\mathbf{\\tau}})=1\\) and \\(\\chi({\\mathbf{\\theta}},{\\mathbf{\\tau}})=\\exp\\{-\\frac{\\hbar}{4}[({\\mathbf{\\tau}}{\\mathbf{\\lambda }})^{2}+({\\mathbf{\\theta}}/\\lambda)^{2}]\\}\\) respectively. 
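As a concrete instance of this family, the \\(\\chi=1\\) member, i.e. the Wigner function of Eq. (3), can be evaluated directly on a coordinate grid. The sketch below is our illustration; the superposition of two displaced Gaussians is an assumed test state, chosen because its interference fringes provide small-scale phase space structure of the kind discussed above:

```python
# Minimal grid evaluation of the Wigner function, Eq. (3), for f = 1 (hbar = 1).
import numpy as np

def wigner(psi, q, p, hbar=1.0):
    """W(q_i, p_j) with the half-shifts q'/2 restricted to grid points.
    Cost is O(N^2 * len(p)) -- adequate for small illustrative grids."""
    dqg = q[1] - q[0]
    N = len(q)
    W = np.zeros((N, len(p)))
    k = np.arange(-(N - 1), N)                      # q' = 2 k dqg
    for i in range(N):
        ok = (i - k >= 0) & (i - k < N) & (i + k >= 0) & (i + k < N)
        kk = k[ok]
        corr = psi[i - kk] * np.conj(psi[i + kk])   # psi(q - q'/2) psi*(q + q'/2)
        phases = np.exp(1j * np.outer(2 * kk * dqg, p) / hbar)
        W[i] = np.real(corr @ phases) * (2 * dqg) / (2 * np.pi * hbar)
    return W

q = np.linspace(-12, 12, 241)
p = np.linspace(-4, 4, 161)
psi = np.exp(-(q - 4)**2 / 2) + np.exp(-(q + 4)**2 / 2)   # 'cat'-like test state
psi = psi / np.sqrt(np.trapz(np.abs(psi)**2, q))
W = wigner(psi, q, p)
print("normalization:", np.trapz(np.trapz(W, p, axis=1), q))  # ~1 up to truncation
```

For Gaussians separated by \\(2q_{0}\\), the fringes of \\(W\\) around \\(q\\approx 0\\) oscillate in \\(p\\) with period \\(\\pi\\hbar/q_{0}\\), which is precisely the kind of sub-Planck structure referred to in Refs. [2, 8].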
The function \\(g({\\bf q},{\\bf p};\\chi)\\) is the _image_ of the operator \\(\\hat{G}\\) in phase space according to the kernel function \\(\\chi\\)[14], \\[g({\\bf q},{\\bf p};\\chi)\\,=\\,\\left(\\frac{\\hbar}{2\\pi}\\right)^{f}\\int d^{\\prime }\\!\\!\\theta\\,d^{\\prime}\\!\\tau\\,d^{\\prime}\\!\\!u\\,\\,\\frac{1}{\\chi({\\mathbf{\\theta }},{\\mathbf{\\tau}})}\\!\\left<{\\bf u}-\\frac{{\\mathbf{\\tau}}\\hbar}{2}\\right|\\!\\hat{G}\\! \\left|{\\bf u}+\\frac{{\\mathbf{\\tau}}\\hbar}{2}\\right>\\!\\!e^{i[{\\mathbf{\\theta}}\\cdot({ \\bf q}-{\\bf u})+{\\mathbf{\\tau}}\\cdot{\\bf p}]}\\,, \\tag{19}\\] and it is not necessarily equal to the classical magnitude. In particular, the expectation value of the displacement operator \\(\\hat{D}\\) can be written as the phase space average \\[C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p}) = \\int d^{\\prime}\\!q\\,d^{\\prime}\\!p\\,\\,F_{\\psi}({\\bf q},{\\bf p}; \\chi)\\,d({\\bf q},{\\bf p};\\chi) \\tag{20}\\] \\[= \\frac{1}{\\chi(\\delta{\\bf p}/\\hbar,\\delta{\\bf q}/\\hbar)}\\int d^{ \\prime}\\!q\\,d^{\\prime}\\!p\\,\\,F_{\\psi}({\\bf q},{\\bf p};\\chi)\\,e^{i({\\bf q} \\cdot\\delta{\\bf p}+{\\bf p}\\cdot\\delta{\\bf q})/\\hbar}\\,,\\] where the function \\(d({\\bf q},{\\bf p};\\chi)\\) was obtained by integrating the r.h.s. of Eq. (19) with \\(\\hat{G}\\) replaced by \\(\\hat{D}\\). Notice that Eq. (5) is a particular case of Eq. (20) for which the Wigner function has been chosen as the distribution associated to the state, \\(C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})\\) being equal to the Fourier transform of \\(W_{\\psi}\\). Eq. (20) can be used to understand the relation between \\(C_{\\psi}\\) and \\(\\Delta S\\) from the point of view of the phase space distribution \\(F_{\\psi}\\). On one hand, if the exponential factor does not vary significantly over the support of \\(F_{\\psi}({\\bf q},{\\bf p};\\chi)\\), whichoccurs for small enough values of \\(\\delta{\\bf q}\\) and \\(\\delta{\\bf p}\\), then the overlap will only differ slightly from the normalisation integral of the original distribution, \\(\\int d^{\\prime}\\!q\\,d^{\\prime}\\!p\\ F_{\\psi}({\\bf q},{\\bf p};\\chi)=1\\), leading in general to a small decrease of the function \\(|C_{\\psi}|^{2}\\). (Notice that \\(\\chi(0,0)=1\\) is needed to guarantee that \\(F_{\\psi}({\\bf q},{\\bf p};\\chi)\\) is normalised to one [13].) The condition for these variations not to be significant is equivalent to the condition that the root-mean-square deviation of \\(\\hat{S}\\), \\(\\Delta S\\), is smaller than \\(\\hbar\\). On the other hand, to obtain significant decay of the overlap, rapid oscillations, with \\(\\Delta S\\) at least of the order of \\(\\hbar\\), are needed. Due to the properties of the Fourier transform, and since the value of \\(\\chi(\\delta{\\bf p}/\\hbar,\\delta{\\bf q}/\\hbar)\\) is close to one for small enough \\(\\delta{\\bf q}\\) and \\(\\delta{\\bf p}\\), the initial decay of \\(|C_{\\psi}|^{2}\\) with the displacement is related to the large scale structure of the distribution \\(F_{\\psi}\\), that depends mainly on the size of the state support in phase space. However, the detailed behaviour of \\(|C_{\\psi}|^{2}\\) with arbitrary displacements, and, in particular, qualitative features like oscillations, will depend on the state under study. A different question is how sub-Plank structures emerge in some phase space distributions associated to the state and how they are related to the main features of \\(C_{\\psi}\\). 
For kernel functions such that \\(|\\chi(\\mathbf{\\theta},\\mathbf{\\tau})|=1\\), the corresponding distributions satisfy [15] \\[\\left|C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})\\right|^{2}\\,=\\,(2\\pi\\hbar)^{f}\\,\\int\\,d^{f}\\!q\\,d^{f}\\!p\\,F_{\\psi}({\\bf q},{\\bf p})\\,F_{\\psi}({\\bf q}+\\delta{\\bf q},{\\bf p}+\\delta{\\bf p})\\,. \\tag{21}\\] (Notice that Eq. (4) is a particular case of Eq. (21).) For these representations, the fact that a given displacement leads to \\(|C_{\\psi}|^{2}\\approx 0\\) is manifested in a complex structure of the distribution \\(F_{\\psi}\\) on the scale of the \\(\\Delta Z\\)-action for that displacement, as pointed out in Ref. [2] for the Wigner distribution. When displacements with \\(\\Delta Z\\ll\\hbar\\) lead to small values of \\(|C_{\\psi}|^{2}\\), the distribution \\(F_{\\psi}\\) will show a complex structure at sub-Planck scales. This result is a consequence of the particular choice of the phase space distribution. For the same state, the Husimi distribution (obtained by smoothing the Wigner function, thus eliminating the sub-Planck scale structure) will lead to the same overlap \\(|C_{\\psi}|^{2}\\). This is not surprising, since the overlap depends on the state, and that dependence is manifested in different ways in different phase space representations.

## IV A One-dimensional Time-Dependent Environmental System

In this section we will analyse the dependence of the overlap \\(|C_{\\psi}|^{2}\\) on the action \\(\\Delta S(\\delta q,\\delta p)\\) in the context of a particular one-dimensional model for the environmental system \\({\\cal E}\\), described by the Hamiltonian \\[\\hat{H}_{\\mathcal{E}}\\,=\\,\\frac{\\hat{p}^{2}}{2m}-\\kappa\\cos\\left(\\hat{q}-l\\sin t\\right)+\\frac{1}{2}a\\hat{q}^{2}\\,. \\tag{22}\\] This quantum model system has been previously used in the context of decoherence [2; 16], and describes a particle of mass \\(m=1\\) (arbitrary units are used throughout) confined by a harmonic potential that is perturbed by a spatially and temporally periodic term. For the parameter values used in this work, \\(\\kappa=0.36\\), \\(a=0.01\\), \\(l=3.8\\), and \\(\\hbar=0.16\\), the motion in the classical counterpart of this system exhibits a chaotic character [17]. To prepare the state of the environmental system prior to the interaction, we let a given initial state evolve until preparation time \\(T\\), when it is coupled to the pointer system \\(\\mathcal{S}\\). The coupling strength is assumed to be high enough that the two-system evolution can be followed as described in the introduction, i.e., by neglecting any contribution coming from the dynamics induced by Hamiltonian (22) during the interaction time. This approach allows us to discuss the values of the actions involved in the decoherence process induced on system \\(\\mathcal{S}\\) in terms of general displacements in phase space, irrespective of the detailed values of the coupling constants and interaction times [18]. As the initial state (\\(T=t=0\\)) for the preparation process we have chosen a coherent state \\(|\\alpha\\rangle\\,=\\,e^{-|\\alpha|^{2}/2}\\,\\sum_{n=0}^{\\infty}\\frac{\\alpha^{n}}{\\sqrt{n!}}\\,|n\\rangle\\) of the harmonic oscillator \\(\\hat{H}_{OA}=\\hat{p}^{2}/(2m)+a\\hat{q}^{2}/2\\), where \\(|n\\rangle\\) is the eigenstate of \\(\\hat{H}_{OA}\\) with energy \\((n+1/2)\\hbar\\sqrt{a/m}\\). (We have checked other possible initial states, obtaining qualitatively similar results.)
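The preparation stage described in the next paragraph can be sketched in a few lines of Python with the split-operator method. Everything below is an illustrative reimplementation, not the original code: the grid extent, number of points and time step are assumptions, and the variable `ah` stands for the harmonic constant \\(a\\) of Eq. (22), to avoid a name clash with the action \\(a\\) defined below.

```python
import numpy as np

m, kappa, ah, ell, hbar = 1.0, 0.36, 0.01, 3.8, 0.16   # parameters of Eq. (22)
N, L = 2048, 100.0
q = (np.arange(N) - N // 2) * (L / N)
p = 2 * np.pi * hbar * np.fft.fftfreq(N, d=L / N)      # conjugate momentum grid

w = np.sqrt(ah / m)                                    # frequency of H_OA
p0 = 5.0 * np.sqrt(2.0 * hbar * m * w)                 # alpha = 5i: momentum kick
psi = (m * w / (np.pi * hbar))**0.25 * np.exp(
    -m * w * q**2 / (2 * hbar) + 1j * p0 * q / hbar)   # initial coherent state

def V(t):                                              # potential of Eq. (22)
    return -kappa * np.cos(q - ell * np.sin(t)) + 0.5 * ah * q**2

dt, T = 0.01, 20.0
kin = np.exp(-1j * p**2 * dt / (2 * m * hbar))
for n in range(int(T / dt)):                           # split-operator stepping
    psi = np.exp(-0.5j * V(n * dt) * dt / hbar) * psi
    psi = np.fft.ifft(kin * np.fft.fft(psi))
    psi = np.exp(-0.5j * V((n + 1) * dt) * dt / hbar) * psi

rq = np.abs(psi)**2; rq /= rq.sum()
sigq = np.sqrt((rq * q**2).sum() - (rq * q).sum()**2)
rp = np.abs(np.fft.fft(psi))**2; rp /= rp.sum()
sigp = np.sqrt((rp * p**2).sum() - (rp * p).sum()**2)
print(sigq, sigp, hbar**2 / (sigq * sigp))             # widths and the action a
```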
The time propagation of this initial state under Hamiltonian (22) has been obtained by means of the split-operator method [19]. As time \\(T\\) increases, the state spreads in coordinate as well as in momentum space through the available phase space, as shown in the insets of Fig. 1. To characterise this dynamics the quantity \\[a\\equiv\\frac{\\hbar^{2}}{\\sigma_{q}\\sigma_{p}} \\tag{23}\\] is used (see Fig. 1). It shows a rapid initial decay (until time \\(T\\approx 20\\)), followed by a much slower decrease for longer times. The behaviour of \\(a\\) for small preparation time \\(T\\) is related to the fast initial increase of the widths \\(\\sigma_{q}\\) and \\(\\sigma_{p}\\). The variation of \\(a\\) for longer times is mainly due to the time-dependent term in the Hamiltonian. Were it not for the presence of this time-dependent term, \\(a\\) would not decrease below a certain minimum value related to the maximum position and momentum widths compatible with a fixed mean system energy. Fig. 2 shows \\(|C_{\\psi}|^{2}\\) versus \\(\\Delta S\\) for different preparation times and for a given direction in phase space. (The results for any other direction show the same qualitative features.) The different curves, corresponding to different preparation times, decay on the same \\(\\Delta S\\)-scale. To emphasise this result we represent in the inset the value of \\(\\Delta S\\) needed to obtain \\(|C_{\\psi}|^{2}=0.5\\) versus \\(T\\). The \\(\\Delta S\\)-action values for any preparation time are of the order of \\(\\hbar\\), supporting \\(\\Delta S\\approx\\hbar\\) as a relevant scale for the studied decoherence process. (Notice that the apparent convergence of \\(\\Delta S_{0}\\) to a value close to \\(\\hbar\\) is only a consequence of the chosen value for \\(|C_{\\psi}|^{2}\\).) The values of \\(\\Delta Z\\) for which \\(|C_{\\psi}|^{2}=0.5\\) are also shown in Fig. 1 and Fig. 2. After some time \\(T\\approx 5\\), the action \\(a\\) sets the scale of the random structure developed in the distribution associated with the states of this system in some phase space representations, for example in the Wigner function [2]. Taking into account the discussion below Eq. (21), the action \\(\\Delta Z\\) for displacements producing a significant decrease of \\(|C_{\\psi}|^{2}\\) will be of the order of the action \\(a\\) after \\(T\\approx 5\\), as shown in Fig. 1.

Figure 1: Action \\(a\\equiv\\hbar^{2}/(\\sigma_{q}\\sigma_{p})\\) as a function of the preparation time \\(T\\) (solid line) for the initial coherent state with \\(\\alpha=5\\,i\\). The action \\(\\Delta Z_{0}\\) needed for a displacement in the direction \\(\\delta q\\simeq 6.8\\delta p\\) to reduce the value of \\(|C_{\\psi}|^{2}\\) to \\(0.5\\) is also shown (dashed line). The inset shows the dependence of the widths \\(\\sigma_{q}\\) (left) and \\(\\sigma_{p}\\) (right) on the preparation time. (Notice the different scales in the vertical axis for each case.) Arbitrary units are used.

## V Non-linear confined environmental systems

From now on, a different model for the environmental system \\({\\cal E}\\) will be considered, that of a time-independent Hamiltonian, \\(\\hat{H}_{NL}\\), with a non-linear confining potential and a discrete energy spectrum. Under certain assumptions, this model will allow us to obtain analytical expressions for the overlap between the states \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\).
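For orientation before the formal development: for the isotropic harmonic case worked out below (Eqs. (40)-(42)), the \\(f=1\\) closed form reduces to a Bessel function \\(J_{0}\\), and it can be checked against a direct classical average over the energy shell. The sketch below is such a check; all numerical values are illustrative assumptions.

```python
import numpy as np
from scipy.special import j0

# f = 1 harmonic oscillator: microcanonical (energy-shell) average of e^{iS/hbar},
# compared with the Bessel closed form derived in Eqs. (40)-(42) below.
hbar, M, w, E = 0.16, 1.0, 0.1, 0.5
dq, dp = 0.3, 0.02                                   # a sample displacement
sq, sp = np.sqrt(E / (M * w**2)), np.sqrt(M * E)     # sigma_q, sigma_p on the shell
th = np.linspace(0.0, 2.0 * np.pi, 200000, endpoint=False)
qs = np.sqrt(2.0) * sq * np.cos(th)                  # uniform in the angle variable
ps = np.sqrt(2.0) * sp * np.sin(th)
shell = np.mean(np.exp(1j * (ps * dq + qs * dp) / hbar))
dS = np.sqrt((dp * sq)**2 + (dq * sp)**2)            # the action of Eq. (41)
print(shell.real, j0(np.sqrt(2.0) * dS / hbar))      # agree to sampling accuracy
```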
Instead of considering a particular environmental state \\(|\\psi\\rangle\\), obtained after some fixed preparation time \\(T\\), we will study the dependence of the overlap averaged over the preparation time on an averaged \\(\\Delta S\\)-action. For non-linear confined systems, the main features of this stationary description can be associated with all the states prepared from a given initial one, provided that their preparation time is long enough. In the first part of this section we will determine the stationary properties of \\(C_{\\psi}\\) relevant to our discussion. We will assume that the states are prepared from a given \\(|\\psi(T=0)\\rangle\\), and make use of the Wigner distribution in phase space associated with them.

Figure 2: \\(|C_{\\psi}|^{2}\\) as a function of the action \\(\\Delta S(\\delta q,\\delta p)\\) in the direction \\(\\delta q\\simeq 6.8\\delta p\\) in phase space and for different preparation times: \\(T=0\\) (solid line), \\(T=10\\) (dashed line), \\(T=20\\) (dotted line), and \\(T=500\\) (circles). (Same initial state as in Fig. 1.) The inset shows the actions \\(\\Delta S_{0}\\) (upper curve) and \\(\\Delta Z_{0}\\) (lower curve) needed for the displacement to reduce \\(|C_{\\psi}|^{2}\\) to the value \\(0.5\\) versus the preparation time \\(T\\). The straight line shows the action \\(\\hbar\\) for reference. Arbitrary units are used.

Although the procedure and the results are independent of the choice of a particular phase space representation, the use of the Wigner distribution will allow us to extend our analysis afterwards to systems for which the Berry-Voros conjecture is valid.

### Stationary properties of the overlap

In the basis of eigenstates of the Hamiltonian \\(\\hat{H}_{NL}\\), which will be assumed to have, for simplicity, a non-degenerate spectrum, the wave function at preparation time \\(T\\) is given by \\[\\psi({\\bf q},T)=\\sum_{n}c_{n}e^{-iE_{n}T/\\hbar}\\varphi_{n}({\\bf q}), \\tag{24}\\] where \\(\\hat{H}_{NL}\\varphi_{n}({\\bf q})=E_{n}\\varphi_{n}({\\bf q})\\) and \\(c_{n}=\\int d^{f}\\!q\\,\\varphi_{n}^{*}({\\bf q})\\psi({\\bf q},0)\\). The Wigner distribution is obtained by introducing expansion (24) into Eq. (3). Splitting the result into time-independent and time-dependent terms, \\[W_{\\psi}({\\bf q},{\\bf p},T) = \\sum_{n}|c_{n}|^{2}W_{\\varphi_{n}}({\\bf q},{\\bf p}) + \\sum_{n\\neq m}c_{n}c_{m}^{*}e^{-i(E_{n}-E_{m})T/\\hbar}\\int\\frac{d^{f}\\!q^{\\prime}}{(2\\pi\\hbar)^{f}}e^{i{\\bf q}^{\\prime}\\cdot{\\bf p}/\\hbar}\\varphi_{n}({\\bf q}-{\\bf q}^{\\prime}/2)\\varphi_{m}^{*}({\\bf q}+{\\bf q}^{\\prime}/2), \\tag{25}\\] where \\(W_{\\varphi_{n}}({\\bf q},{\\bf p})\\) is the Wigner distribution associated with the energy eigenstate \\(\\varphi_{n}({\\bf q})\\). For non-linear systems, it turns out that the Wigner distribution spreads from its initial (\\(T=0\\)) support in phase space until it occupies most of the available phase space volume at some preparation time \\(T_{c}\\). From time \\(T_{c}\\) on, the small details of the Wigner distribution will change with time, but in general its long scale structure will remain a stationary property.
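The expansion (24) and the dephasing behind this stationary behaviour can be illustrated with a small diagonalization toy. The sketch below is an illustration under assumed, simplified conditions: it uses a static (\\(l=0\\)) cousin of Hamiltonian (22), so that the Hamiltonian is time-independent, and checks that the long-time average of the position density reduces to the diagonal sum \\(\\sum_{n}|c_{n}|^{2}|\\varphi_{n}({\\bf q})|^{2}\\), the position marginal of the time-averaged distribution introduced next.

```python
import numpy as np

hbar, m, N, L = 0.16, 1.0, 400, 40.0
dx = L / N
q = (np.arange(N) - N // 2) * dx
V = -0.36 * np.cos(q) + 0.5 * 0.01 * q**2          # static (l = 0) version of (22)

# second-order finite-difference Hamiltonian, H phi_n = E_n phi_n
off = np.full(N - 1, -hbar**2 / (2 * m * dx**2))
H = np.diag(hbar**2 / (m * dx**2) + V) + np.diag(off, 1) + np.diag(off, -1)
E, phi = np.linalg.eigh(H)                          # columns ~ varphi_n * sqrt(dx)

psi0 = np.exp(-q**2 / 2.0); psi0 /= np.linalg.norm(psi0)
c = phi.T @ psi0                                    # c_n = <varphi_n | psi(0)>

avg = np.zeros(N)
for T in np.linspace(0.0, 2.0e4, 400):              # crude long-time average
    psiT = phi @ (c * np.exp(-1j * E * T / hbar))   # Eq. (24)
    avg += np.abs(psiT)**2 / 400
diag = (np.abs(phi)**2) @ np.abs(c)**2              # diagonal (stationary) part
print(np.max(np.abs(avg - diag)))                   # small; shrinks with sampling
```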
To extract that characteristic long scale structure we employ the time-averaged Wigner distribution \\[\\overline{W_{\\psi}({\\bf q},{\\bf p})}\\,\\equiv\\,\\lim_{\\tau\\to\\infty}\\frac{1}{\\tau}\\int_{0}^{\\tau}dT\\,W_{\\psi}({\\bf q},{\\bf p},T)\\,=\\,\\sum_{n}|c_{n}|^{2}W_{\\varphi_{n}}({\\bf q},{\\bf p})\\,, \\tag{26}\\] where we have taken \\[\\lim_{\\tau\\to\\infty}\\frac{1}{\\tau}\\int_{0}^{\\tau}dT\\,\\sum_{n\\neq m}c_{n}c_{m}^{*}e^{-i(E_{n}-E_{m})T/\\hbar}\\int\\frac{d^{f}\\!q^{\\prime}}{(2\\pi\\hbar)^{f}}e^{i{\\bf q}^{\\prime}\\cdot{\\bf p}/\\hbar}\\varphi_{n}({\\bf q}-{\\bf q}^{\\prime}/2)\\varphi_{m}^{*}({\\bf q}+{\\bf q}^{\\prime}/2)\\,=\\,0\\,. \\tag{27}\\] Introducing \\(\\overline{W_{\\psi}}\\) into Eq. (5), we obtain the time-averaged quantity \\[\\overline{C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})}\\,=\\,\\int d^{f}\\!q\\,d^{f}\\!p\\ e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\,\\overline{W_{\\psi}({\\bf q},{\\bf p})}\\,, \\tag{28}\\] which describes the stationary properties of the overlap between \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\). According to Eq. (28), \\(\\overline{C_{\\psi}}\\) can be identified as the generating function of all moments of \\(S\\equiv{\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p}\\) with respect to the distribution \\(\\overline{W_{\\psi}}\\). Therefore a set of equations similar to Eqs. (7) and (8) can be obtained. These equations imply that the initial decay of \\(\\overline{C_{\\psi}}\\) is governed by the fluctuation properties of \\(S\\) under stationary conditions. The action scale involved in the decay of \\(\\overline{C_{\\psi}}\\) for small displacements \\((\\delta{\\bf q},\\delta{\\bf p})\\) can in general be associated with any state with preparation time longer than \\(T_{c}\\). This result follows from the fact that \\(\\overline{W_{\\psi}}\\) properly describes the long scale structure for \\(T>T_{c}\\), together with the discussion in Sec. III. In the rest of the section we consider a family of quantum systems for which \\(\\overline{C_{\\psi}}\\) can be obtained analytically.

### Systems described by the Berry-Voros conjecture

We shall now pay special attention to (1) quantum systems with time-independent Hamiltonians and a classically chaotic counterpart and (2) regular quantum systems with particular random components in their potentials [20; 21], for which the relevant quantities are obtained after averaging over the noise. There is both experimental and numerical evidence that for these systems the so-called Berry-Voros conjecture is valid, namely, that one can approximate the Wigner density associated with an energy eigenstate by a microcanonical density [22; 23; 24], \\[W_{\\varphi_{n}}({\\bf q},{\\bf p})\\rightarrow\\frac{1}{(2\\pi\\hbar)^{f}}\\frac{\\delta\\big{(}E_{n}-H({\\bf q},{\\bf p})\\big{)}}{\\rho(E_{n})}, \\tag{29}\\] where \\(\\rho(E_{n})=\\int\\frac{d^{f}\\!q\\,d^{f}\\!p}{(2\\pi\\hbar)^{f}}\\delta\\big{(}E_{n}-H({\\bf q},{\\bf p})\\big{)}\\) is the local average density of states at energy \\(E_{n}\\), and \\(H({\\bf q},{\\bf p})\\) is the classical Hamiltonian associated with the quantum one [9; 20; 25; 26]. The Wigner distribution in the semiclassical limit fills the available phase space that corresponds to an energy shell of thickness of the order \\(\\hbar\\), and its amplitude fluctuates around the microcanonical density. Furthermore, the density function in Eq.
(29) is just the leading approximation of a semiclassical expression for \\(W_{\\varphi_{n}}({\\bf q},{\\bf p})\\). The next-to-leading terms depend on the periodic orbits of the classical system and take into account the possible scars [27; 28; 29]. Replacing \\(W_{\\varphi_{n}}({\\bf q},{\\bf p})\\), implicit in Eq. (28), by the expression in Eq. (29), it follows that \\[\\overline{C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})}^{BV} = \\sum_{n}|c_{n}|^{2}\\rho^{-1}(E_{n})\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle_{\\varphi_{n}}^{BV}, \\tag{30}\\] where \\[\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle^{BV}_{\\varphi_{n}}\\equiv\\int\\frac{d^{f}\\!q\\,d^{f}\\!p}{(2\\pi\\hbar)^{f}}\\,e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\,\\delta\\big{(}E_{n}-H({\\bf q},{\\bf p})\\big{)} \\tag{31}\\] is the microcanonical average of \\(e^{iS/\\hbar}\\). For a Hamiltonian of the form \\(H({\\bf q},{\\bf p})={\\bf p}^{2}/2M+V({\\bf q})\\), and after integrating over the momentum variables, one obtains \\[\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle^{BV}_{\\varphi_{n}} = (2\\pi)^{f/2}M\\int\\frac{d^{f}\\!q}{(2\\pi\\hbar)^{f}}e^{i{\\bf q}\\cdot\\delta{\\bf p}/\\hbar}\\Big{(}\\frac{\\hbar}{|\\delta{\\bf q}|}\\sqrt{2M(E_{n}-V({\\bf q}))}\\Big{)}^{\\frac{f}{2}-1}J_{\\frac{f}{2}-1}\\Big{(}\\frac{|\\delta{\\bf q}|}{\\hbar}\\sqrt{2M(E_{n}-V({\\bf q}))}\\Big{)}, \\tag{32}\\] where \\(J_{\\frac{f}{2}-1}(z)\\) is the Bessel function of order \\(f/2-1\\). Eqs. (30) and (32) lead to a formal expression of the time-averaged two-point correlation function \\(\\overline{C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})}^{BV}\\) in terms of the potential \\(V({\\bf q})\\). These equations constitute the main result of this section and are the starting point for the analysis of particular examples. In the following we shall particularise Eq. (30) to systems with a random component in the potential such that the potential averaged over the noise is the \\(f\\)-dimensional harmonic potential.

#### The \\(f\\)-dimensional harmonic oscillator

The classical Hamiltonian for a generic \\(f\\)-dimensional harmonic oscillator, \\[H(\\tilde{\\bf q},\\tilde{\\bf p})\\,=\\,\\sum_{i=1}^{f}\\,\\frac{\\tilde{p}_{i}^{2}}{2m_{i}}\\,+\\,\\frac{1}{2}m_{i}\\omega_{i}^{2}\\tilde{q}_{i}^{2}\\,, \\tag{33}\\] can be rewritten, in terms of the rescaled coordinates and momenta \\[p_{i}\\,\\equiv\\,\\sqrt{\\frac{M}{m_{i}}}\\,\\tilde{p}_{i}\\,,\\qquad q_{i}\\,\\equiv\\,\\sqrt{\\frac{m_{i}\\omega_{i}^{2}}{M\\omega^{2}}}\\,\\tilde{q}_{i}\\,, \\tag{34}\\] as the spherical harmonic oscillator \\[H({\\bf q},{\\bf p})\\,=\\,\\frac{1}{2M}(p_{1}^{2}+p_{2}^{2}+\\cdots+p_{f}^{2})+\\frac{1}{2}M\\omega^{2}(q_{1}^{2}+q_{2}^{2}+\\cdots+q_{f}^{2})\\,. \\tag{35}\\] The integral in Eq.
(31) then reads \\[\\left(\\prod_{i=1}^{f}\\frac{\\omega}{\\omega_{i}}\\right)\\int\\frac{d^{f}\\!q\\,d^{f}\\!p}{(2\\pi\\hbar)^{f}}\\,e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\,\\delta\\big{(}E_{n}-H({\\bf q},{\\bf p})\\big{)}\\,, \\tag{36}\\] with \\[\\delta{\\bf q}\\,\\equiv\\,\\left(\\sqrt{\\frac{m_{1}}{M}}\\,\\delta\\tilde{q}_{1},\\ldots,\\sqrt{\\frac{m_{f}}{M}}\\,\\delta\\tilde{q}_{f}\\right)\\,,\\qquad\\delta{\\bf p}\\,\\equiv\\,\\left(\\sqrt{\\frac{M\\omega^{2}}{m_{1}\\omega_{1}^{2}}}\\,\\delta\\tilde{p}_{1},\\ldots,\\sqrt{\\frac{M\\omega^{2}}{m_{f}\\omega_{f}^{2}}}\\,\\delta\\tilde{p}_{f}\\right)\\,. \\tag{37}\\] After some manipulations, it follows that \\[\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle^{BV}_{\\varphi_{n}} = 2^{f-1}\\,\\left(\\frac{\\omega^{f}}{\\prod_{i=1}^{f}\\,\\omega_{i}}\\right)\\frac{E_{n}^{f-1}}{(\\hbar\\omega)^{f}}\\Big{(}\\frac{\\hbar}{|\\delta{\\bf q}|}\\sqrt{\\frac{1}{2ME_{n}}}\\Big{)}^{\\frac{f}{2}-1}\\Big{(}\\frac{\\hbar}{|\\delta{\\bf p}|}\\sqrt{\\frac{M\\omega^{2}}{2E_{n}}}\\Big{)}^{\\frac{f}{2}-1}\\times\\int_{0}^{1}d\\xi\\,\\xi^{\\frac{f}{2}}\\Big{(}\\sqrt{1-\\xi^{2}}\\Big{)}^{\\frac{f}{2}-1}J_{\\frac{f}{2}-1}\\Big{(}\\frac{|\\delta{\\bf q}|}{\\hbar}\\sqrt{2ME_{n}}\\sqrt{1-\\xi^{2}}\\Big{)}J_{\\frac{f}{2}-1}\\Big{(}\\frac{|\\delta{\\bf p}|}{\\hbar}\\xi\\sqrt{\\frac{2E_{n}}{M\\omega^{2}}}\\Big{)}\\,, \\tag{38}\\] and integrating over the variable \\(\\xi\\), \\[\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle^{BV}_{\\varphi_{n}} = 2^{f-1}\\,\\left(\\frac{\\omega^{f}}{\\prod_{i=1}^{f}\\,\\omega_{i}}\\right)\\frac{E_{n}^{f-1}}{(\\hbar\\omega)^{f}}\\frac{J_{f-1}\\Big{(}\\sqrt{(\\frac{|\\delta{\\bf p}|}{\\hbar}\\sqrt{\\frac{2E_{n}}{M\\omega^{2}}})^{2}+(\\frac{|\\delta{\\bf q}|}{\\hbar}\\sqrt{2ME_{n}})^{2}}\\Big{)}}{\\sqrt{\\Big{(}(\\frac{|\\delta{\\bf p}|}{\\hbar}\\sqrt{\\frac{2E_{n}}{M\\omega^{2}}})^{2}+(\\frac{|\\delta{\\bf q}|}{\\hbar}\\sqrt{2ME_{n}})^{2}\\Big{)}^{f-1}}}\\,. \\tag{39}\\] For an eigenstate of energy \\(E_{n}\\), \\(\\rho(E_{n})=E_{n}^{f-1}/(\\Gamma(f)(\\hbar\\omega)^{f})\\), where \\(\\Gamma(f)\\) denotes the Gamma function of argument \\(f\\). Moreover, \\(\\sigma_{p,n}^{2}=\\left\\langle{\\bf p}^{2}\\right\\rangle^{BV}_{\\varphi_{n}}=ME_{n}\\) and \\(\\sigma_{q,n}^{2}=\\left\\langle{\\bf q}^{2}\\right\\rangle^{BV}_{\\varphi_{n}}=E_{n}/M\\omega^{2}\\) (in this case the mean values of position and momentum vanish), giving \\[\\left\\langle e^{i({\\bf p}\\cdot\\delta{\\bf q}+{\\bf q}\\cdot\\delta{\\bf p})/\\hbar}\\right\\rangle^{BV}_{\\varphi_{n}} = 2^{f-1}\\,\\left(\\frac{\\omega^{f}}{\\prod_{i=1}^{f}\\,\\omega_{i}}\\right)\\Gamma(f)\\rho(E_{n})\\frac{J_{f-1}\\Big{(}\\sqrt{2}\\sqrt{(\\frac{|\\delta{\\bf p}|\\sigma_{q,n}}{\\hbar})^{2}+(\\frac{|\\delta{\\bf q}|\\sigma_{p,n}}{\\hbar})^{2}}\\Big{)}}{\\sqrt{((\\frac{|\\delta{\\bf p}|\\sigma_{q,n}}{\\hbar})^{2}+(\\frac{|\\delta{\\bf q}|\\sigma_{p,n}}{\\hbar})^{2})^{f-1}}} = 2^{f-1}\\,\\left(\\frac{\\omega^{f}}{\\prod_{i=1}^{f}\\,\\omega_{i}}\\right)\\Gamma(f)\\rho(E_{n})\\frac{J_{f-1}(\\sqrt{2}\\Delta S_{n}^{BV}/\\hbar)}{(\\Delta S_{n}^{BV}/\\hbar)^{f-1}}\\,, \\tag{40}\\] where the characteristic action \\[\\Delta S_{n}^{BV}=\\sqrt{(|\\delta{\\bf p}|\\sigma_{q,n})^{2}+(|\\delta{\\bf q}|\\sigma_{p,n})^{2}} \\tag{41}\\] has been introduced. \\(\\Delta S_{n}^{BV}\\) is nothing but the action \\(\\Delta S\\) introduced in Eq.
(10) calculated for the \\(n\\)th eigenstate using the Berry-Voros conjecture. For the superposition state (24), one obtains \\[\\overline{C_{\\psi}(\\delta{\\bf q},\\delta{\\bf p})}^{BV}=\\sum_{n}2^{(f-1)/2}\\,|c_{n}|^{2}\\,\\Gamma(f)\\frac{J_{f-1}(\\sqrt{2}\\Delta S_{n}^{BV}/\\hbar)}{(\\Delta S_{n}^{BV}/\\hbar)^{f-1}}\\,, \\tag{42}\\] so that the typical action that controls the decay of the overlap is the one associated with the coefficients \\(c_{n}\\) that contribute most to the initial state. The action \\(\\overline{\\Delta S}^{BV}\\), evaluated for the average distribution \\(\\overline{W_{\\psi}}\\) under the Berry-Voros conjecture, is related to \\(\\Delta S_{n}^{BV}\\) by \\[(\\overline{\\Delta S}^{BV})^{2}\\,=\\,\\sum_{n}|c_{n}|^{2}(\\Delta S_{n}^{BV})^{2}\\,. \\tag{43}\\] For any displacement \\((\\delta{\\bf q},\\delta{\\bf p})\\) the previous relation can be inverted and used to write \\(\\overline{C_{\\psi}}^{BV}\\) in terms of \\(\\overline{\\Delta S}^{BV}\\). To illustrate this result, in Fig. 3 we plot \\(\\overline{C_{\\psi}}^{BV}\\) versus \\(\\overline{\\Delta S}^{BV}\\) for the one-dimensional case \\(f=1\\). The \\(\\Delta S\\)-action scale for the decay of the overlap is dictated by the value \\(\\hbar=0.16\\), as in the case described in the previous section. Note that this result is expected, since a power expansion of the Bessel function \\(J_{0}\\) in Eq. (42) to second order in \\(\\overline{\\Delta S}^{BV}\\) consistently recovers the result in Eq. (8).

## VI Discussion

The results of the previous sections show that the relevant \\(\\Delta S\\)-action scale for the decay of the overlap \\(|C_{\\psi}|^{2}\\) for small displacements \\((\\delta{\\bf q},\\delta{\\bf p})\\) is given by \\(\\hbar\\). The one-dimensional Gaussian state is a special example for which the dependence of \\(|C_{\\psi}|^{2}\\) on \\(\\Delta S\\) is given explicitly by Eq. (15), and its monotonic exponential decay is independent of particular details of the state, such as the widths in position and momentum. (On the contrary, the decay of the overlap with the displacement will depend on \\(\\sigma_{q}\\) and \\(\\sigma_{p}\\) through Eq. (16).) In Figures 2 and 3 the exponential dependence associated with an initial (Gaussian) coherent state is compared to the one corresponding to states at different preparation times. Although all the curves show a similar initial decay (dictated by \\(\\Delta S\\approx\\hbar\\)), the subsequent behaviour can have qualitatively different features, the presence of oscillations in the overlap for intermediate values of \\(\\Delta S\\) being the most relevant one. It is worth noting that these oscillations can never be regarded as true revivals. \\(|C_{\\psi}|^{2}\\) can be interpreted as the overlap between the states \\(|\\psi\\rangle\\) and \\(\\hat{D}(\\delta{\\bf q},\\delta{\\bf p})|\\psi\\rangle\\), the second one being obtained by a rigid displacement of \\(|\\psi\\rangle\\). This implies that \\(|C_{\\psi}|^{2}\\) cannot be equal to one for non-zero displacements, since the support of the state in phase space is finite. However, large amplitude oscillations are possible, as shown in Fig. 2 for \\(T=10\\). The pattern of oscillations will change in general with the preparation time. In the system described in Sec. IV, no oscillations are present for the initial state. For small preparation times some oscillations appear (see Fig. 2 for \\(T=10\\)), but their amplitude decreases as the preparation time increases.
For larger preparation times only oscillations with small amplitude are found. This behaviour can be interpreted by using Eq. (21) with the Wigner function, as proposed in Ref. [2]. For \\(T=0\\), the Wigner distribution associated with the coherent initial state is a Gaussian in phase space, and the monotonic decrease of \\(|C_{\\psi}|^{2}\\) with \\(\\Delta S\\) reflects the decrease of the overlapping regions between the states \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\) (or, equivalently, between \\(|\\psi\\rangle\\) and \\(\\hat{D}|\\psi\\rangle\\)). For small preparation times, the isolated evolution of the environmental system prior to the coupling generates a regular large scale structure in the distribution (characterised by large values of \\(\\Delta Z_{0}\\) in Fig. 2). In this case the coincidence between maxima and minima of that large scale structure in \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\) is responsible for the oscillations in the overlap. For longer preparation times, smaller scale structures appear in the distribution (corresponding to smaller values of \\(\\Delta Z_{0}\\)), and, more importantly, the randomness of the distribution of the patches in the structure increases (reflected in the similarity of the actions \\(\\Delta Z_{0}\\) and \\(a\\)). Then, as \\(T\\) increases, the amplitude of the oscillations becomes smaller until they are eventually negligible. This behaviour is expected in general for any non-linear system, the only difference being the preparation time \\(T\\) needed to develop the small scale structure. In the light of this discussion, special care must be taken in the interpretation of the results of Sec. V, where broad oscillations in the time-averaged overlap \\(|\\overline{C_{\\psi}}^{BV}|^{2}\\) could appear for large \\(\\overline{\\Delta S}^{BV}\\) (see Fig. 3). As the systems considered are non-linear, the states will in general develop a complex small-scale random structure for long enough preparation times, and only negligible oscillations will be present in the overlap. The broad oscillations in \\(|\\overline{C_{\\psi}}^{BV}|^{2}\\) are the result of the use of the Berry-Voros conjecture, which correctly describes the large scale structure but fails to describe the small scale correlations. Therefore, following the discussion in Sec. III related to Eq. (20), only the initial decay (corresponding to small displacements) for each particular sufficiently long preparation time is well described by \\(|\\overline{C_{\\psi}}^{BV}|^{2}\\). In the approach used in this work, the effect of the coupled evolution in the environmental system is equivalent to rigid displacements in phase space of the state \\(|\\psi(T)\\rangle\\) to give \\(|\\psi_{+}(T;\\delta t)\\rangle\\) and \\(|\\psi_{-}(T;\\delta t)\\rangle\\). (The dependence of \\(|\\psi_{\\pm}\\rangle\\) on the interaction time \\(\\delta t\\) is made explicit.) No additional structure in phase space is generated in the states \\(|\\psi_{\\pm}\\rangle\\) during the coupling, as the contribution of \\(\\hat{H}_{\\cal E}\\) is neglected. The interaction time \\(\\delta t_{0}\\) required to obtain a value \\(|C_{0}|^{2}\\) of the overlap is given by the condition \\(|C_{\\psi}(\\delta{\\bf q}_{0},\\delta{\\bf p}_{0})|^{2}=|C_{0}|^{2}\\), where the magnitudes of the displacements are \\(\\delta{\\bf q}_{0}=-2{\\bf c}_{\\bf p}\\delta t_{0}\\) and \\(\\delta{\\bf p}_{0}=-2{\\bf c}_{\\bf q}\\delta t_{0}\\).
Therefore, the larger the coupling constants, the smaller the interaction time \\(\\delta t_{0}\\). The condition \\(\\Delta S\\approx\\hbar\\) establishes a lower bound for the value of the displacements and consequently for the interaction time needed to attain effective decoherence. An alternative derivation of the lower bound is pointed out in Ref. [16]. A different aspect is the dependence of this \\(\\delta t_{0}\\) on the environmental state prior to the interaction. The size of the displacements \\((\\delta{\\bf q}_{0},\\delta{\\bf p}_{0})\\) can be described by the action \\(\\Delta Z_{0}=\\delta{\\bf q}_{0}\\delta{\\bf p}_{0}\\) for each particular state. As discussed in Sec. III, \\(\\Delta Z_{0}\\) is of the order of the action that sets the scale of the structures in the distribution for the phase space representations fulfilling Eq. (21). As a result, \\(\\delta t_{0}\\) decreases as the structure in the distribution associated with the state becomes smaller. For example, in the system analysed in Fig. 1, the interaction time \\(\\delta t_{0}\\) is proportional to \\(\\sqrt{a}\\)[16], provided the preparation time is long enough for \\(a\\) to properly describe the small scale structure. A more complex situation appears when the evolution induced by \\(\\hat{H}_{\\mathcal{E}}\\) is not neglected [16; 18]. In that case, besides the displacement, the distribution of the structure in phase space of the states \\(|\\psi_{+}\\rangle\\) and \\(|\\psi_{-}\\rangle\\) will change during the interaction. For the system studied in Fig. 1 two different regimes can be distinguished. For \\(T\\lesssim 20\\), a rapid variation of the sizes of the structure with time is found, and both mechanisms, the displacement and the development of structure, will determine the interaction time \\(\\delta t_{0}\\). However, for \\(T\\gtrsim 20\\), the variation of the sizes of the structure is much slower and \\(\\delta t_{0}\\) is determined by the time required to produce the displacement in phase space. As the displacement is approximately independent of the details of the state, \\(\\delta t_{0}\\) will be weakly dependent on the preparation time for \\(T\\gtrsim 20\\)[16]. Another important point to discuss is the dependence of the decoherence process on the number of degrees of freedom of the environmental system. As the number of degrees of freedom increases, smaller displacements in each variable are needed to obtain \\(\\Delta S\\approx\\hbar\\), which sets the action scale for the initial decay of the overlap in all cases, and the corresponding interaction time will be smaller too. This is compatible with the observation that the larger the environment, the more effective the decoherence process. Experimental tests of the decoherence process in the context discussed in this work can in principle be realized in the systems described in Ref. [30]. The interaction between two oscillators is mediated by a term of the form \\(\\hbar\\,G\\,a_{\\mathcal{S}}^{\\dagger}\\,a_{\\mathcal{S}}\\,(a_{\\mathcal{E}}+a_{\\mathcal{E}}^{\\dagger})\\), corresponding to a scattering process in which a quantum of energy of the environmental system \\(\\mathcal{E}\\) can be absorbed (\\(a_{\\mathcal{E}}\\)) or emitted (\\(a_{\\mathcal{E}}^{\\dagger}\\)) whereas the number of quanta of the pointer system \\(\\mathcal{S}\\) remains the same.
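In a truncated Fock space, the resulting picture, coherences given by overlaps of displaced copies of the environmental state as detailed in the next paragraph, is easy to play with numerically. The state, coupling constant, time step and truncation in the sketch below are arbitrary illustrative choices, not values taken from the experiments of Ref. [30].

```python
import numpy as np
from scipy.linalg import expm

N = 80
a = np.diag(np.sqrt(np.arange(1, N)), 1)            # annihilation operator

def D(alpha):                                       # displacement operator
    return expm(alpha * a.conj().T - np.conj(alpha) * a)

psi = np.zeros(N, complex); psi[0] = 1.0
psi = D(2.0) @ psi                                  # stand-in environmental state

G, dt = 0.1, 1.0
for n, nprime in [(0, 1), (0, 3), (0, 6)]:
    ov = np.vdot(D(1j * G * n * dt) @ psi, D(1j * G * nprime * dt) @ psi)
    print(n, nprime, abs(ov)**2)                    # ~exp(-(G dt)^2 (n - n')^2)
```

For this coherent stand-in state the printed coherences decay Gaussianly with \\(|n-n^{\\prime}|\\), illustrating how larger Fock-index separations decohere faster for a fixed interaction time.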
For these cases, the coherences of the reduced density operator of the pointer system in the basis given by the Fock states are proportional to the overlap between the states \\(\\hat{D}(\\alpha=iGn\\delta t)|\\psi(T)\\rangle\\) and \\(\\hat{D}(\\alpha^{\\prime}=iGn^{\\prime}\\delta t)|\\psi(T)\\rangle\\). The operator \\(\\hat{D}(\\alpha=iGn\\delta t)\\equiv\\exp\\{\\alpha a_{\\mathcal{E}}^{\\dagger}-\\alpha^{*}a_{\\mathcal{E}}\\}\\) produces a displacement in phase space that depends linearly on the interaction time \\(\\delta t\\), the coupling constant \\(G\\), and the index \\(n\\) of one of the Fock states of \\(\\mathcal{S}\\) involved in the coherence under consideration. In summary, the role of \\(\\hbar\\) as a boundary between different decoherence regimes has been clarified in the context of a characteristic action \\(\\Delta S\\), which depends on the quantum state of the environmental system. We related the action \\(\\Delta S\\) to the complementary quantity \\(\\Delta Z\\), and described their connection with the pattern of structures developed in phase space.

###### Acknowledgements.

We thank W. Zurek and the referee for useful comments. This work was supported by the "Ministerio de Ciencia y Tecnología" under FEDER BFM2001-3349, by "Consejería de Educación, Cultura y Deportes (Gobierno de Canarias)" under Contract No. PI2002-009, and by CERION II (Canadian European Research Initiative on Nanostructures).

## References

* (1) See W. H. Zurek, Phys. Today **44**, 36 (1991), and references therein.
* (2) W. H. Zurek, Nature **412**, 712 (2001).
* (3) H. K. Lo, S. Popescu, and T. Spiller, _Introduction to Quantum Computation and Information_ (World Scientific, 1998).
* (4) D. Giulini, E. Joos, C. Kiefer, J. Kupsch, I.-O. Stamatescu, and H. D. Zeh, _Decoherence and the Appearance of a Classical World in Quantum Theory_ (Springer, 1996).
* (5) D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof, J. Res. Natl. Inst. Stand. Technol. **103**, 259 (1998).
* (6) E. P. Wigner, Phys. Rev. **40**, 749 (1932).
* (7) J. E. Moyal, Proc. Camb. Phil. Soc. **45**, 99 (1949).
* (8) A. Jordan and M. Srednicki, e-print quant-ph/0112139.
* (9) M. V. Berry, J. Phys. A: Math. Gen. **10**, 2083 (1977); M. V. Berry, Proc. R. Soc. Lond. A **423**, 219 (1989); A. Voros, Lecture Notes in Physics **93**, 326 (1979).
* (10) D. Alonso, S. Brouard, J. P. Palao, and R. Sala Mayato, Atti Fond. Giorgio Ronchi **58**, 863 (2003).
* (11) N. Cartwright, Physica A **83**, 210 (1976).
* (12) L. Cohen, J. Math. Phys. **7**, 781 (1966).
* (13) L. Cohen, _Time-Frequency Analysis_ (Prentice Hall, New York, 1995).
* (14) R. Sala, J. P. Palao, and J. G. Muga, Phys. Lett. A **231**, 304 (1997).
* (15) A. J. E. M. Janssen, Philips J. Res. **37**, 79 (1982).
* (16) Z. P. Karkuszewski, C. Jarzynski, and W. H. Zurek, Phys. Rev. Lett. **89**, 170405 (2002).
* (17) Z. P. Karkuszewski, J. Zakrzewski, and W. H. Zurek, Phys. Rev. A **65**, 042113 (2002).
* (18) A different approach is used in Ref. [16], where the evolution under Hamiltonian (22) is not neglected during the interaction process, and the coupling term considered results in a shift of the position of the minimum of the confining harmonic potential.
* (19) M. D. Feit, J. A. Fleck, and A. Steiger, J. Comput. Phys. **47**, 412 (1982).
* (20) V. N. Prigodin, Phys. Rev. Lett. **74**, 1566 (1995); V. N. Prigodin, N. Taniguchi, A. Kudrolli, V. Kidambi and S. Sridhar, Phys. Rev. Lett. **75**, 2392 (1995).
* (21) M. Srednicki, Phys. Rev.
E **54**, 954 (1996).
* (22) M. Feingold and A. Peres, Phys. Rev. A **34**, 591 (1986).
* (23) M. Srednicki, Phys. Rev. E **50**, 888 (1994).
* (24) D. Alonso and S. R. Jain, Phys. Lett. B **387**, 812 (1996); S. R. Jain and D. Alonso, J. Phys. A: Math. Gen. **30**, 4993 (1998).
* (25) S. W. McDonald and A. N. Kaufman, Phys. Rev. A **37**, 3067 (1988).
* (26) S. Hortikar and M. Srednicki, chao-dyn/9719925 (1997).
* (27) O. Agam and S. Fishman, J. Phys. A **26**, 2113 (1993).
* (28) E. J. Heller, Phys. Rev. Lett. **53**, 1515 (1984).
* (29) E. Bogomolny, Physica D **31**, 169 (1988).
* (30) C. K. Law, Phys. Rev. A **51**, 2537 (1994); S. Mancini, V. I. Man'ko, and P. Tombesi, Phys. Rev. A **55**, 3042 (1997); S. Bose, K. Jacobs, and P. L. Knight, Phys. Rev. A **59**, 3204 (1999); C. W. Gardiner and P. Zoller, _Quantum Noise_, 2nd ed. (Springer, 2000).
A characteristic action \\(\\Delta S\\) is defined whose magnitude determines some properties of the expectation value of a general quantum displacement operator. These properties are related to the capability of a given environmental 'monitoring' system to induce decoherence in quantum systems coupled to it. We show that the scale for effective decoherence is given by \\(\\Delta S\\approx\\hbar\\). We relate this characteristic action with a complementary quantity, \\(\\Delta Z\\), and analyse their connection with the main features of the pattern of structures developed by the environmental state in different phase space representations. The relevance of the \\(\\Delta S\\)-action scale is illustrated using both a model quantum system solved numerically and a set of model quantum systems for which analytical expressions for the time-averaged expectation value of the displacement operator are obtained explicitly.
# GNSS-R: Operational Applications

G. Ruffini, O. Germain, F. Soulat, M. Taani and M. Caparrini, Starlab, Edifici de l'Observatori Fabra, 08035 Barcelona, Spain, [http://starlab.es](http://starlab.es)

## Introduction

Several GNSS constellations and augmentation systems are presently operational, such as the Global Positioning System (GPS), owned by the United States, and, to some extent, the Russian GLObal Navigation Satellite System (GLONASS). In the next few years, the European Satellite Navigation System (Galileo) will be deployed. By the time Galileo becomes operational in 2008, more than 50 GNSS satellites will be emitting very precise L-band spread spectrum signals, and will remain in operation for at least a few decades. Although originally meant for localization, these signals will no doubt be used within GCOS/GOOS. The immediate objective of Starlab's Oceanpal® project is the development of technologies for operational in-situ or low-altitude water surface monitoring using GNSS Reflections, a passive, all-weather radar technology of great potential. Oceanpal® is an offspring of technology developed within several ESA/ESTEC projects targeted on the exploitation of GNSS Reflections from space2, following the proposal of M. Martin-Neira (1993). We also note that GNSS-R is but an example of passive, bistatic radar (see, e.g., Cantafio 1993), a subject with a long history. In fact, bistatic radar was a subject of research in the early days--see Conant (2002) for a fascinating account of radar history.

Footnote 2: Such as the ESA projects OPPCSAT, OPPCSAT 2 (focusing on Speculometry/Scatterometry), Paris-Alpha, Paris-Beta, Paris-Gamma (Altimetry) and GIOS-1 (focusing on Ionospheric monitoring). See the Acknowledgements for more details.

Although our focus here is on low altitude applications, it is worthwhile explaining in more detail the rationale for spaceborne deployment: an important aspect of the GNSS-R concept is the synergy between space and ground monitoring using the same technology and the same signal infrastructure, which will ensure homogeneity in the measurements. An overview of the parameters measured by GNSS-R is provided in Table 1. In Figure 1 we can see a schematic rendition of a spaceborne GNSS-R mission, as well as an illustration showing the multiple (GPS) reflection points available to a ground receiver during a 24-hour period. Note the multi-static character of the technique: a single passive instrument can provide a rather large swath, thanks to the availability of multiple emitters. From the ground and air, it can also provide simultaneous measurements in different geometric configurations over the same area--an important added value for geophysical inversion.

## 2 GNSS-R IN SPACE: THE PETREL EARTH EXPLORER

In the future, the artificial separation between geophysical "layers" (ocean, troposphere, stratosphere, etc.) will disappear, and future Earth global models will need to reflect the fundamental role of atmosphere-ocean coupling. The sea surface provides the ocean-atmosphere link, regulating momentum, energy and gas exchange, and several fundamental ocean circulation features are directly related to wind-wave induced turbulent transports in the oceanic mixed layer.
In particular, eddies and gyres are fundamental agents for mixing, heat transport and feedback to general circulation, as well as transport of nutrients, chemicals and biota for biochemical processes. Moreover, at the atmosphere-ocean boundary, many temporal and spatial scales play an important role: from the molecular to the synoptic level, from seconds to eons. For this reason, observing this interface appropriately is an important challenge for global observation systems, which will require high resolution, wide swaths, frequent revisits and long-term stability (Le Traon et al., 2002). All of these requirements are actively addressed by the GNSS-R concept. The ocean-atmosphere interface is characterized (to the lowest statistical order) by the geophysical variables of local mean sea level (_h_), significant wave height (_swh_) and directional mean square slope (_dmss_). Mesoscale measurements of sea surface _dmss_ are an important missing element from the global climate and ocean observation systems, and would greatly help to understand and quantify the atmosphere-ocean flux of energy, momentum and gas. In addition, since ocean forcing is a non-linear and strongly intermittent phenomenon (both in space and time), frequent space-time co-located mesoscale measurements of \\(h\\) and _dmss_ are highly desirable. A similar statement, asserting the importance of simultaneous altimetry and scatterometry measurements, was already stated in 1981 (WOCE CCCO, see Thompson et al., p. 35 in Siedler et al., 1981). The scientific objectives of a spaceborne GNSS-R mission such as PETREL, recently submitted to the Earth Explorer ESA program (Ruffini and Chapron, 2002), should thus address the medium and long-term components for physical climate observation (Theme 2 of the ESA Earth Explorer Program), with a focus on providing a key element for the study of atmosphere-ocean coupling. The elementary geophysical products provided by such a mission highlight mesoscale collocated altimetric and sea surface directional mean square slope measurements.

Table 1: Summary of the main measurements of GNSS-R for oceanography. Other possibilities include Surface Currents, Surface Pressure (from space) and Dielectric constant. (The body of the table could not be recovered from the source.)

These measurements are also of great interest for the observation of surface winds, mean sea surface, sea-ice, salinity, ionospheric electron content and tropospheric delay (e.g., for measurement of surface pressure over the oceans). The measurement of currents is in principle also feasible. Results from recent ESA studies and experiments, and also from our colleagues in the United States, indicate that GNSS-R data can provide sufficient information to resolve mesoscale features in the ocean, as well as co-located directional mean square slope measurements (see, e.g., other papers from the _2003 Workshop on Oceanography with GNSS Reflections_ in the references). As recent ESA studies indicate, a single GNSS-R Low Earth Orbiter can provide samplings of less than 100 km resolution and less than 10 days revisit time with an equivalent altimetric precision better than 5 cm, sufficient for competitive mesoscale altimetry applications. Such a mission, capable of picking up as many as 12 signals from GPS, Galileo and Inmarsat satellites, would have significant impact on the mapping of the mesoscale variability.
According to the present understanding of the GNSS-R error budget (based on theoretical studies and related experimental campaigns carried out in Europe and the US), impact studies with simulated data carried out within the scope of the Paris Beta and Paris Gamma ESA studies show that a GNSS-R mission should allow mapping of the mesoscale variability in high eddy variability regions better than Jason-1+ENVISAT together. These studies also indicate that the combination of GNSS-R with Jason-1 and ENVISAT data can improve the sea level mapping derived from the combination of Jason-1 and ENVISAT by a factor of about 2. In well sampled regions, the improvement could reach up to a factor of 4 (see Le Traon et al., 2003). The precision and sampling provided by such measurements may also make GNSS-R an effective tool for tsunami detection and the measurement of surface pressure over the oceans (e.g., in the Southern Hemisphere or inside hurricanes). We recall that the troposphere induces a delay in GNSS signals which can be parameterized, to first order, by a measurement of surface pressure.

Figure 1: Left: Artist's concept of the bistatic GNSS spaceborne mission. GNSS signals reflect off the Earth surface and are gathered by a spaceborne receiver. All direct signal links are not shown, for simplicity. Right: GPS-R specular points after 24 hours as seen from a receiver at 50 m altitude on the Barcelona coast. A conservative cut-off of 20 degrees in the local elevation of the reflected signals has been used for display purposes.

## Recent Coastal Experimental Campaigns

Many experiments have taken place to date, carried out by different institutions in Europe and the US: from space, stratospheric balloons, aircraft and the ground. The reader is invited to read through the references for abundant experimental work. Here we report briefly on Starlab's 2003 Coastal campaign. Recent coastal GNSS-R experimental campaigns led by Starlab have collected data from low-altitude stationary platforms in a wide range of sea state conditions, using both experimental GPS-R equipment sent by ESA/ESTEC and an Oceanpal® prototype. Some of these experiments (the Coastal series) have been carried out on the Barcelona harbor breakers, with the logistic support of the Barcelona Port Authority. As shown in Figure 2, two antennas are usually employed to collect GPS signals: one antenna (the "direct" or "up-looking" antenna) is zenith looking and Right Hand Circularly Polarized to collect the direct GPS signal, while the other (the "reflected" or "down-looking" antenna) is nadir/side looking and Left Hand Circularly Polarized to recover the reflected signal. The output from each antenna is sent to a GPS front end. The IF data generated by the receivers is then recorded at a sufficiently high sampling frequency, after (typically) being digitized at one bit. The experimental data has been fed to Starlab's GPS-Reflections processor (STARLIGHT2), which retrieves the reflected electromagnetic field and estimates sea level and sea state.

Footnote 2: STARLab Interferometric GNSS Toolkit.

The STARLIGHT processor, through the conventional correlation method, evaluates the reflected field magnitude and phase. The retrieved field contains very useful information on the characteristics of the reflecting surface. Comparison between this field and the direct one is then performed to infer the desired quantities, such as sea roughness and sea level.
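The despreading step behind this correlation method can be illustrated with a toy example, not to be confused with the actual STARLIGHT processing: a pseudo-random ±1 sequence stands in for a GPS C/A code, is delayed and buried in noise, and correlation against a clean replica recovers the code phase. The code length, delay and noise level below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(7)
code = rng.choice([-1.0, 1.0], size=1023)       # stand-in for a C/A PRN code
delay = 137
rx = np.roll(code, delay) + 2.0 * rng.normal(size=code.size)  # noisy "reflection"

# correlate with a clean replica at every trial lag
waveform = np.array([np.dot(rx, np.roll(code, k)) for k in range(code.size)])
print(int(np.argmax(waveform)))                 # ~137: recovered code phase
```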
Recent altimetric results using the phase in mild sea conditions are at the centimeter level (Caparrini et al., 2003), and there appears to be very good correlation between sea state and field dynamics. Figure 2 shows some details of the experimental hardware set-up. The particular experiment shown in Figure 2 took place at dawn. Along with the GPS signals, another source of opportunity was exploited: the rising Mediterranean sun. The use of multi-frequency bistatic specular scattering instruments is very important to validate models, and may provide clues on how to separate ocean surface spectral parameters (such as surface wind and wave age). To understand the geophysical content of the data it is useful to perform a "gedanken"3. As seen from a static platform, the electric field scattering from a frozen ocean could be represented as a static complex phasor (representing the phase and amplitude of the electric field). The reader can then readily imagine that the motion of the ocean surface translates into motion of the phasor in the complex plane.

Footnote 3: "Thought experiment", in German.

In Figure 4 we can see such a phasor representation of the reflected electric field (at GPS frequencies) simulated using Fresnel scattering from a virtual ocean generated using the Elfouhaily et al. (1997) ocean spectrum, as well as the real thing obtained using experimental data from GPS L1 signals (from the Coastal campaign in the Barcelona harbor). Analysis of the dynamics of the reflected phasor provides the key to estimating sea surface parameters from such static platforms. In Figure 3, for instance, we show some results from a study carried out with the help of the scattering simulator for sea state retrieval from phase statistics (Soulat, 2003). It also shows preliminary results obtained through a Fourier analysis of the complex reflected field gathered at the Barcelona Port for different sea conditions. As observed, and as expected from simulations and analytic work, the energy and width of the spectrum increase quite clearly with surface wind speed, which is a very promising indication for the development of our inversion algorithms. Aircraft or spacecraft observations must be analyzed differently, basically exploiting the size and shape of the "glistening" zone (as in Spooner, 1822, or in the classic work by Cox and Munk using optical data). The fundamental tool to study this is provided by the Delay-Doppler mapping SAR-like capability of GNSS-R (see, e.g., Ruffini 1999 and 2000a, Germain 2003 and Soulat 2003).

## 3 GNSS-R as Tide Gauge and Sea State Sensor: The Oceanpal® concept

Starlab is now developing an operational instrument based on GNSS-R, Oceanpal®. As we have seen, initial results indicate that this sensor will provide very useful altimetry and sea state information from, at least, low altitude applications (e.g., coasts or aircraft). The company is perfecting robust algorithms for operational code and phase tracking of the reflected field and extraction of geophysical parameters. As discussed, reflected signals carry significant information on sea state and topography, and both experimental work and simulations have demonstrated the potential of this concept for coastal and airborne altimetry and sea state monitoring.

Figure 2: Simplified schematic representation of the GNSS-R concept.
The direct and reflected signals originating from a GNSS source are combined at the receiver to estimate the distance to the surface and to the Earth Centre (atmospheric errors cancel out at low altitudes). On the bottom right, a detail of the Coastal Experiment in the Barcelona Port using equipment provided by ESA-ESTEC and with the support of the Barcelona Port Authority. The up- and down-looking antennas can be seen. On the bottom left, an example of the analysis of optical jitter to support GNSS-R analysis. Slope statistics can be calculated using optical measurements.

As seen from the instrument, several GNSS emitters are simultaneously in view at any given time, providing information from separated scattering points with different geometries and thus strengthening the extraction of oceanographic variables (geophysical inversion). Reflected signals are affected by surface "roughness", motion (sea state, orbital motion, currents), surface dielectric properties (i.e., salinity and pollution), and mean surface height. The instrument exploits the "noisy" reflected electric field to infer ocean, river, or lake surface properties, using robust techniques. Although bistatic radar can work exploiting various sources of opportunity, GNSS are in many ways unique: GNSS-R altimetric products are very stable and long-term, and could automatically provide absolutely calibrated mean sea level in the GNSS reference system. Thanks to its GNSS "pedigree", Oceanpal® is an inexpensive, all-weather, passive concept for remote sensing of the ocean and other water surfaces, for accurate provision of sea state and altimetry. The instrument is designed so that it can be deployed on multiple platforms: static (coasts, harbors, offshore) and slow-moving (e.g., boats, floating platforms, buoys, stratospheric platforms, aircraft, etc.). Spaceborne application of GNSS-R requires further technology development, and is the subject of several ongoing ESA projects. We envisage that this system will act as an accurate, distributed, "dry" tide gauge network while conducting surface scattering monitoring, providing a stable and precise service based on the growing, long-term GNSS infrastructure. As such, Oceanpal® is part of another Starlab concept in which multiple small, inexpensive sensors will exchange information to "synthesize" an extended remote sensing system and provide relevant oceanographic information to a whole array of end-users (GOOS, Public Authorities, harbors, shipping, fishing industry, off-shore mining, and in general to those conducting their activities in or near the sea).

## Summary and Outlook

GNSS-R is a budding new technology with a bright outlook. We foresee powerful applications for altimetry and scatterometry from ground, air and space using GNSS-based bistatic radar technology: geophysical applications will clearly benefit from the precision, accuracy, abundance, stability and long-term availability of GNSS signals. In this paper we have highlighted an inexpensive, passive, dry operational sensor concept for use on coastal platforms and aircraft, now under development at Starlab. This sensor will provide precise sea level information and sea state, and we believe it will occupy an important niche in operational oceanography and marine operations. Other marine applications of this technology (salinity, pollution, currents) are also being studied.
However, we emphasize that ESA and other agencies are currently working on the development of GNSS-R space sensors: recent studies indicate that GNSS-R data will have a significant altimetric and speculometric impact from space in conjunction with standard approaches. Mesoscale altimetry is an important target of recent studies, since one of the strongest assets of GNSS-R is the abundance of reflected signals, which can provide very dense and accurate samplings. Speculometry can provide measurements of directional sea surface roughness, which can then be correlated with surface winds and sea state for operational applications, as well as used directly for scientific studies of ocean-atmosphere coupling. Given the growing GNSS availability and the long-term outlook for GNSS service signals, the combination of GNSS-R data from air, ground and space can provide a long-lasting oceanographic monitoring infrastructure for decades to come.

## Acknowledgements

This work was partly supported by a Spanish Ministry of Science and Technology PROFIT project. We are also thankful for the support received in the context of several GNSS-R Starlab-ESA/ESTEC contracts: OPPCAT (13461/99/NL/GD), the ongoing OPPCAT 2 (3-10120/01/NL/SF), both dedicated to GNSS-R scatterometry (Speculometry), as well as ESA/ESTEC Contract 15083/01/NL/MM (PARIS BETA), ESA/ESTEC Contract No. 14285/85/nl/pb, Starlab CCN3-WP3 (PARIS ALPHA) and the ongoing ESA PARIS GAMMA project (all dedicated to the study of GNSS-R spaceborne altimetric applications). Special thanks to ESA/ESTEC for allowing us to use their GPS-R experimental equipment, to the Barcelona Port Authority (J. Vila) and the Polytechnic University of Catalonia/TSC (A. Camps) for experimental logistic support during the Coastal campaign, and to our partners in these ESA projects. _All Starlab authors have contributed significantly; the Starlab author list has been ordered randomly._

Figure 3: Left: Coastal campaign data: Fourier analysis of the complex reflected field for different wind speeds: 1.6 m/s (green), 3 m/s (red) and 8.2 m/s (blue). Right: Simulations of phase dynamics statistics versus sea height RMS using Starlab's GNSS-R simulator (GRADAS).

Figure 4: Top left: Starlab's Oceanpal® prototype is a GNSS-R sensor ideally suited for coastal, river or lake applications. Top right: The dynamic L-band reflected electric field in a complex phasor representation of the amplitude and phase modulation of the carrier as produced by a virtual moving ocean after a few seconds of time evolution, from a simulation using Starlab's GRADAS software package (phasor amplitude units are arbitrary). Bottom: On the left, the dynamic phasor of the direct and reflected GPS L1 field after one second of time evolution, using data from the Coastal experiment processed using Starlab's STARLIGHT software (units are SNRV, integration time is 10 ms). On the right, a typical Oceanpal® correlation waveform.

Figure 5: Oceanpal® interface concept. On the top panel, general information on the location of the sensor is provided, as well as on the available network of sensors and the resulting overall sea state or Sea Surface Height map. On the second panel, the sea state (SWH index) is shown, as well as visual and acoustic cues on sea state. In the third panel, the Sea Surface Height is shown, as well as information in the form of text. Finally, information on the available satellites and signal "health" is provided.
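The trend summarized in Figure 3, a reflected-field spectrum whose energy and width grow with sea state, can be mimicked with a toy phasor model: a steady specular component plus a diffuse component whose Doppler spread grows with roughness. All numbers below are illustrative assumptions, not GRADAS or STARLIGHT output.

```python
import numpy as np

rng = np.random.default_rng(3)
fs, T = 100.0, 60.0                          # 10 ms complex samples, 60 s record
t = np.arange(0.0, T, 1.0 / fs)
f = np.fft.fftfreq(t.size, 1.0 / fs)

def reflected_field(spread_hz):
    z = rng.normal(size=t.size) + 1j * rng.normal(size=t.size)
    diffuse = np.fft.ifft(np.fft.fft(z) * (np.abs(f) < spread_hz))
    return 3.0 + diffuse / np.std(diffuse)   # specular + unit-power diffuse part

for spread in (0.5, 2.0, 8.0):               # rougher sea -> wider Doppler spread
    S = np.abs(np.fft.fft(reflected_field(spread)))**2
    width = np.sqrt(np.sum(f**2 * S) / np.sum(S))
    print(spread, round(width, 2))           # spectral width tracks "sea state"
```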
## References

* Cantafio (1989) Cantafio, L.J., 1989, "Space-based Radar Handbook", Artech House.
* Caparrini (1998) Caparrini, M., 1998, Using reflected GNSS signals to estimate surface features over wide ocean areas. ESTEC Working Paper No. 2003, Dec 1998.
* Caparrini et al. (2003) Caparrini, M., Ruffini, L., Ruffini, G., GNSS-R Altimetry with L1 Data from the Bridge 2 Campaign, in Proceedings of the 2003 Workshop on Oceanography with GNSS Reflections, Barcelona, 2003.
* Cardellach et al. (2002) Cardellach, E., G. Ruffini, D. Pino, A. Rius, and A. Komjathy, 2002, MEditerranean Balloon EXperiment: GPS reflections for wind speed retrieval from the stratosphere. To appear in Remote Sensing of Environment, 2003.
* Elfouhaily et al. (1997) Elfouhaily, T., B. Chapron, K. Katsaros, and D. Vandemark, 1997, A unified directional spectrum for long and short wind-driven waves, J. Geophys. Res., vol. 102, no. C7, pp. 15,781-15,796.
* Garrison et al. (2000) Garrison, J.L., G. Ruffini, A. Rius, E. Cardellach, D. Masters, M. Armatys, and V.U. Zavorotny, 2000, Preliminary results from the GPSR Mediterranean Balloon Experiment (GPSR-MEBEX), _Proceedings of ERIM 2000, Remote Sensing for Marine and Coastal Environments_, Charleston, 1-3 May, ISSN 1066-3711.
* Germain (2003) Germain, O., The Eddy Experiment II: L-band and Optical Speculometry for sea-roughness retrieval from Low Altitude Aircraft, in Proceedings of the 2003 Workshop on Oceanography with GNSS Reflections, Barcelona, 2003.
* Le Traon et al. (2002) Le Traon, P.-Y., G. Dibarbour, G. Ruffini, and E. Cardellach, 2002, Mesoscale Ocean Altimetry Requirements and Impact of GPS-R measurements for Ocean Mesoscale Circulation Mapping, Abridged Starlab ESA/ESTEC Technical Report from the Paris Beta project, [http://arxiv.org/abs/physics/0212068](http://arxiv.org/abs/physics/0212068).
* Lowe et al. (2002) Lowe et al., 2002, First spaceborne observation of an Earth-reflected GPS signal, Radio Science, Vol. 37, No. 1.
* Martin-Neira (1993) Martin-Neira, M., 1993, A passive reflectometry and interferometry system (PARIS): application to ocean altimetry, ESA Journal, vol. 17, pp. 331-355.
* Martin-Neira et al. (2001) Martin-Neira, M., M. Caparrini, J. Font-Rossello, S. Lannelongue, and C. Serra Vallmitjana, 2001, The PARIS Concept: an Experimental Demonstration of Sea Surface Altimetry Using Reflected GPS Signals, IEEE Trans. Geoscience and Remote Sensing, vol. 39, no. 1.
* Ruffini et al. (1999) Ruffini, G., et al., 1999, GNSS-OPPCSAT WP1000 ESA Report: Remote Sensing of the Ocean by Bistatic Radar Observations: a Review. Available at [http://217.126.65.140/library/WP1000.ps.gz](http://217.126.65.140/library/WP1000.ps.gz)
* Ruffini et al. (2000a) Ruffini, G., J.L. Garrison, E. Cardellach, A. Rius, M. Armatys, and D. Masters, 2000a, Inversion of GPSR Delay-Doppler Mapping Waveforms for wind retrieval, IGARSS, Honolulu, July 2000. Available at [http://217.126.65.140/staff/guiliospapers/igars2000.ps.gz](http://217.126.65.140/staff/guiliospapers/igars2000.ps.gz).
* Ruffini and Soulat (2000b) Ruffini, G., and F. Soulat, 2000b, Paris Interferometric Processor Theoretical Feasibility Study part, [http://arxiv.org/ps/physics/0011027](http://arxiv.org/ps/physics/0011027)
* Ruffini et al. (2002) Ruffini, G., M. Caparrini, and L. Ruffini, 2002, PARIS Altimetry with L1 Frequency Data from the Bridge 2 Experiment, Abridged Starlab ESA/ESTEC Technical Report. [http://arxiv.org/abs/physics/0212055](http://arxiv.org/abs/physics/0212055).
* Ruffini et al.
(2003) Ruffini, G., Soulat, F., Caparrini, M., Germain, O., The Eddy Experiment I: GNSS-R Altimetry from Low Altitude Aircraft, in Proceedings of the 2003 Workshop on Oceanography with GNSS Reflections, Barcelona, 2003. * Soulat (2003) Soulat, F., 2003, Sea Surface Remote Sensing with GNSS and Sunlight Reflections, UPC-Starlab PhD Thesis. * Spooner (1982) Spooner, J.,1822, Sur la lumiere des ondes de la mer, Corresp. Astronomique du Baron de Zach, 6:331. * Zavorotny and Voronovich (2001) Zavorotny, V.U., and A.G. Voronovich, A.G., Scattering of GPS Signals from the Ocean with Wind Remote Sensing Application, IEEE Transactions on Geoscience and Remote Sensing, Vol. 38, No. 2, pp. 951-964.
This paper provides an overview of operational applications of GNSS-R, and describes Oceanpal®, an inexpensive, all-weather, passive instrument for remote sensing of the ocean and other water surfaces. This instrument is based on the use of reflected signals emitted from GNSS, and it holds great potential for future applications thanks to the growing, long-term GNSS infrastructure. The instrument exploits the fact that, at any given moment, several GNSS emitters are simultaneously in view, providing separated multiple scattering points with different geometries. Reflected signals are affected by surface "roughness" and motion (i.e., sea state, orbital motion, and currents), mean surface height and dielectric properties (i.e., salinity and pollution). Oceanpal® is envisioned as an accurate, "dry" tide gauge and surface roughness monitoring system, and as an important element of a future distributed ocean remote sensing network concept. We also report some results from the Starlab Coastal campaign, focusing on ground GNSS-R applications, carried out in the context of several GNSS-R projects1. Footnote 1: Such as the ESA projects OPPSCAT, OPPSCAT 2 (focusing on speculometry/scatterometry), Paris-Alpha, Paris-Beta, Paris-Gamma (altimetry) and GIOS-1 (focusing on ionospheric monitoring). See the Acknowledgements for more details.
# The GNSS-R Eddy Experiment I: Altimetry from Low Altitude Aircraft

G. Ruffini, F. Soulat, M. Caparrini, O. Germain _Starlab, C. de l'Observatori Fabra s/n, 08035 Barcelona, Spain, [http://starlab.es](http://starlab.es)_ M. Martin-Neira _ESA/ESTEC, Keplerlaan 1, 2200 Noordwijk, The Netherlands, [http://esa.int](http://esa.int)_

## 1 Introduction

Several Global Navigation Satellite System (GNSS) constellations and augmentation systems are presently operational or under development, such as the pioneering US Global Positioning System (GPS), the Russian Global Navigation Satellite System (GLONASS) and the European EGNOS. In the next few years, the European Satellite Navigation System (Galileo) will be deployed, and GPS will be upgraded with more frequencies and civilian codes. By the time Galileo becomes operational in 2008, more than 50 GNSS satellites will be emitting very precise L-band spread spectrum signals, and will remain in operation for at least a few decades. Although originally meant for (military) localization, these signals will no doubt be used within GCOS/GOOS in many ways (e.g., atmospheric sounding). We focus here on the budding field known as GNSS Reflections, which aims at providing tools for remote sensing of the ocean surface (especially sea surface height and roughness) and the atmosphere over the oceans.

This paper reports a new development of the GNSS-R altimetric concept (PARIS). The PARIS concept (Passive Reflectometry and Interferometry System [12]) addresses the exploitation of reflected Global Navigation Satellite System signals for altimetry over the oceans. Ocean altimetry, the measurement of Sea Surface Height (SSH), is indeed one of the main applications of the GNSS-R passive radar concept. GNSS-R provides measurements that are automatically integrated in the GNSS reference system. In addition, this technique can provide the unprecedented spatio-temporal samplings needed for mesoscale monitoring of ocean circulation. It is at the mesoscale that phenomena such as eddies play a fundamental role in the transport of energy and momentum, yet current systems are unable to probe them.

Many GNSS-R altimetry and scatterometry experiments have been carried out to date, and the list continues to grow thanks to dedicated efforts in Europe and the US. GNSS-R experimental data have now been gathered from Earth-fixed receivers ([1, 13, 20, 2] among others), aircraft ([9, 7, 3, 11, 8] among others), stratospheric balloons ([6, 4] among others), and from space platforms ([10] among others). This experimental work is converging to a unified understanding of the GNSS-R error budget, but so far these experiments have focused on waveform modeling and short-term ranging precision. None to date have attempted to retrieve a mesoscale altimetric profile as provided by monostatic radar altimeters such as Jason-1.

In the four main sections of this paper we report PARIS altimetric processing results using data from the 09-27-2002 Eddy Experiment, carried out in the frame of the European Space Agency "PARIS Gamma" contract. The first section addresses the issue of _tracking_ the direct and reflected GPS signals, which consists of appropriately placing the delay and Doppler gating windows and despreading the GPS signals by means of correlation with clean replicas. Tracking produces incoherently averaged _waveforms_ (typically with a cadence of 1 second). The extraction of the information needed for the altimetric algorithm from the waveforms is described in the second section.
This is the _retracking_ step, and it yields the so-called _measured temporal lapse_ (or lapse, for short) between the direct and reflected signal. In the third section, the altimetric algorithm (producing the Sea Surface Height estimates) is described and, finally, results are presented in the fourth section.

## 2 Data collection and pre-processing

### Data set

The GNSS-R data set was gathered during an airborne campaign carried out by Starlab in September 2002. The GPS/INS (Inertial Navigation System) equipped aircraft overflew the Mediterranean Sea, off the coast of Catalonia (Spain), northwards from the city of Barcelona for about 150 km (Figure 1). This area was chosen because it is crossed by a ground track of the Jason-1 altimeter (track number 187). The aircraft overflew this track during the Jason-1 overpass, for precise comparison. In addition, a GPS buoy measurement of the SSH at a point along the same track was obtained, basically to validate the Jason-1 measurement2. Footnote 2: The buoy campaign was carried out by the Institut d'Estudis Espacials de Catalunya (IEEC).

During the experiment, the aircraft overflew the Jason-1 ground track twice, in the South-North direction and back, gathering about 2 hours of raw GPS-L1 IF data sampled at 20.456 MHz (see [17] for the experimental setup). In this paper, we focus on the data processing of the ascending track, i.e., from P1 to P2 in Figure 1, with the Jason-1 overpass happening roughly in the middle of the track.

### Tracking GNSS-R signals

Altimetry with GNSS-R signals is based on the measurement of the temporal _lapse_ between the time of arrival of the direct GNSS signal and the time of arrival of the same signal after its reflection from the target surface. Successful tracking of both signals is the first step for altimetric processing. Under general sea conditions, GPS signals reflected from a rough sea surface cannot be tracked by a standard receiver, because of the signal corruption due to the reflection process itself (in GPS terminology, the signal is affected by severe multipath from the sea clutter). For this reason, a dedicated software receiver has been developed3. The high-level block diagram of this receiver is shown in Figure 2. The processor is composed of two sub-units, one for each signal. The unit that processes the direct signal (the master unit) uses standard algorithms to track the correlation peak of the signal, both in time and frequency. The unit that processes the reflected signal (the slave unit) performs correlations blindly, with time delay and frequency settings which depend on those of the master unit. Footnote 3: STARLIGHT, also described in [2].

One of the most relevant tracking parameters is the temporal span of the correlations, i.e., the coherent integration time used to despread the GPS signal. The coherent integration time was set here to 10 milliseconds: it was verified that with this value the ratio of the correlation peak height to the out-of-peak fluctuations reached a maximum. In practical terms, an integration time of 10 milliseconds simplifies the tracking process, as this time interval is a sub-multiple of the GPS navigation bit duration (with the 50 Hz navigation code in L1). Moreover, we believe that longer correlation times provide some protection from aircraft multipath by mimicking a higher gain antenna (a belief supported by tests with shorter integration times).

Figure 1: Flight trajectory.

Figure 2: Simplified block diagram of the GNSS-R tracking concept.
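To make the despreading step concrete, the following minimal sketch performs one 10 ms coherent correlation of sampled IF data against a code replica, implemented as a circular cross-correlation in the frequency domain. The sampling rate and integration time are those quoted above; the random chip stream, the zero Doppler estimate and the synthetic signal are illustrative stand-ins (a real processor would use the actual C/A Gold-code generator and the Doppler and delay settings provided by the master unit).

```python
import numpy as np

FS = 20.456e6        # IF sampling rate used in the experiment (Hz)
T_COH = 10e-3        # coherent integration time (s)
N = int(FS * T_COH)  # samples per coherent correlation

def despread(if_samples, replica, doppler_hz):
    """One coherent correlation: counter-rotate the estimated Doppler,
    then circularly cross-correlate with the code replica via FFTs.
    Returns one complex correlation value per lag."""
    t = np.arange(N) / FS
    baseband = if_samples * np.exp(-2j * np.pi * doppler_hz * t)
    return np.fft.ifft(np.fft.fft(baseband) * np.conj(np.fft.fft(replica))) / N

# Illustrative stand-in: a random +/-1 chip stream in place of the C/A code.
rng = np.random.default_rng(0)
chips = rng.choice([-1.0, 1.0], size=1023)
replica = np.repeat(chips, -(-N // 1023))[:N]  # chips sampled at FS (approximate)
true_lag = 1234                                # true delay, in samples
signal = np.roll(replica, true_lag) + 0.5 * rng.standard_normal(N)
waveform = np.abs(despread(signal, replica, doppler_hz=0.0))
print("correlation peak at lag", int(np.argmax(waveform)))  # ~1234
```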
## 3 Retracking the waveforms

Once a correlation waveform is obtained for both the direct and reflected signals, the lapses can be estimated. We emphasize that this is not as trivial as taking the maximum sample of each waveform or the waveform centroid, for instance, as the bistatic reflection process severely deforms the signals and, in general, such distortions will displace the waveform maximum or centroid location. Moreover, the sampling rate of 20.456 MHz, while much higher than the Nyquist rate for the C/A code, is equivalent to an inter-sample spacing of 49 ns, about 15 light-meters. This sampling is too coarse to reach the final altimetric precision target. The main challenge for GNSS-R using the GPS Coarse Acquisition (C/A) code is to provide sub-decimetric altimetry using a 300 m equivalent pulse length, something that can only be achieved by intense averaging with due care for systematic effects. For reference, pulse lengths in monostatic altimeters such as Jason-1 are almost three orders of magnitude shorter.

For these reasons, the temporal position of each waveform (the _delay_ parameter) is estimated via a Least Mean Squares model fitting procedure. This is the so-called _retracking_ process, an otherwise well-known concept in the monostatic altimetric community. The implementation of accurate waveform models (for direct and reflected signals) is fundamental to retracking. Conceptually, a retracking waveform model allows for the transformation of the reflected waveform to an equivalent direct one (or vice-versa), and a coherent and meaningful comparison of direct and reflected waveforms for the lapse estimation.

### Waveform model

The natural model for retracking of the direct signal waveforms is the mean autocorrelation of the GPS C/A code in the presence of additive Gaussian noise, which accounts mainly for the receiver noise. As far as the reflected signal is concerned, the model is not so straightforward. In fact, the reflection process induces modifications on the GNSS signals which depend on sea surface conditions (directional roughness), receiver-emitter-surface kinematics and geometry, and antenna pattern. Among all these quantities, the least known ones are those related to the sea surface conditions. In principle, these quantities should be considered as free parameters in the model for the reflected signal waveform and estimated during the retracking process along with the delay parameter of the waveform.

On the basis of the two most quoted models in the literature for bistatic sea-surface scattering ([14] and [21]), we have developed an upgraded waveform model for the reflected signal. This new model, as the two aforementioned, is based on the Geometric Optics approximation of the Kirchhoff theory, that is to say, with a tangent plane, high frequency and large elevation assumption, which is reasonable for the quasi-specular sea-surface scattering at L-band ([17]). The main characteristics of this model are the following:

* a fully bistatic geometry (as in [21], but not in [14]),
* the description of the sea surface through the L-band Directional Mean Square Slope§ (DMSS\({}_{L}\)) (as in [21]), and
* the use of a fourth parameter, the significant wave height (SWH), to describe the sea surface (as in [14], but not in [21]).

Footnote §: See [8] for a discussion on the role of wavelength in the definition of DMSS.
We have checked that the impact of SWH mismodeling in our case is negligible, since the GPS C/A code equivalent pulse width is about 300 meters. Nonetheless, we emphasize that all potential sources of systematic effects must be considered. We foresee a higher and non-negligible impact of SWH if the GPS P-code (the Precision code) or similar codes in Galileo are used, since they are an order of magnitude shorter.

### Inversion scheme

The retracking process has been performed through a Least Mean Squares fit of the waveforms with the models described. Because of the speckle noise affecting the reflected signal waveforms, the fit has not been performed on each complex waveform obtained from the 10 ms correlations. Rather, these waveforms have first been incoherently averaged: the average of the magnitude of a hundred 10 ms waveforms has then been used to perform the inversion, i.e., 1-second incoherently averaged real waveforms have been generated for retracking. In this way, reflected/direct temporal lapses have been produced at a 1 Hz rate.

In both cases, the fit of the waveform has been performed over three parameters: the time delay, a scaling factor and the out-of-the-peak correlation mean amplitude. The geophysical parameters that enter the model of the reflected signal waveform have not been jointly estimated here. These parameters have been set to reasonable _a priori_ values obtained from other sources of information (Jason-1 for wind speed, ECMWF for wind direction) or from theory (for the sea slope PDF isotropy coefficient). For convenience, we describe the sea surface state using a wind speed parameter, wind direction and the sea slope PDF isotropy coefficient. Using a spectrum model, these parameters can be uniquely related to DMSS\({}_{L}\) under the assumption of a mature, wind-driven sea (the sea spectrum in [5] has been used in this case). We emphasize that DMSS\({}_{L}\) is the actual parameter set needed in the reflection model under the Geometric Optics approximation. Concerning the reflected signal waveform, retracking has been performed using only the leading edge and a small part of the trailing edge, since the trailing edge is more sensitive to errors in the input parameters (including geophysical parameters and antenna pattern).
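As an illustration of this three-parameter fit, the sketch below retracks a direct-signal waveform with an idealized triangular C/A autocorrelation model. The function names and the synthetic waveform are ours, and the triangle is a simplified stand-in for the full direct and reflected waveform models described above.

```python
import numpy as np
from scipy.optimize import least_squares

CHIP = 1.0 / 1.023e6  # C/A chip length (s)
LAG = 48.9e-9         # inter-sample spacing quoted above (s)

def waveform_model(lags, delay, scale, grass):
    """Idealized direct-signal waveform: triangular C/A autocorrelation
    plus a constant out-of-peak ("grass") level."""
    tri = np.clip(1.0 - np.abs(lags - delay) / CHIP, 0.0, None)
    return scale * tri + grass

def retrack(lags, wf):
    """Least-squares fit of (delay, scale, grass), as in the text."""
    p0 = (lags[np.argmax(wf)], np.ptp(wf), wf.min())
    return least_squares(lambda p: waveform_model(lags, *p) - wf, p0).x

# Toy 1-second averaged waveform with a true delay of 100 ns.
rng = np.random.default_rng(1)
lags = np.arange(-40, 41) * LAG
wf = waveform_model(lags, 100e-9, 1.0, 0.1) + 0.02 * rng.standard_normal(lags.size)
delay, scale, grass = retrack(lags, wf)
print(f"retracked delay: {delay * 1e9:.1f} ns")  # sub-sample estimate, ~100 ns
```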
## 4 The altimetric algorithm

The output of the retracking process is the time series of measured lapses. The next step is finally SSH estimation. In order to solve for this quantity, we have used a differential algorithm: the classical PARIS equation (see [12]) has not been used directly. Instead, a model for the lapse over a reference surface near the local geoid has been built, and the difference of this reference lapse and the measured one has been modeled as a function of the height over the reference surface. We call this the _differential_ PARIS equation (Equation 1). We note that the aircraft INS has been used to take into account the direct-reflected antenna baseline motion, and that we have also included both dry and wet tropospheric delays in the model by using exponential models for them, with different scale heights and surface values derived from surface pressure measurements and surface tropospheric delays obtained from ground GPS and SSM/I12. Footnote 12: Special Sensor Microwave Imager, a passive microwave radiometer flown aboard the United States Defense Meteorological Satellite Program.

The _differential_ PARIS equation writes

\[\Delta_{DM}\ =\ \Delta_{D}-\Delta_{M}\ =\ 2\ \delta h\ \sin(\epsilon)+b, \tag{1}\]

where

* \(\Delta_{D}\) is the measured lapse, in meters, as estimated from the data,
* \(\Delta_{M}\) is the modeled lapse, in meters, based on an ellipsoidal model of the Earth, GPS constellation precise ephemeris, aircraft GPS/INS precise kinematic processing, and a tropospheric model,
* \(\delta h\) is the normal offset between the sea surface and the (model) ellipsoid surface,
* \(\epsilon\) is the GPS satellite elevation angle at the specular point of reflection, over the ellipsoid, and
* \(b\) is the hardware system bias.

The precision obtained after 1 second of incoherent averaging in the estimation of \(\Delta_{DM}\) using this approach is displayed in Table 1. For each PRN number, the root mean squared error of the 1-second lapse is shown (in meters). It is roughly 3 m. This noise level is as expected from C/A code bistatic ranging in our experimental setup (antenna gain, altitude, etc.) and consistent with earlier experiments.

\begin{table} \begin{tabular}{|c|c|c|} \hline PRN & Complete track & Beginning of the track \\ \hline \hline 8 & 3.5 m & 2.7 m \\ \hline 24 & 3.4 m & 2.8 m \\ \hline 27 & 2.9 m & 2.7 m \\ \hline \end{tabular} \end{table} Table 1: Precision in the estimation of the time lapses (root mean squared error of the lapses, in meters).
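As a minimal numerical illustration of Equation 1, the sketch below solves for a single height offset \(\delta h\) and bias \(b\) by linear least squares from synthetic lapse residuals. The actual processing estimates an along-track SSH profile and includes the kinematic and tropospheric corrections discussed above, so all values here are illustrative.

```python
import numpy as np

def invert_height(lapse_residual_m, elevation_deg):
    """Linear least-squares solution of the differential PARIS equation
    (Equation 1): Delta_DM = 2 * dh * sin(eps) + b, solved for (dh, b)."""
    eps = np.deg2rad(np.asarray(elevation_deg, dtype=float))
    A = np.column_stack([2.0 * np.sin(eps), np.ones_like(eps)])
    x, *_ = np.linalg.lstsq(A, np.asarray(lapse_residual_m, dtype=float), rcond=None)
    return x  # (dh, b), in meters

# Synthetic residuals: dh = 0.50 m, b = 1.20 m, 3 m single-lapse noise at 1 Hz.
rng = np.random.default_rng(2)
elev = np.linspace(35.0, 75.0, 3600)
resid = 2 * 0.50 * np.sin(np.deg2rad(elev)) + 1.20 + 3.0 * rng.standard_normal(elev.size)
dh, b = invert_height(resid, elev)
print(f"dh = {dh:+.2f} m, b = {b:+.2f} m")  # dh recovered to the decimeter level
```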
## 5 Results

The algorithm outlined in the previous section has been used with data from the three best-behaved GPS satellites. The other two visible satellites appeared to be severely affected by aircraft-induced multipath (probably due to the landing gear and wing surfaces). A SSH profile has been retrieved for each satellite, and the average value of the three profiles is shown in Figure 3 along with the Jason-1 SSH, Mean Sea Level (MSL) and one SSH value obtained with the control buoy. This solution has been obtained setting a model wind speed of 10 m/s (values provided by Jason-1 along the track vary between 9 and 13 m/s), a wind direction of 0 degrees North (values provided by ECMWF vary between 30 and -20 deg North), and a sea slope PDF isotropy coefficient equal to 0.65 (the theoretical value for a mature, wind-driven sea according to [5]). It is important to underline that the use of constant values for the geophysical parameters along the whole track (more than 120 km) induces non-linear errors on the final altimetric estimation. Nonetheless, the bias of the final solution with respect to the SSH (the error mean) is 1.9 cm while the root mean error is 10.5 cm.

Figure 3: Final altimetric result. The red line represents the average altimetric result of the three GPS satellites analyzed, filtered over 20 km (i.e., 400 seconds, the aircraft speed being about 50 m/s). The black line represents the Jason-1 SSH, while the green dashed line represents the MSL. The blue diamond is the SSH measurement obtained from the reference buoy.

## 6 Conclusion

The Eddy Experiment has validated the use of PARIS as a tool for airborne sea surface height measurements, providing both the precision and accuracy predicted by earlier experimental and theoretical work. We have observed that the use of a waveform model for the reflected signal, based on geophysical parameters describing the sea surface conditions, is essential for the accuracy of the altimetric solution, a fact which may explain earlier results in which no geophysical retracking was performed (e.g., [15]). The accuracy achieved by our algorithm is of the order of 1 decimeter, but we expect that further analysis and refinements, such as the inclusion of DMSS\({}_{L}\) parameters in the inversion procedure, will improve these numbers. Our sensitivity analysis has also shown that the altitude of this flight was not optimal for GNSS-R altimetry and made the experiment more prone to aircraft multipath problems. A higher altitude flight would lead to a smaller angular span of the reflecting area on the sea surface, thus reducing the impact of geophysical parameters, antenna pattern and aircraft multipath on the retracking process of the leading edge, making the overall altimetric solution more robust. Higher altitudes are also needed to better understand the space-borne scenario.

We would like to emphasize that GNSS-R signals can be profitably used also for scatterometric measurements (i.e., _speculometry_, from the Latin word for mirror, "speculo", see [16]). In a parallel paper ([8]), the inversion of GNSS-R Delay-Doppler Maps for sea-surface DMSS\({}_{L}\) estimation is presented for the same data set. The next step is to merge the altimetric and speculometric processing in an attempt to provide an autonomous GNSS-R complete description of the sea: topography and surface conditions.

We believe the Eddy Experiment is an important milestone on the road to a space mission. We underline that the obtained precision and accuracy are in line with earlier experiments and theoretical error budgets (see, e.g., [10]). We note that the same error budgets have been used to investigate and confirm the strong impact of space-borne GNSS-R altimetric mission data on mesoscale ocean circulation models ([18, 19]). Further analysis of existing datasets (which could be organized in a coordinated database for the benefit of the community) and future experiments at higher altitudes will continue to refine our understanding of the potential of this technique.

## Acknowledgments

This study was carried out under the ESA contract TRP ETP 137.A. We thank EADS-Astrium and all sub-contractors (Grupo de Mecanica del Vuelo, Institut d'Estudis Espacials de Catalunya, Collecte Localisation Satellites, and Institut Francais de Recherche pour l'Exploitation de la Mer) for their collaboration in the project, and the Institut Cartografic de Catalunya for flawless flight operations and aircraft GPS/INS kinematic processing. Finally, we thank Irene Rubin from CRESTech for providing us with SSM/I IWV (Integrated Water Vapor) data.

_All Starlab authors have contributed significantly; the Starlab author list has been ordered randomly._

## References

* [1] M. Caparrini. Using reflected GNSS signals to estimate surface features over wide ocean areas. Technical Report EWP 2003, ESA report, December 1998.
* [2] M. Caparrini, L. Ruffini, and G. Ruffini. GNSS-R altimetry with GPS L1 data from the Bridge 2 campaign. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab Barcelona, July 2003.
* [3] ESA Contract 14285/85/NL/PB.
* [4] E. Cardellach, G. Ruffini, D. Pino, A. Rius, A. Komjathy, and J. Garrison. Mediterranean balloon experiment: GPS reflection for wind speed retrieval from the stratosphere. _To appear in Remote Sensing of Environment_, 2003.
* [5] T. Elfouhaily, B. Chapron, K. Katsaros, and D. Vandemark. A unified directional spectrum for long and short wind-driven waves. _Journal of Geophysical Research_, 102(15):781-796, 1997.
* [6] J.L. Garrison, G. Ruffini, A. Rius, E. Cardellach, D. Masters, M.
Armatys, and V.U. Zavorotny. Preliminary results from the GPSR Mediterranean balloon experiment (GPSR-MEBEX). In _Proceedings of ERIM 2000_, Charleston, South Carolina, USA, May 2000.
* [7] J.L. Garrison, S. Katzberg, and M. Hill. Effect of sea roughness on bistatically scattered range coded signals from the GPS. _Geophysical Research Letters_, 25:2257-2260, 1998.
* [8] O. Germain, G. Ruffini, F. Soulat, M. Caparrini, B. Chapron, and P. Silvestrin. The GNSS-R Eddy Experiment II: L-band and optical speculometry for directional sea-roughness retrieval from low altitude aircraft. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab, July 2003.
* [9] A. Komjathy. GPS surface reflection using aircraft data: analysis and results. In _Proceedings of the GPS surface reflection workshop_. Goddard Space Flight Center, July 1998.
* [10] S. Lowe, J.L. LaBrecque, C. Zuffada, L.J. Romans, L. Young, and G.A. Hajj. First space-borne observation of an Earth-reflected GPS signal. _Radio Science_, 37(1):1-28, 2002.
* [11] S. Lowe, C. Zuffada, Y. Chao, P. Kroger, J.L. LaBrecque, and L.E. Young. 5-cm precision aircraft ocean altimetry using GPS reflections. _Geophysical Research Letters_, (29):4359-4362, 2002.
* [12] M. Martin-Neira. A PAssive Reflectometry and Interferometry System (PARIS): application to ocean altimetry. _ESA Journal_, 17:331-355, 1993.
* [13] M. Martin-Neira, M. Caparrini, J. Font-Rossello, S. Lannelongue, and C. Serra. The PARIS concept: An experimental demonstration of sea surface altimetry using GPS reflected signals. _IEEE Transactions on Geoscience and Remote Sensing_, 39:142-150, 2001.
* [14] G. Picardi, R. Seu, S.G. Sorge, and M. Martin-Neira. Bistatic model of ocean scattering. _IEEE Trans. Antennas and Propagation_, 46(10):1531-1541, 1998.
* [15] A. Rius, J.M. Aparicio, E. Cardellach, M. Martin-Neira, and B. Chapron. Sea surface state measured using GPS reflected signals. _Geophysical Research Letters_, 29(23):2122, 2002.
* [16] ESA ESTEC Contract No. 15083/01/NL/MM, 2001. Available online at [http://starlab.es](http://starlab.es).
* [17] F. Soulat. Sea surface remote-sensing with GNSS and sunlight reflections. _Doctoral Thesis_, Universitat Politecnica de Catalunya/Starlab, 2003.
* [18] P.Y. Le Traon, G. Dibarboure, G. Ruffini, and E. Cardellach. Mesoscale ocean altimetry requirements and impact of GPS-R measurements for ocean mesoscale circulation mapping. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab Barcelona, July 2003.
* [19] An update. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab Barcelona, July 2003.
* [20] R.N. Treuhaft, S.T. Lowe, C. Zuffada, and Y. Chao. 2-cm GPS altimetry over Crater Lake. _Geophysical Research Letters_, 22(23):4343-4346, December 2001.
* [21] V. Zavorotny and A. Voronovich. Scattering of GPS signals from the ocean with wind remote sensing application. _IEEE Trans. Geoscience and Remote Sensing_, 38(2):951-964, 2000.
We report results from the Eddy Experiment, where a synchronous GPS receiver pair was flown on an aircraft to collect sampled L1 signals and their reflections from the sea surface, in order to investigate the altimetric accuracy of GNSS-R. During the experiment, surface wind speed (U10) was of the order of 10 m/s, with significant wave heights of up to 2 m, as discussed further in a companion paper. After software tracking of the two signals through despreading of the GPS codes, a parametric waveform model containing the description of the sea surface conditions has been used to fit the waveforms (retracking) and estimate the temporal lapse between the direct GPS signals and their reflections. The estimated lapses have then been used to estimate the sea surface height (SSH) along the aircraft track using a differential geometric model. As expected, the precision of GNSS-R ranges was 3 m after 1 second of integration. More importantly, the accuracy of the GNSS-R altimetric solution with respect to Jason-1 SSH and _in situ_ GPS buoy measurements was 10 cm, which was the target for the experimental setup used. This new result confirms the potential of GNSS-R for mesoscale altimetric monitoring of the ocean, and provides an important milestone on the road to a space mission.

GNSS-R, GPS-R, altimetry, mesoscale, PARIS, waveform, retracking, bistatic.
# The GNSS-R Eddy Experiment II: L-band and Optical Speculometry for Directional Sea-Roughness Retrieval from Low Altitude Aircraft

O. Germain, G. Ruffini, F. Soulat, M. Caparrini _Starlab, C. de l'Observatori Fabra s/n, 08035 Barcelona, Spain, [http://starlab.es](http://starlab.es)_ B. Chapron _Ifremer, Technopole de Brest-Iroise BP 70, 29280 Plouzane, France, [http://ifremer.fr](http://ifremer.fr)_ P. Silvestrin _ESA/ESTEC, Keplerlaan 1, 2200 Noordwijk, The Netherlands, [http://esa.int](http://esa.int)_

## 1 Introduction

The use of Global Navigation Satellite System (GNSS) signals reflected by the sea surface as a remote sensing tool has generated considerable attention for over a decade [12, 13]. Among several applications, two classes have rapidly emerged in the community: sea-surface altimetry, which aims at retrieving the mean sea level like classical radar altimeters do, and sea-surface scatterometry or "speculometry" (see below for a justification of this neologism) for the determination of sea roughness and near-surface wind. This paper addresses the latter application.

Inferring sea roughness from GNSS-R data requires (i) a parametric description of the sea surface, (ii) an electromagnetic model for sea-surface scattering at L-band and (iii) the choice of a GNSS-R data product to be inverted. In the literature, there is quite an agreement on the first two aspects. It has been recognized that the scattering of GNSS signals can be modeled as a Geometric Optics process (GO), where the fundamental physical process is the scattering from mirror-like surface elements. This is the reason why we use the term "speculometry", from the Latin word for mirror, _speculo_. Therefore, the most important feature of the sea surface is the statistics of facet slopes at about the same scale as the electromagnetic wavelength (\(\lambda\)). This is described by the bi-dimensional slope probability density function (PDF). Under a Gaussian assumption, three parameters suffice to fully define the sea-surface slope PDF; together they form the directional mean square slope DMSS\({}_{\lambda}\), which results from the integration of the ocean energy spectrum at wavelengths larger than \(\lambda\). The symbol DMSS\({}_{\lambda}\) encompasses the three parameters defining the ellipsoidal shape of the slope PDF: scale (total MSS), direction (slope PDF azimuth) and isotropy (slope PDF isotropy). The GNSS-R scattering model proposed by Zavorotny and Voronovich in [17] is based on GO and is, to date, the reference model for the GNSS-R community.

While for the purposes of specular scattering the sea-surface roughness is parametrized by the directional mean square slope in a direct manner, DMSS\({}_{\lambda}\) is rarely emphasized as the geophysical parameter of interest. Instead, most authors prefer to link sea roughness to the near-surface wind vector, which is thought to be more useful for oceanographic and meteorological users, but this link can be misleading. Indeed, it requires an additional modeling layer and is an extra source of error. For instance, a wind-driven sea spectrum is not suitable for inferring sea-surface DMSS\({}_{\lambda}\) when swell is present or the sea is not fully developed. The connection between DMSS\({}_{\lambda}\) and wind is thus modulated by other factors (e.g., swell, fetch and degree of maturity).

Usually, for technical reasons, the product inverted in GNSS-R speculometry is a simple Delay Waveform, that is, a 1D delay map of the reflected signal amplitude.
Using a single GNSS emitter, the wind speed can be inferred assuming an isotropic slope PDF (i.e., the PDF's shape is a circle) [8, 3, 11]. Attempts have also been made to measure the wind direction by fixing the PDF isotropy to some theoretical value (around 0.7) and using at least two satellite reflections with different azimuths [18, 9]. As investigated in the frame of the ESA OPPSCAT project (see [1] and [14]), it is nonetheless possible to work on a product of higher information content: the Delay-Doppler Map (DDM), a 2D delay-Doppler map of the reflected signal amplitude. The provision of an extra dimension opens the possibility to perform the full estimation of DMSS\({}_{\lambda}\). In [7], Elfouhaily _et al._ developed a rapid but sub-optimal method based on the moments of the DDM to estimate the full DMSS\({}_{\lambda}\): this approach neglects the impact of the bistatic Woodward Ambiguity Function modulation of the delay-Doppler return.

The present paper was motivated by a recent experiment conducted by Starlab for the demonstration of GNSS-R altimetry. The altimetric aspects are reported elsewhere [15]. We note that the configuration of the flight was not optimized for speculometry: from 1000 m altitude, the sea-surface reflective area is essentially limited by the PRN C/A code, and the glistening zone is coarsely delay-Doppler mapped. In addition to the GNSS-R experiment, high-resolution optical photographs of sun glitter were also taken, providing the SORES dataset (SOlar REflectance Speculometer). Since the classic study of Cox and Munk [4], it is well known that sea-surface DMSS\({}_{Opt}\) can be inferred from such data. The availability of optical photographs thus provided us with an extra source of colocated information. Because the products (the DDM for GNSS-R and the Tilt Azimuth Map, TAM, for SORES) and the models (GO in both cases) are strongly similar, the same inversion methodology can be applied to both datasets.

The goal of the paper is to investigate the full exploitation of the bidimensional GNSS-R DDM and SORES TAM products to infer the set of three DMSS\({}_{\lambda}\) parameters. The driver of the study has been the exhaustive exploitation of the information contained in those 2D products. Consequently, the proposed approach relies on a least-squares fit of speculometric models to the datasets. We first describe in detail the collected datasets and the associated pre-processing. Then, we present the speculometric models used to fit the data, together with the inversion scheme. Finally, we provide the estimation results and discuss their coherence with other sources of data.

## 2 Dataset collection and pre-processing

The campaign took place on Friday, September 27th, 2002, around 10:00 UTC, along the Catalan coast from Barcelona (Spain) up to the French border. An aircraft overflew the Jason-1 track 187 at 1000 m along 150 km and gathered 1.5 hours of GPS-R raw signals (see [16] for more details). Since it would have been computationally too expensive to process the full track, it was divided into 46 10-second arcs (each spanning roughly 500 meters), sampled every 50 s (see Figure 1). The first arc started at GPS Second Of the Week 468008.63. The aircraft kinematic parameters were kept close to the nominal values specified in the mission plan: altitude 1000 m, speed 50 m/s and heading 30\({}^{\circ}\) from North. We have selected three GPS satellites in optimal view during the experiment; their elevation and azimuth are given in Figure 2.

Figure 1: Map of the aircraft track divided into 46 10-second arcs.

Figure 2: Elevation and azimuth of the three GPS satellites in view during the 46 10-second arcs.
The raw GNSS-R data were recorded using the GPS reflection equipment provided by ESA. Specifically, the GPS direct and reflected signals were 1-bit sampled and stored at a rate of 20.456 Mbit/s. The pre-processing step consisted in performing a delay-Doppler Pseudo Random Noise (PRN) code despreading to coherently detect the direct signal (from the GPS emitter) and the reflected signal (scattered by the sea surface). We used the Starlab in-house software to produce three DDM time-series (one per PRN), sampled into 46 arcs of 10 seconds each. The general processing strategy was to track the delay-Doppler of the direct signal and then compute DDMs for both direct and reflected signals. These DDMs actually represent the filtered electromagnetic field of the incoming signals, as processed with delay-Doppler values slightly different from those corresponding to the specular point. The coherent integration time was set to 20 ms to ensure a Doppler resolution of 50 Hz. The delay map spanned 80 correlation lags (i.e., +/- 1.95 \(\mu\)s) with a lag step of 48.9 ns, while the Doppler range spanned -200 Hz to 200 Hz with a step of 20 Hz. Incoherent averaging was applied to each arc (the accumulation time was set to 10 s). This process aimed at reducing both thermal and speckle noise by a factor of \(\sqrt{500}\). At the end, the GNSS-R product for one PRN and one arc was an average amplitude delay-Doppler map of size 81\(\times\)21.

The SORES photographs were taken from time to time along the track, when the roll, pitch and yaw angles of the plane were negligible. The camera was a Leica dedicated to aerial photography. An inertial system (by Applanix) provided the time-tagged position for each snapshot. The film was a panchromatic Aviphot Pan 80. The focal length was 15.2 cm, and the photographic plate had an area of 23\(\times\)23 cm\({}^{2}\). The aperture angle was consequently 74.2\({}^{\circ}\). The observed area was a square of 1.124\(\times\)1.124 km\({}^{2}\). The exposure time was fixed to 1/380 s and the aperture to f/4. The silver photographs were scanned, and the digital images were averaged down to 400\(\times\)400 pixels in order to be easily processed.
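Returning to the GNSS-R processing chain just described, the incoherent averaging step admits a very compact sketch: one 10 s arc at 20 ms coherent integration yields 500 complex looks of an 81×21 map, whose magnitudes are averaged. The pure-noise input below is an illustrative stand-in for real correlator output.

```python
import numpy as np

def average_ddm(looks):
    """Incoherent averaging: mean magnitude over a stack of complex DDM
    looks, reducing speckle/thermal fluctuations by roughly sqrt(N)."""
    return np.mean(np.abs(looks), axis=0)

# One 10 s arc at 20 ms coherent integration -> 500 looks of an 81x21 map.
rng = np.random.default_rng(3)
looks = rng.standard_normal((500, 81, 21)) + 1j * rng.standard_normal((500, 81, 21))
ddm = average_ddm(looks)
# Residual pixel-to-pixel fluctuation is ~1/sqrt(500) of a single look's.
print(ddm.shape, ddm.std() / np.abs(looks[0]).std())
```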
The \\(K\\) matrix is then obtained via a simple rotation, \\[K=R_{\\psi}\\cdot\\left[\\begin{array}{cc}mss_{u}&0\\\\ 0&mss_{c}\\end{array}\\right]\\cdot R_{-\\psi}, \\tag{2}\\] where \\(R_{\\psi}\\) is the usual rotation matrix of angle \\(\\psi\\), the angle between the \\(x\\)-axis and the slope principal axis. Thus, \\(mss_{u}\\), \\(mss_{c}\\) and \\(\\psi\\) are the three geophysical parameters we wish to estimate. They can be thought of as the three parameters of an ellipse (see Figure 3) representing the slope PDF mapped on sea-surface. In the following, we will consider the equivalent set of parameters: * Total MSS, defined as: \\(2\\sqrt{mss_{u}.mss_{c}}\\). This magnitude is actually proportional to ellipse area and can be interpreted in terms of wind speed, based on a particular wind-driven sea-surface spectrum like the Elfouhaily's spectrum [6]. * Slope PDF azimuth (SPA), defined as the direction of semi-major axis with respect toNorth. As shown by Figure 3, this angle is \\(\\pi+\\Phi-\\psi\\), if \\(\\Phi\\) i s the satellite azimuth from North. * Slope PDF isotropy (SPI), defined as \\(mss_{c}/mss_{u}\\). When SPI=1, the slope PDF is isotropic and the glistening zone is circular. Low values of SPI indicate a highly directive PDF. Typically, SPI is expected to be around 0.65 for well developed sea-surface. ### GNSS-R speculometric model The classical GNSS-R bistatic radar equation [17] links the average GNSS-R power return to sea-surface slope PDF. Processing the raw signal with various delay-Doppler values \\((\\tau,f)\\), a DDM is computed whose theoretical expression is: \\[P(\\tau,f)=\\int dxdy\\,\\frac{G_{r}}{R_{t}^{2}R_{r}^{2}}\\cdot \\sigma^{0}\\cdot\\\\ \\chi^{2}\\left(\\tau_{m}-\\tau_{c}-\\tau,f_{m}-f_{c}-f\\right) \\tag{3}\\] where \\(G_{r}\\) is the receiver antenna pattern, \\(R_{t}\\) and \\(R_{r}\\) are the distances from generic point on sea-surface to transmitter and receiver, \\(\\sigma^{0}\\) is the reflectivity, \\(\\chi\\) is the Woodward Ambiguity Function (WAF, see [13]), \\(\\tau_{m}(x,y)\\) and \\(f_{m}(x,y)\\) are the delay-Doppler mapping on sea-surface and \\(\\tau_{c}\\) and \\(f_{c}\\) are delay-Doppler centers. To first order, the reflectivity is proportional to the slope PDF: \\[\\sigma^{0}=\\pi|\\mathcal{R}|^{2}\\frac{q^{4}}{q_{z}^{4}}\\ \\mathcal{P}\\left(\\frac{-q_{x}}{q_{z}},\\frac{-q_{y}}{q_{z}}\\right), \\tag{4}\\] where \\((q_{x},q_{y},q_{z})\\) is the scattering vector and \\(|\\mathcal{R}|^{2}=0.65\\) is the specular Fresnel coefficient. The presence of thermal noise biases the value of average power return. Hence, the average amplitude of the DDM can by modeled by \\[A(\\tau,f)=\\sqrt{\\alpha.P(\\tau,f)+b}, \\tag{5}\\] where \\(b\\) stands for the bias in power. In particular, this effect is visible in the early-delay domain of the DDM: for delays lower than one-chip, the DDM amplitude has a no null value, often called \"grass level\". As we do not have a calibrated model an overall scaling parameter \\(\\alpha\\) is also needed in the model. To sum up, the model features three parameters of interest and four \"nuisance parameters\": * the DMSS\\({}_{\\lambda}\\), characterizing the Gaussian slope PDF: total MSS, isotropy (SPI) and azimuth (SPA), * the DDM delay-Doppler centers: \\(\\tau_{c}\\) and \\(f_{c}\\), * overall scaling parameter: \\(\\alpha\\), * grass level: \\(b\\). Other parameters required to run the forward model are recalled in Table 1. 
### SORES speculometric model

To date, results derived from the glitter pattern of reflected sunlight as photographed by Cox and Munk in 1951 remain the most reliable direct measurements of wind-dependent slope statistics. As explained in their well-documented report [5], the sea surface can be gridded with a Tilt (\(\beta\)) Azimuth (\(\alpha\)) Mapping of the small facet slopes. These are just a polar parametrization of the (\(s_{x}\),\(s_{y}\)) slopes discussed in the previous section:

\[\left\{\begin{array}{rcl}s_{x}&=&\cos\alpha\cdot\tan\beta,\\ s_{y}&=&\sin\alpha\cdot\tan\beta.\end{array}\right. \tag{6}\]

The link between the small facet slope PDF \(\mathcal{P}\) and the intensity in the photograph \(I_{m}\) is given by

\[I_{m}(\alpha,\beta)=A_{0}\cdot\mathcal{P}(\alpha,\beta)\cdot f(\alpha,\beta,\phi)+K\cdot I_{b}(\alpha,\beta), \tag{7}\]

where \(I_{b}(\alpha,\beta)\) is the intensity of the picture background (i.e., far from the sun glint), \(K\) and \(A_{0}\) are multiplicative constants and \(f\) is a transfer function,

\[f(\alpha,\beta,\phi)=\frac{\rho(1-\rho)^{3}\sin\phi\cos^{3}\mu}{\cos^{3}\beta\cos\omega}, \tag{8}\]

with \(\phi\) the sun elevation, \(\rho\) the coefficient of reflection and \(\mu\), \(\omega\) two angles shown in Figure 4. The pixel intensity on the image comes principally from the additive contribution of sunlight and reflected skylight. The sunlight scattered by particles beneath the sea surface is assumed negligible and is not considered here.

A model has been developed to remove reflections of sky radiance from the glint. The approach consists in considering each sea-surface facet specular because, for a given location of the receiver, there always exists a "source" in the sky satisfying the specular reflection condition. Consider the cell \((\alpha_{i},\beta_{i})\) of the TAM. It corresponds to the slope components required to reflect the solar rays onto the camera. The radiance of the sea surface due to reflected skylight in the cell \((\alpha_{i},\beta_{i})\) can thus be simply modeled by the integration of intensity over all the azimuths \(\alpha\) and tilts \(\beta\) except the azimuth \(\alpha_{i}\) and the tilt \(\beta_{i}\) of the corresponding cell.

Figure 4: Geometry of the SORES experiment.

Figure 5: Tilt Azimuth Mapping overlaid on a SORES photograph.

### Inversion scheme

Inversion was performed through a minimization of the root mean square difference between model and data products (i.e., DDMs for GNSS-R and TAMs for SORES). Numerical optimization was carried out with a steepest-slope-descent algorithm (Levenberg-Marquardt type adjustment). The three DMSS\({}_{\lambda}\) as well as the nuisance parameters (\(\tau_{c}\), \(f_{c}\) and \(\alpha\) for the DDMs; \(A_{0}\) and \(K\) for the TAMs) were jointly estimated in an iterative manner: the nuisance parameters (as a first step) and the DMSS\({}_{\lambda}\) (as a second step) were successively optimized, repeating this two-step sequence until convergence. Figure 6 gives qualitative examples of fit results for DDM and TAM.
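A minimal sketch of this alternating two-step adjustment is given below, assuming a generic forward model `model(geo, nui)` returning an array shaped like the data. The function names and the toy separable model are ours; a real implementation would plug in the DDM model of Eqs. (3)-(5) or the TAM model of Eqs. (7)-(8).

```python
import numpy as np
from scipy.optimize import least_squares

def fit_two_step(data, model, geo0, nui0, n_iter=10):
    """Alternate Levenberg-Marquardt fits: nuisance parameters first,
    then the (DMSS) parameters of interest, repeated until convergence."""
    geo, nui = np.asarray(geo0, float), np.asarray(nui0, float)
    for _ in range(n_iter):
        nui = least_squares(lambda p: (model(geo, p) - data).ravel(),
                            nui, method='lm').x
        geo = least_squares(lambda p: (model(p, nui) - data).ravel(),
                            geo, method='lm').x
    return geo, nui

# Toy usage with a trivially separable "model" (illustrative only):
grid = np.linspace(0.0, 1.0, 81 * 21).reshape(81, 21)
model = lambda geo, nui: nui[0] * np.exp(-geo[0] * grid) + nui[1]
truth = model([2.0], [1.5, 0.1])
geo, nui = fit_two_step(truth, model, geo0=[1.0], nui0=[1.0, 0.0])
print(geo, nui)  # converges to ~[2.0] and ~[1.5, 0.1]
```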
DMSS\\({}_{\\lambda}\\) as well as nuisance parameters (\\(r_{c}\\), \\(f_{c}\\) and \\(\\alpha\\) for the DDMs and \\(A_{0}\\) and \\(K\\) for the TAMs) were jointly estimated in an iterative manner: nuisance parameters (as a first step) and DMSS\\({}_{\\lambda}\\) (as a second step) were successively optimized, repeating this two-step sequence until convergence. Figure 6 gives qualitative examples of fit results for DDM and TAM. ## 4 Results and discussion Figure 7 shows total MSS, Slope PDF Azimuth and Slope PDF Isotropy along the aircraft track between latitudes 41.2\\({}^{o}\\) and 42.2\\({}^{o}\\), as estimated by SORES and GNSS-R. Other sources of information are also shown for comparison: * total MSS for C- and Ku- bands, derived from the Jason-1 \\(\\sigma^{o}\\) measurements at 1 Hz sampling (7 km) and 20 km resolution, * wind direction provided by the ECMWF numerical weather model, with accuracy of 20\\({}^{o}\\), and * swell direction derived from a spectral analysis of SORES images (the periodic pattern of long waves is indeed clearly observed on the photographs). ### Total MSS The total MSS has been plotted in log-scale in order to compare different frequency measurements more easily. The common trend for all bands is the increase of slope variance with latitude until a relative plateau is reached. Measurements of PRN08 and 24 show good agreement while PRN10 seems to be somewhat up-shifted. As expected, we observe that the level and dynamic of MSS decrease with longer wavelength: Optical, Ku, C and L band, in this order. Nevertheless, the level and dynamic range of GNSS-R plots (especially PRN10) seem a bit large for L-band measurements, when compared to C-band. Note however that Jason-1's MSS have been obtained through the relationship \\(MSS=\\kappa/\\sigma^{o}\\), \\(\\kappa\\) being an empirical parameter accounting for calibration offsets. Unfortunately, the uncertainty on \\(\\kappa\\) makes the absolute levels of Jason-1's plots very doubtful. Here, as an illustration purpose, we have set \\(\\kappa\\)=0.45 and \\(\\kappa\\)=0.95 for C- and Ku-band respectively. ### Slope PDF Azimuth Using a single DDM, the estimation of SPA is degenerate in two particular cases: when the transmitter is at zenith or when the receiver moves torwards the transmitter [2]. In these two cases, the Delay-Doppler lines that map the glistening zone are symmetric around the receiver direction. Hence, one cannot distinguish between a slope PDF and its mirror image about the receiver direction. Here, PRN08 has its elevation comprised between 74 and 83 degrees. It is then very likely that the SPA estimated for this PRN is degenerate. For this reason, we have added on the plot the mirror image of the SPA about the receiver direction (30\\({}^{o}\\)). We also note that the azimuth of PRN10 decreases down to 230\\({}^{o}\\) at the end of the track, quite close from 210\\({}^{o}\\), the complementary of the receiver's direction. According to ECMWF data and SORES spectral analysis, wind and swell were slightly misaligned. PRN08 (or its mirror image) matches very well the swell direction and so does PRN10 along most of the track. This result underlines the fact that GNSS-R is not sensitive to wind only and that swell has a strong impact too. PRN24 has a different behaviour, in line with SORES. These two measurements agree relatively better with wind direction, although a discrepancy of 30\\({}^{o}\\) is observed at the beginning of the track. 
### Slope PDF Isotropy

It is worth remembering that Elfouhaily's wind-driven spectrum predicts an SPI value of 0.65, hardly sensitive to wind speed. Here we note that the SPI varies quite significantly along the track for both GNSS-R and SORES. The important departure observed from the 0.65 nominal value is probably a signature of an under-developed sea and the presence of strong swell. Further research should be undertaken in order to better understand the potential information contained in this product.

### Link to wind speed

In Figure 8, we have plotted the estimated total MSS versus Jason-1's wind speed together with two models:

* Elfouhaily's sea-height spectrum, integrated for different cut-off wavelengths, and
* an empirical model proposed by Katzberg for L-band, based on a modification of Cox and Munk's relationship: MSS \(=0.9\times 10^{-3}\sqrt{9.48U+6.07U^{2}}\), \(U\) being wind speed (private communication with J.L. Garrison, Purdue University).

We see that both the SORES and GNSS-R estimations follow the trend of Elfouhaily's model (MSS obtained by integrating the spectrum with the usual cut-off of 3 times the wavelength) but give higher values of MSS (20 to 40% up-shifted). Actually, we have found that the MSS estimates of PRN08 and PRN24 are very well fitted by Elfouhaily's spectrum with a cut-off of one wavelength only. The 20% discrepancy can be explained by a strong sea state, with a SWH twice as high as the one observed during Cox and Munk's experiment (almost 2 m compared to 1 m). At any rate, these results indicate that the wind-MSS link is not straightforward and that DMSS\({}_{\lambda}\) should be considered as a self-standing product for oceanographic users.
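For reference, the empirical relationship quoted above is easy to evaluate. The sketch below assumes, as is conventional for such formulas, that the wind speed is given in m/s.

```python
import numpy as np

def katzberg_mss(u):
    """Katzberg's empirical L-band total MSS versus wind speed U (m/s),
    as quoted in the text (modified Cox and Munk relationship)."""
    u = np.asarray(u, dtype=float)
    return 0.9e-3 * np.sqrt(9.48 * u + 6.07 * u**2)

# Wind speeds spanning (and below) the 9-13 m/s range observed on the track:
for u in (5.0, 9.0, 13.0):
    print(f"U = {u:4.1f} m/s -> MSS = {katzberg_mss(u):.4f}")
```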
## 5 Conclusion

We have reported the first inversion of full GNSS-R Delay-Doppler Maps for the retrieval of the sea-surface directional mean square slope, DMSS\({}_{L}\). In addition, we have presented a repetition of the Cox and Munk experiment for DMSS\({}_{Opt}\) retrieval, through inversion of the Tilt Azimuth Map of sun-glitter optical photographs. Our results show that both the optical and L-band total MSS are 20% higher than predicted using Elfouhaily's model for the observed wind speed (9 to 13 m/s). The SPA estimated by GNSS-R matches the swell direction with good accuracy for at least 2 out of 3 PRNs. A new geophysical product has been discussed: the slope PDF isotropy, which can be related to wind/wave misalignment as well as to the degree of sea development.

The analysis highlighted the important impact of unmodeled sea-state parameters (such as swell), in addition to wind stress, on the measured DMSS\({}_{\lambda}\). Since speculometry is sensitive to slope processes over a wide range of scales, the link between DMSS\({}_{\lambda}\) and wind is not straightforward: the total MSS and the SPA are definitely affected by swell. Quantitatively, the 20% bias observed in the SORES results can be explained by the impact of swell on the elevation spectrum. Consequently, DMSS\({}_{\lambda}\) can and should be studied as an independent parameter, of independent geophysical value. We note however that the use of several wavelengths could in principle allow inversion for all the parameters modulating the elevation spectrum, a line of future work. Let us finally emphasize that the flight was not optimized for speculometry (1000 m altitude, 50 m/s speed) and that higher/faster flights are needed in the future in order to consolidate the concept of DDM inversion for DMSS\({}_{\lambda}\) estimation.

## Acknowledgements

This study was carried out under ESA contract 10120/01/NL/SF. The dataset was collected in the frame of ESA contract TRP ETP 137.A. We acknowledge all partners of the consortium (EADS-Astrium, Grupo de Mecanica del Vuelo, Institut d'Estudis Espacials de Catalunya and Institut Cartografic de Catalunya) for their contribution.

_All Starlab authors have contributed significantly; the Starlab author list has been ordered randomly._

## References

* [1] GNSS-OPPSCAT, Utilization of scatterometry using sources of opportunity. Technical Report, ESA Contract 13461/99/NL/GD, 2000.
* [2] ESA Contract 13461/99/NL/GD.
* [3] E. Cardellach, G. Ruffini, D. Pino, A. Rius, A. Komjathy, and J. Garrison. Mediterranean balloon experiment: GPS reflection for wind speed retrieval from the stratosphere. _To appear in Remote Sensing of Environment_, 2003.
* [4] C. Cox and W. Munk. Measurement of the roughness of the sea surface from photographs of the sun's glitter. _Journal of the Optical Society of America_, 44:838-850, 1954.
* [5] C. Cox and W. Munk. Slopes of the sea surface deduced from photographs of sun glitter. _Bull. Scripps Inst. Ocean._, 6:401-488, 1956.
* [6] T. Elfouhaily, B. Chapron, K. Katsaros, and D. Vandemark. A unified directional spectrum for long and short wind-driven waves. _Journal of Geophysical Research_, 102(15):781-796, 1997.
* [7] T. Elfouhaily, D. Thompson, and L. Linstrom. Delay-Doppler analysis of bistatically reflected signals from the ocean surface: Theory and application. _IEEE Transactions on Geoscience and Remote Sensing_, 40(3), 2002.
* [8] J.L. Garrison. Wind speed measurement using forward scattered GPS signals. _IEEE Trans. Geoscience and Remote Sensing_, 40:50-65, 2002.
* [9] J.L. Garrison. Anisotropy in reflected GPS measurements of ocean winds. In _Proc. IEEE IGARSS, Toulouse, France_, 2003.
* [10] J. Gourrion, D. Vandemark, S. Bailey, B. Chapron, C. Gommenginger, P.G. Challenor, and M.A. Srokosz. A two-parameter wind speed algorithm for Ku-band altimeters. _J. Atmos. Oceanic Tech._, 19:2030-2048, 2002.
* [11] A. Komjathy, V.U. Zavorotny, P. Axelrad, G.H. Born, and J.L. Garrison. GPS signal scattering from sea surface: Wind speed retrieval using experimental data and theoretical model. _Remote Sensing of Environment_, 73:162-174, 2000.
* [12] M. Martin-Neira. A PAssive Reflectometry and Interferometry System (PARIS): application to ocean altimetry. _ESA Journal_, 17:331-355, 1993.
* [13] ESA ESTEC Contract No. 15083/01/NL/MM, 2001. Available online at [http://starlab.es](http://starlab.es).
* [14] G. Ruffini, J.L. Garrison, E. Cardellach, A. Rius, M. Armatys, and D. Masters. Inversion of GPS-R delay-Doppler mapping waveforms for wind retrieval. In _Proc. IEEE IGARSS, Honolulu, HA_, 2000.
* [15] G. Ruffini, F. Soulat, M. Caparrini, and O. Germain. The GNSS-R Eddy Experiment I: altimetry from low altitude aircraft. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab, July 2003.
* [16] F. Soulat. Sea surface remote-sensing with GNSS and sunlight reflections. _Doctoral Thesis_, Universitat Politecnica de Catalunya/Starlab, 2003.
* [17] V. Zavorotny and A. Voronovich. Scattering of GPS signals from the ocean with wind remote sensing application. _IEEE Trans. Geoscience and Remote Sensing_, 38(2):951-964, 2000.
* [18] C. Zuffada and T. Elfouhaily. Determining wind speed and direction with ocean reflected GPS signals. In _Proceedings of the Sixth Int. Conf. on Remote Sensing for Marine and Coastal Environments, Charleston_, 2000.
Figure 6: Examples of data products and their best-fit models. **First column:** GNSS-R Delay-Doppler Map (PRN08, arc 01). **Second column:** SORES Tilt-Azimuth Map (photograph 41). **First row:** Data. **Second row:** Best-fit model. **Third row:** Data-model residual.

Figure 7: DMSS\({}_{\lambda}\) estimated along the aircraft track. **First row:** Total MSS (in dB). **Second row:** Slope PDF Azimuth. **Third row:** Slope PDF Isotropy.

Figure 8: Total MSS versus Jason-1's wind speed.
We report on the retrieval of directional sea roughness (the full directional mean square slope, including MSS, direction and isotropy) through inversion of Global Navigation Satellite System Reflections (GNSS-R) and SOlar REflectance Speculometry (SORES) data collected during an experimental flight at 1000 m. The emphasis is on the utilization of the entire Delay-Doppler Map (for GNSS-R) or Tilt Azimuth Map (for SORES) in order to infer these directional parameters. The obtained estimates are analyzed and compared to Jason-1 measurements and the ECMWF numerical weather model.

GNSS-R, GPS, Galileo, Speculometry, SORES, sea roughness, Directional Mean Square Slope, Delay-Doppler Map.
# PARFAIT: GNSS-R coastal altimetry

M. Caparrini, L. Ruffini, G. Ruffini _Starlab, C. de l'Observatori Fabra s/n, 08035 Barcelona, Spain, [http://starlab.es](http://starlab.es)_

## 1 Introduction

Specular reflections dominate medium-to-short wavelength electromagnetic forward scattering on the ocean, examples of which include GNSS and solar reflections. As reported in [23], during the last decade many GPS-R (Global Positioning System Reflections) experimental campaigns have been successfully carried out. A partial and surely incomplete list is provided in Table 1. In this paper we focus on the potential of GNSS-R (Global Navigation Satellite System Reflections) for altimetric coastal applications. The techniques developed, however, can also be implemented in other scenarios (airborne, spaceborne).

The specularly scattered field is composed of a coherent component and a random, Hoyt-distributed component [2]. When the surface is very rough, the latter becomes incoherent and the former becomes very small. In fact, if the surface height distribution is normal with deviation \(\sigma_{\zeta}\), then

\[\langle r^{2}\rangle\sim n^{2}e^{-(4\pi\sigma_{\zeta}\cos\theta/\lambda)^{2}}+n\left(1-e^{-(4\pi\sigma_{\zeta}\cos\theta/\lambda)^{2}}\right), \tag{1}\]

where \(\langle r^{2}\rangle\) is the average power, \(n\) is the number of scatterers, \(\lambda\) the EM wavelength and \(\theta\) the local incidence angle [23, 24].
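Equation 1 makes the rapid loss of coherence with roughness explicit: the coherent term is attenuated by \(e^{-(4\pi\sigma_{\zeta}\cos\theta/\lambda)^{2}}\). The short sketch below evaluates this attenuation at GPS L1 for a few illustrative height deviations and an assumed incidence angle.

```python
import numpy as np

WAVELENGTH = 0.1904        # GPS L1 wavelength (m)
theta = np.deg2rad(30.0)   # local incidence angle (illustrative)

# Attenuation of the coherent power term in Eq. (1).
for sigma_z in (0.005, 0.01, 0.02, 0.05):  # surface height std (m)
    att = np.exp(-(4 * np.pi * sigma_z * np.cos(theta) / WAVELENGTH) ** 2)
    print(f"sigma_zeta = {sigma_z * 100:4.1f} cm -> coherent attenuation {att:.3e}")
```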
\[S_{d}(t)=A_{d}\cdot C(t)\cdot D(t)\cdot e^{i(\omega_{L1}+\omega_{d})t}+\eta_{d}, \tag{2}\]

where \(A_{d}\) is the direct signal amplitude, \(C(t)\) represents the C/A code, \(D(t)\) the navigation code, \(\omega_{L1}\) the L1 carrier frequency, \(\omega_{d}\) the Doppler frequency offset, and \(\eta_{d}\) (thermal) noise. The reflected signal at low altitudes can be modelled by

\[S_{r}(t)=C(t)\cdot D(t)\cdot e^{i(\omega_{L1}+\omega_{d})t}\cdot\Big{(}A_{r}\cdot e^{2\pi i\mathcal{L}/\lambda}+O(t)\Big{)}+\eta_{r}, \tag{3}\]

where \(A_{r}\) is the reflected signal mean amplitude, \(O(t)\) represents the perturbation due to ocean motion and \(\mathcal{L}\) the reflected signal extra path length. In coastal applications \(O(t)\) is a relatively slowly varying quantity with zero mean, while \(\mathcal{L}\), which contains the geophysical tide signal, can be considered effectively frozen during correlation processing. After modulation with a local oscillator of frequency \(\omega_{L1}-\omega_{IF}\) and low-pass filtering, the signal will have a residual carrier at \(\omega_{d}+\omega_{IF}\). This signal is mixed with a phasor at frequency \(\omega_{IF}+\tilde{\omega}_{d}\), where \(\tilde{\omega}_{d}\) is an estimate of the Doppler frequency for the satellite under investigation, and finally low-pass filtered. With the assumption that the navigation bit is constant during the integration time (which is correct if the correlation is bit-aligned and the coherent integration time \(T_{E}\) is less than 20 ms), and considering that during an integration time interval the value of \(\Delta\omega_{d}\) is constant, the complex _p-th_ sample of the correlation coefficient for the direct signal reads

\[C_{p}\sim\frac{1}{2}A_{d}\,D_{k}\,R_{p}\,e^{-i\Delta\omega_{d_{p}}T_{E}(p-1)}\cdot e^{-i\Delta\omega_{d_{p}}\frac{T_{E}}{2}}\,\frac{\sin\Big{(}\frac{\Delta\omega_{d_{p}}}{2}T_{E}\Big{)}}{\sin\Big{(}\frac{\Delta\omega_{d_{p}}}{2}T_{s}\Big{)}}, \tag{4}\]

where \(T_{s}\) is the sampling interval and \(R_{p}\) the corresponding correlation coefficient function. For the reflected signal we can write an equivalent expression, modulated by the slowly varying phasor \(A_{r}\cdot\exp(2\pi i\mathcal{L}/\lambda)+O(t)\). For coastal applications we can assume there will be little filtering of this quantity by the coherent integration process, as the ocean moves slowly compared to coherent integration times (a few ms). In the case of the direct signal, we can easily track the carrier phase. To this end, the delta-phases obtainable from equation (4) can be accumulated using

\[\phi_{p+1}-\phi_{p}=\mathrm{Im}\left(\log\frac{C_{p+1}}{C_{p}}\right)=-\Delta\omega_{d_{p}}T_{E}. \tag{5}\]

This equation holds while \(\Delta\omega_{d_{p+1}}\approx\Delta\omega_{d_{p}}\). This is a good approximation, since the time during which this variation is measured is the coherent integration time. The main advantage of using this algorithm for phase tracking is that, due to its differencing nature, it allows for easy detection of the navigation bit \(\pi\)-radian phase changes. Figures 2 to 8, which illustrate these concepts, refer to the processing of another set of GPS-R data, collected during the Casablanca oil platform experiment.
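The phase-tracking rule of equation (5) is simple enough to sketch directly. The snippet below is a minimal reconstruction, not the processing code actually used: the correlation series is synthetic, and the threshold used to flag navigation-bit half-cycle jumps is a hypothetical parameter.

```python
# Delta-phase accumulation per Eq. (5), with navigation-bit correction.
import numpy as np

def track_phase(corr, bit_threshold=0.25):
    """corr: complex correlation samples C_p (one per coherent integration).
    Returns the accumulated carrier phase in cycles."""
    # Im(log(C_{p+1}/C_p)) is the wrapped phase increment; convert to cycles.
    dphi = np.angle(corr[1:] / corr[:-1]) / (2.0 * np.pi)
    # Genuine increments cluster near zero (cf. the histogram in Figure 2);
    # values near +-0.5 cycles are small increments plus a navigation bit flip.
    flipped = np.abs(np.abs(dphi) - 0.5) < bit_threshold
    dphi[flipped] -= 0.5 * np.sign(dphi[flipped])
    return np.concatenate(([0.0], np.cumsum(dphi)))

# Synthetic test: slow residual Doppler plus one bit transition halfway through.
p = np.arange(2000)
corr = np.where(p < 1000, 1.0, -1.0) * np.exp(1j * 2 * np.pi * 1e-4 * p)
print(track_phase(corr)[-1])  # smooth ramp (~0.2 cycles), bit flip removed
```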
This Repsol-owned drilling platform is about 40 km off the coast of Tarragona, Spain (\(40^{\circ}43^{\prime}4^{\prime\prime}N\), \(1^{\circ}21^{\prime}34^{\prime\prime}E\)). The measurement campaign took place on March 14th, 2000. In Figure 2, the histogram of the delta-phases is shown. The x-axis represents cycles and the y-axis is in arbitrary units. Most \(\delta\)-phase values are clearly concentrated around zero. Other values gather just before \(\pm\pi\). These are in fact _small_ values to which \(\pm\pi\) radians have been added on occurrence of a navigation bit transition. In Figure 3, the direct signal phase with and without navigation bit correction is plotted. In Figure 4, the phase for the (navigation bit corrected) direct and reflected signals is shown. The effect of the reflection on the sea surface is clearly visible in the large variations present in Figure 4(b) with respect to Figure 4(a). In Figures 5 and 6, the amplitude and the complex vector of the direct and reflected fields, respectively, are shown. In Figure 8, a simulation of the L1 GPS complex field phasor _dustball_ after reflection, akin to the one in Figure 6(b), is shown. The simulation parameters have been chosen to match the Casablanca experiment sea state, which was reported as a "quite calm sea with a gentle breeze", with SWH of about 0.7 m as measured by a nearby buoy.

## 3 The PARFAIT approach

In general, the altimetric information content of the PARIS interferometric field phase will be very difficult to use. This is due to the impact of the incoherent component in the reflected signals, which causes fading and winding. On the one hand, at a practical level, fading events will prevent stable phase tracking of the complex field. Even if, as in the Bridge-2 experiment, the sea surface is relatively smooth and fading events are not so frequent, a single event can severely complicate the use of phase information if countermeasures are not taken. In general, however, the reflected field will fade very often. As discussed in [21], it is possible to inject in the system (during a fading event) a model-based phase to "glue" the phase history, but this approach will in general necessitate the input of too much model information into the data in rough sea conditions. More importantly, as explained in the previous section, the reflected field incoherent component will cause arbitrary winding of the field phasor. This means that the reflected unwrapped phase, unlike the direct one, cannot be directly used for ranging. Indeed, as we have shown in previous work [26], the reflected field accumulated phase will generally wander around the complex plane, travelling to different winding number kingdoms, _even in the absence of fadings_ (see Figure 7). That is to say, even if a very high SNR system is devised to get around the problem of field fadings, the interferometric unwrapped phase will not be directly usable for ranging. Unlike the problem of fadings, this is a fundamental issue, not a practical one. An approach discussed in [26, 19], PIP\({}^{3}\), involves the use of multiple frequencies for the synthesis of a longer wavelength which will be more immune to fadings. Here we discuss another approach, PARFAIT\({}^{4}\), which is in fact complementary to PIP.

Footnote 3: PARIS Interferometric Processor.

Footnote 4: PARFAIT stands for PARis Filtered-field AltImetric Tracking.
In the PARFAIT approach, we begin by noting that although the reflected field unwrapped phase carries no ranging information, this need not be a fundamental problem. What is needed is the coherent _geophysical_ field component in the signals, which is near zero frequency in comparison with the others, a sort of average field. This average field is just the coherent component in the reflected signals after downconversion. With this in mind, PARFAIT consists of the three steps described next. The first practical step to extract the coherent part is to work with the interferometric field, the ratio of the reflected to the direct complex field. This has the advantage of error cancellation, e.g., in Doppler matching of the incoming signals, and of depending only on the lapse. The second step is to "counter-rotate" the interferometric field using an a-priori model of the reflection process. The third step is filtering the resulting counter-rotated interferometric complex field to finally extract the coherent phase for estimation of the carrier lapse phase. Counter-rotation allows for longer filtering times. These are fundamental to extract the coherent component, which decays exponentially with the square of the sea surface standard deviation (sea state) over the effective wavelength (the wavelength divided by the sine of the satellite elevation). Finally, the phase lapse information obtained from the counter-rotated, averaged, complex interferometric field is used for altimetry. This new approach to PARIS altimetry is described in more detail in the following sections. As we discuss, it has proven to be a very robust and precise processing method.

## 4 PARFAIT processing of the Bridge 2 dataset

At low altitudes, simple geometrical considerations lead to the following equation relating the height of the receiver over the reflecting surface (considering the same height for the upward looking antenna and for the downward looking antenna) with the lapse, the delay measured between the direct and reflected GNSS signals:

\[\mathcal{L}_{P}(t)=c\Delta\tau_{P}\left(t\right)=2h(t)\cdot\sin\left(\epsilon_{P}\left(t\right)\right)+b, \tag{6}\]

where \(\mathcal{L}_{P}(t)\) is the lapse in meters at time \(t\), \(c\) is the speed of light, \(\Delta\tau_{P}(t)\) is the temporal lapse in seconds, \(h(t)\) is the height of the bridge, \(\epsilon_{P}(t)\) is the elevation of the GPS satellite with a specific PRN number \(P\), and \(b\) is the hardware-induced delay bias, considered to be a common constant in time. A first estimation of the height of the receiver can easily be performed through a linear fit of the lapse with respect to the sine of the elevation angle of each satellite. In phase processing, the lapse is measured only up to an integer number of cycles \(N\). Equation (6) must then be rewritten as

\[\mathcal{L}_{P}^{c}(t)=2h(t)\cdot\sin\left(\epsilon_{P}\left(t\right)\right)+b+N_{P}\lambda, \tag{7}\]

where \(\mathcal{L}_{P}^{c}\left(t\right)\) is the carrier lapse in meters and \(\lambda\) is the carrier (L1) wavelength. In other words, the equation for each satellite contains an additional unknown parameter, \(N_{P}\). In order to use all the satellites for one height estimation, it becomes necessary to also estimate \(N_{P}\), i.e. to solve the ambiguity problem. To solve the estimation problem, a minimization search is carried out for all these parameters: \(h\) and \(b\) (as real constants) and \(N_{P}\) (as integers).
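Under the stated model, the joint estimation of \(h\), \(b\) and the integers \(N_{P}\) in equation (7) can be sketched as a least-squares fit nested inside a brute-force integer search (the \([-3,3]\) window is the one quoted below). All inputs and function names here are our placeholders, not the actual Starlab processing chain.

```python
# Height/bias fit of Eq. (7) with integer-ambiguity search.
import itertools
import numpy as np

LAMBDA_L1 = 0.1903  # carrier (L1) wavelength, meters

def fit_h_b(sin_e, lapse):
    """Least-squares fit of lapse = 2*h*sin(e) + b. Returns (h, b, residual)."""
    A = np.column_stack((2.0 * sin_e, np.ones_like(sin_e)))
    x, _, _, _ = np.linalg.lstsq(A, lapse, rcond=None)
    return x[0], x[1], float(np.sum((lapse - A @ x) ** 2))

def solve_ambiguities(sin_e_per_sat, lapse_per_sat, window=3):
    """Scan n-tuples of integers; keep the one giving the smallest residual."""
    sin_e = np.concatenate(sin_e_per_sat)
    best = None
    for ns in itertools.product(range(-window, window + 1),
                                repeat=len(lapse_per_sat)):
        # Remove each satellite's candidate integer offset N_P * lambda.
        lapse = np.concatenate([lp - n * LAMBDA_L1
                                for lp, n in zip(lapse_per_sat, ns)])
        h, b, res = fit_h_b(sin_e, lapse)
        if best is None or res < best[3]:
            best = (ns, h, b, res)
    return best  # (n-tuple, height, bias, residual)
```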
However, as discussed, the interferometric field should first be filtered to extract its coherent component. Filtering should be long enough to extract the coherent component but short enough to let the geophysical signals of interest pass through. This means that the geophysical coherent component we are after should not change by more than a small fraction of a cycle during the time duration of the filter. The maximal allowable time thus depends on the elevation angle and rate of change of elevation of the satellite and, just slightly, on the tide motion. In the case of interest, it turns out that the maximum filtering time should be around 10 seconds: beyond this, at least for one satellite, the coherent interferometric phase changes by more than \(\frac{\pi}{2}\) radians. With this filter length it is not possible to separate the coherent and incoherent components of the field, and fading events are not completely eliminated. However, a realistic estimation of the bridge height (and bias) does become possible. As mentioned, to extract the coherent component a longer averaging period should be used. To this end, we first counter-rotate the interferometric field using a first guess of the bridge height, as we now explain in more detail. After downconversion and despreading, we can express the reflected complex field as a sum of the coherent and incoherent components,

\[E(t)=A_{r}\,e^{2\pi i\mathcal{L}(t)/\lambda}+O(t). \tag{8}\]

Now consider that we have a first guess for the height and bias parameters, i.e., a model for the lapse \(\mathcal{L}_{m}\). This model is used to counter-rotate the field:

\[E^{cc}(t)=E(t)/E_{m}(t)=A_{r}\,e^{\frac{2\pi i}{\lambda}\left(2\,\delta h(t)\,\sin(\epsilon_{P}(t))+\delta b\right)}+O^{\prime}(t). \tag{9}\]

Clearly, the phase of the coherent field in equation (9) will now vary much more slowly than the phase of the original reflected field as a function of the elevation (and therefore time). This allows for a longer filtering time, and the extraction of the coherent component of the signal (recall that \(O(t)\) has zero mean). The equation relating the counter-rotated phase lapse between direct and reflected signal, the satellite elevations and \(\delta h\) (i.e. the error between the first guess of the bridge height and the real value) is

\[\mathcal{L}_{P}^{c}(t)=2\,\delta h\left(t\right)\cdot\sin\left(\epsilon_{P}\left(t\right)\right)+\delta N_{P}\lambda+\delta b. \tag{10}\]

This is the new equation used to fit the lapse versus sine of elevation straight line and infer the height offset of the bridge and the bias (with respect to the first guess used to counter-rotate the field). In order to solve the ambiguity problem, a search is performed in the space of the integer n-tuples and the one that produces the linear fit with the smallest residue is selected. It is important to point out that the n-tuple search space is drastically reduced by the prior field counter-rotation. For example, if the guess is within \(\pm\) half a meter, the n-tuple subspace to be scanned can be limited to those n-tuples whose components belong to the interval \([-3,3]\), centered on the first guess of the n-tuple, as obtained from a real (as opposed to integer) ambiguity resolution. Another way to reduce the cardinality of the subspace of n-tuples to check is to consider that satellites with similar elevation angles cannot have very different integer ambiguities.
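The counter-rotation and filtering steps around equation (9) then amount to dividing out a model phasor and averaging. The sketch below is our minimal rendering under simple assumptions (a boxcar moving average stands in for the filter; geometry arrays are placeholders), not the STARLIGHT implementation.

```python
# PARFAIT counter-rotation (Eq. (9)) and coherent-component filtering.
import numpy as np

LAMBDA_L1 = 0.1903  # meters

def counter_rotate(interf_field, sin_e, h_guess, b_guess):
    """Divide out the model phasor exp(2*pi*i*L_m/lambda),
    with the model lapse L_m = 2*h_guess*sin(e) + b_guess (Eq. (6))."""
    lapse_model = 2.0 * h_guess * sin_e + b_guess
    return interf_field * np.exp(-2j * np.pi * lapse_model / LAMBDA_L1)

def filtered_phase_cycles(field_cc, window):
    """Moving-average the counter-rotated field (suppressing the zero-mean
    incoherent part O'(t)), then unwrap the residual phase, in cycles."""
    kernel = np.ones(window) / window
    smooth = np.convolve(field_cc, kernel, mode="valid")
    return np.unwrap(np.angle(smooth)) / (2.0 * np.pi)
```

With field samples at, say, 1 kHz (one per 1 ms integration), the 30 s window used in the next section would correspond to `window = 30000`.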
## 5 Results and comparison

The PARFAIT algorithm described in the previous section has been used to analyze the first 10 minutes of the Bridge-2 data, Part A1, and the first 10 minutes of Part A2. The following steps have been performed in batches of 2 minutes:

* The EM fields, direct and reflected, have been computed through the usual correlation process.
* The complex interferometric field has been counter-rotated (Equation (9)).
* The counter-rotated field has been filtered using a 30 s window.
* The phase of the interferometric, counter-rotated and filtered field has been unwrapped.
* For every possible set of values of the ambiguities \(N_{P}\), a straight line has been fitted to the phase histories (one for each visible satellite) against the elevation angle (Equation (10)). The best fit has been identified.

The analysis has been carried out for almost\({}^{5}\) all visible satellites (see Table 2 and Figure 10). The phase histories are shown in Figure 11(a). A straight line has been fitted through these phase histories, against the sine of the satellite elevation angle (Figure 11(b)).

Footnote 5: Satellites outside the _Zeeland Mask_ [20, 21] are not considered (see also the caption in Figure 10).

This fitted line gives an estimation of the bridge height of 18.61 m, a hardware bias of -0.81 m and, as first guess for the n-tuple that solves the ambiguity problem, the values \([0\;0\;1\;1\;2\;3]\). Now, a search in a subset of \(\mathcal{I}^{6}\) is carried out to minimise the residuals of the fit in the space of the n-tuples of integers. The subspace considered is the one spanned by all combinations of integers within \(\pm 3\) around the first guess. The result is the n-tuple \([0\;0\;2\;2\;4\;5]\), which gives a bridge height estimation of 18.82 m and an instrumental bias of \(-0.45\) m. This procedure has been implemented with data from the first 10 minutes of Parts A1 and A2 of the Bridge-2 experiment. The results are reported in Table 3 and in Figure 12 for Part A1 and in Table 4 and in Figure 13 for Part A2. Fitting both parts to the tide curve, i.e. choosing the bias that minimizes the standard deviation of the data with respect to the available tide "ground truth", leads to an altimetry bias of 40 cm and a standard deviation of less than 1 cm. This bias could have an origin in the ground "truth", due either to an error in the determination of the absolute value of the height of the bridge performed using the available GPS buoy data (only a few seconds, which may have caused ambiguity resolution problems) or, partially, to some anomalies in the water mass flow in the vicinity of the bridge structures. Moreover, considering also that the tide dynamics measured below the bridge may be delayed with respect to the place where the tide was measured, the best fit (over both bias and delay) is obtained with a delay of 1 minute and 37 seconds with respect to the time of the tide data collection and with a bias of 40 cm. The standard deviation of the fitted data with respect to the tide curve is in this case 0.3 cm. To summarize, the proposed approach to PARIS altimetry, the PARFAIT technique, leads to a very precise estimation of the tide,

* without the need to insert any kind of model for the phase of the reflected signal during fadings,
* without rejecting too many visible satellites because of their poor SNR and/or frequent fadings.
Finally, we note that this technique is directly applicable for PARIS phase processing from air and spaceborne applications, as long as a suitable model for the lapse phase can be constructed. This will be the subject of future work.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{PRN} & \multirow{2}{*}{elevation} & mean \(SNR\) & mean \(SNR\) \\ & & (direct) & (reflected) \\ \hline \hline 14 & 17\({}^{o}\) & 29.4 dB & 25.0 dB \\ \hline 25 & 17\({}^{o}\) & 32.0 dB & 25.8 dB \\ \hline 1 & 30\({}^{o}\) & 31.2 dB & 24.6 dB \\ \hline 7 & 38\({}^{o}\) & 33.2 dB & 29.4 dB \\ \hline 11 & 62\({}^{o}\) & 34.0 dB & 29.4 dB \\ \hline 20 & 78\({}^{o}\) & 30.4 dB & 26.6 dB \\ \hline \end{tabular} \end{table}
Table 2: Visible satellites, their elevation in degrees, and the 10 ms coherent integration mean \(SNR_{dB}\) (\(20\log_{10}\)[peak-grass/grass correlation coefficient]) for the direct and the reflected signal.

\begin{table} \begin{tabular}{|c||c||c|c||c|} \hline time (minutes from start) & instrumental bias [m] & bridge height estimation [m] & assumed height [m] & difference [cm] \\ \hline \hline 1 & -0.45 & 18.83 & 18.44 & 39.71 \\ \hline 3 & -0.45 & 18.82 & 18.42 & 39.63 \\ \hline 5 & -0.46 & 18.81 & 18.41 & 40.19 \\ \hline 7 & -0.45 & 18.79 & 18.39 & 40.21 \\ \hline 9 & -0.26 & 18.78 & 18.38 & 39.83 \\ \hline \end{tabular} \end{table}
Table 3: Results of the bridge height estimation during the first 10 minutes of the Part A1 data.

## Acknowledgements

The authors wish to thank Manuel Martin-Neira (Technical Manager of the ESA/ESTEC Contract No. 14285/85/NL/PB under which this work was carried out) and Maria Belmonte (ESA/ESTEC) for useful discussions and real collaboration. We also thank the other partners in the project, especially GMV for the GPS buoy data analysis. _All Starlab authors have contributed significantly; the Starlab author list has been ordered randomly._

## References

* [1] M. Armatys, A. Komjathy, P. Axelrad, and S. Katzberg. A comparison of GPS and scatterometer sensing of ocean wind speed and direction. In _Proc. IEEE IGARSS, Honolulu, HA_, 2000.
* [2] P. Beckmann and A. Spizzichino. _The scattering of electromagnetic waves from rough surfaces_. 1963.
* [3] M. Caparrini. Using reflected GNSS signals to estimate surface features over wide ocean areas. Technical Report EWP 2003, ESA report, December 1998.
* [4] M. Caparrini and G. Ruffini. Casablanca data processing. Starlab "Knowledge Nugget" kn-0111-001, 2001.
* [5] ESA Contract 14285/85/NL/PB.
* [6] ESA Contract RFQ/3-10120/01/NL/SF.
* [7] ESA Contract RFQ/3-10120/01/NL/SF.
* [8] ESA Contract RFQ/3-10120/01/NL/SF.
* [9] E. Cardellach, G. Ruffini, D. Pino, A. Rius, A. Komjathy, and J. Garrison. Mediterranean balloon experiment: GPS reflection for wind speed retrieval from the stratosphere. _To appear in Remote Sensing of Environment_, 2003.
* [10] J.L. Garrison, S. Katzberg, V. Zavorotny, and D. Masters. Comparison of sea surface wind speed estimates from reflected GPS signals with buoy measurements. In _Proc. IEEE IGARSS, Honolulu, HA_, 2000.
* [11] J.L. Garrison, S.J. Katzberg, and C.T. Howell. Detection of ocean reflected GPS signals: theory and experiment. In _IEEE Southeastcon '97_. IEEE, April 1997.
* [12] J.L. Garrison, G. Ruffini, A. Rius, E. Cardellach, D. Masters, M. Armatys, and V.U. Zavorotny.
Preliminary results from the GPSR Mediterranean balloon experiment (GPSR-MEBEX). In _Proceedings of ERIM 2000_, Charleston, South Carolina, USA, May 2000.
* [13] L. Garrison, S. Katzberg, and M. Hill. Effect of sea roughness on bistatically scattered range coded signals from the GPS. _Geophysical Research Letters_, 25:2257-2260, 1998.
* [14] ESA Contract RFQ/3-10120/01/NL/SF.
* [15] O. Germain, G. Ruffini, F. Soulat, M. Caparrini, B. Chapron, and P. Silvestrin. The GNSS-R Eddy Experiment II: L-band and optical speculometry for directional sea-roughness retrieval from low altitude aircraft. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab, July 2003.
* [16] A. Komjathy. GPS surface reflection using aircraft data: analysis and results. In _Proceedings of the GPS surface reflection workshop_. Goddard Space Flight Center, July 1998.
* [17] J. LaBrecque, S.T. Lowe, L.E. Young, E.R. Caro, L.J. Romans, and S.C. Wu. The first spaceborne observation of GPS signals reflected from the ocean surface. In _Proceedings IDS workshop_. JPL, December 1998.
* [18] M. Martin-Neira, M. Caparrini, J. Font-Rossello, S. Lannelongue, and C. Serra. The PARIS concept: An experimental demonstration of sea surface altimetry using GPS reflected signals. _IEEE Transactions on Geoscience and Remote Sensing_, 39:142-150, 2001.
* [19] M. Martin-Neira, P. Colmenarejo, and G. Ruffini. Ocean altimetry interferometric method and device using GNSS signals, April 2003. U.S. Patent No. 6,559,165 B2.
* [20] M. Belmonte Rivas and M. Martin-Neira. GNSS reflections: First altimetry products from the Bridge-2 field campaign. In _Proceedings of NAVITEC, 1st ESA Workshop on Satellite Navigation User Equipment Technology_, pages 465-479. ESA, 2001.
* [21] M. Belmonte Rivas and M. Martin-Neira. GPS coherent reflections from a smooth marine surface. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. ESA, July 2003.
* [22] ESA ESTEC Contract No. 15083/01/NL/MM, 2001.
* [23] ESA ESTEC Contract No. 15083/01/NL/MM, 2001. Available online at [http://starlab.es](http://starlab.es).
* [24] ESA ESTEC Contract 13461/99/NL/GD, 1999. Available online at [http://starlab.es](http://starlab.es).
* [25] G. Ruffini, O. Germain, F. Soulat, M. Taani, and M. Caparrini. GNSS-R: Operational applications. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab, July 2003.
* [26] ESA Contract 14071/99/NL/MM, 2000.
* [27] G. Ruffini, F. Soulat, M. Caparrini, and O. Germain. The GNSS-R Eddy Experiment I: altimetry from low altitude aircraft. In _Proceedings of the 2003 Workshop on Oceanography with GNSS-R_. Starlab, July 2003.
* [28] V. Zavorotny and A. Voronovich. Scattering of GPS signals from the ocean with wind remote sensing application. _IEEE Trans. Geoscience and Remote Sensing_, 38(2):951-964, 2000.
* [29] C. Zuffada, R. Treuhaft, S. Lowe, G. Hajj, M. Lough, L. Young, S. Wu, and M. Smith. Altimetry with reflected GPS signals: results from a lakeside experiment.

\begin{table} \begin{tabular}{|c||c||c|c||c|} \hline time (minutes from start) & instrumental bias [m] & bridge height estimation [m] & assumed height [m] & difference [cm] \\ \hline \hline 1 & -0.27 & 17.54 & 17.11 & 42.34 \\ \hline 3 & -0.28 & 17.52 & 17.08 & 42.36 \\ \hline 5 & -0.26 & 17.47 & 17.04 & 42.52 \\ \hline 7 & -0.08 & 17.44 & 17.02 & 41.95 \\ \hline 9 & -0.08 & 17.41 & 16.98 & 42.71 \\ \hline \end{tabular} \end{table}
Table 4: Results of the bridge height estimation during the first 10 minutes of the Part A2 data.
In _Proceedings IGARSS 2000_, 2000.

Figure 2: The histogram of the carrier phase variation, measured on an integration time interval (in this case 1 ms). The x-axis represents cycles, while the y-axis is in arbitrary units. The accumulation of \(\delta\)-phase values around zero can be seen, as well as in the vicinity of \(\pm\) half a cycle.

Figure 3: The direct signal carrier phase obtained by accumulating the \(\delta\)-phase according to Equation (5). The stepped plot represents the accumulated phase _as it is_, i.e. without compensating for the navigation bit half-cycle variation, which is clearly visible. The lower curve represents the same phase after removal of this effect (navigation bit correction). The units on the x-axis are milliseconds, and on the y-axis they are cycles.

Figure 4: Example of the tracked phase, without the Doppler contribution. The units are milliseconds on the x-axis and cycles on the y-axis. The integration time is 20 ms.

Figure 5: Example of field amplitude time series (phasor dustball, Casablanca Experiment). The units are milliseconds on the x-axis and correlation coefficient units in dB on the y-axis. The integration time is 20 ms.

Figure 6: Example of complex field time series (phasor dustball, Casablanca Experiment). The units are correlation coefficient units on both axes. The coherent integration time is 20 ms.

Figure 7: Histogram of the phase for a noiseless simulated field. It can be observed that the unwrapped phase wanders around multiple winding number kingdoms, while tending to spend more time around an average complex field point. This illustrates the fact that the unwrapped phase cannot be used directly for altimetry, even in the absence of noise.

Figure 8: The simulated EM field at the L1 frequency (phasor dustball), after reflection on the sea surface. The simulation has been performed with the GRADAS software [22] developed by Starlab. This simulation is for a wind speed of \(U_{10}=3\) m/s.

Figure 9: The blue curve is the phase of the interferometric field for PRN number 7, from minute 0 to minute 8 of Part A2 of the Bridge-2 experiment. The occurrence of fadings and of isolated cycle slips can be seen. These phenomena disappear in the phase of the filtered interferometric field (red line).

Figure 10: Each colored arc represents the position of a GPS satellite from the start of Part A1 of the experiment to the beginning of Part A2 plus 10 minutes. The green mask represents the area where the GPS signal reflections are supposed to be free of shadowing phenomena due to the bridge structure; therefore only the satellites within this mask can be taken into consideration for PARIS processing. The bold parts of the lines represent the first and the second 10-minute periods.

Figure 11: Each colored spot represents the reflected-minus-direct phase delay versus satellite elevation for a different satellite (PRN number).

Figure 12: The bridge height estimate during the first 10 minutes of Part A1 of the data.

Figure 13: The bridge height estimation during the first 10 minutes of Part A2 of the data.

Figure 14: The solid line is the distance between the up-looking antenna and the sea surface, according to the available tide measurements and the GPS buoy measurement [20]. The green dots are the estimated values, after removing the vertical bias.
GNSS-R signals contain a coherent and an incoherent component. A new algorithm for coherent phase altimetry over rough ocean surfaces, called PARFAIT, has been developed and implemented in Starlab's STARLIGHT\({}^{1}\) GNSS-R software package. In this paper we report our extraction and analysis of the coherent component of L1 GPS-R signals collected during the ESTEC Bridge-2 experimental campaign using this technique. The altimetric results have been compared with a GPS-buoy calibrated tide model, with a resulting precision of the order of 1 cm.

Footnote 1: STARLab Interferometric GNSS Toolkit.

Passive radar, GNSS, GPS, Galileo, GNSS-R, GPS-R, altimetry, PARIS, PIP, PARFAIT, coastal applications.
# Spin-orbit correlation energy in neutron matter

M. Baldo, C. Maieron. Istituto Nazionale di Fisica Nucleare, Sezione di Catania, Via S. Sofia 64, I-95123 Catania, Italy. November 4, 2021

## I Introduction

The properties of nuclear matter at high density play a crucial role in the modeling of the interior of neutron stars (NS's) [1]. The observed NS masses are in the range of \(\approx(1-2)M_{\odot}\) (where \(M_{\odot}\) is the mass of the sun, \(M_{\odot}=1.99\times 10^{33}\)g), and the radii are of the order of 10 km. The matter inside NS's, below the outer crust, possesses densities ranging from a fraction to a few times the normal nuclear matter density \(\rho_{0}\) (\(\approx 0.17\) fm\({}^{-3}\)). The equation of state (EoS) at such densities is one of the main ingredients to determine the structure parameters of NS's. Due to beta-stability conditions, NS matter is much closer to neutron matter than to symmetric nuclear matter [1]. Since no phenomenological data can be used to constrain the neutron matter EoS, one has to rely on microscopic many-body calculations based on realistic nucleon-nucleon (NN) interactions. Predictions of the neutron matter EoS based on purely phenomenological Skyrme forces can differ dramatically from each other, even at relatively low density (for a recent compilation see _e.g._ [2]). For these reasons, an accurate determination of the neutron matter EoS in the density range typical of NS's, based on many-body theory and realistic NN forces, appears of great relevance. Unfortunately, despite the enormous progress in the many-body theory of nuclear matter in general and neutron matter in particular, discrepancies among different calculations still persist. Among the variety of methods that have been developed in the many-body theory of nucleon systems, one can mention the variational method [3], in its various degrees of sophistication (including Monte-Carlo procedures), the Green's function Monte-Carlo method (GFMC) [4], which represents a numerical algorithm converging in principle to the "exact" solution, and the Bethe-Brueckner-Goldstone (BBG) expansion [5]. It has been pointed out [6] that the spin-orbit interaction and correlations are particularly relevant in neutron and nuclear matter and require accurate many-body and numerical treatments. Indeed, a large fraction of the observed discrepancies seems to reside in the proper treatment of the spin-orbit interaction terms. In this paper we focus on the many-body effects due to the spin-orbit terms of the NN interaction within the BBG expansion. To this purpose, and for the sake of comparison with the results obtained with other methods, we perform calculations for neutron matter with the \(v_{8}^{\prime}\) and \(v_{6}^{\prime}\) realistic two-body NN interactions, which respectively include and do not include the spin-orbit terms. These interactions are simplified versions of the \(Av_{18}\) potential, but they can still be considered realistic enough to provide meaningful results. In particular, they both contain tensor interaction operators. This paper is organized as follows. In Sec. II we briefly introduce the BBG expansion method and discuss the corresponding neutron matter EoS in the relevant density range. In Sec. III we make a detailed comparison with the results of other many-body methods, in particular the GFMC and the Auxiliary Field Diffusion Monte-Carlo method (AFDMC). Sec. IV is devoted to the conclusions.
## II Neutron matter EoS and the BBG expansion

The main difficulty in the many-body theory of nuclear matter is the treatment of the strong repulsive core, which dominates the short range behavior of the NN interaction. Simple perturbation theory cannot of course be applied, and one way of overcoming this difficulty is to introduce the two-body scattering G-matrix, which has a much smoother behavior even for a large repulsive core. It is possible to rearrange the perturbation expansion in terms of the reaction G-matrix, in place of the original bare NN interaction, and this procedure is systematically exploited in the BBG expansion [5]. The expansion of the ground state energy at a given density, i.e. the EoS at zero temperature, can be ordered according to the number of independent hole-lines appearing in the diagrams representing the different terms of the expansion. This grouping of diagrams generates the so-called hole-line expansion [7]. The diagrams with a given number \(n\) of hole-lines are expected to describe the main contribution to the \(n\)-particle correlations in the system. At the two hole-line level one gets the Brueckner-Hartree-Fock (BHF) approximation. The BHF approximation includes the self-consistent procedure of determining the single particle auxiliary potential, which is an essential ingredient of the method. Once the auxiliary self-consistent potential is introduced, the expansion is implemented by introducing the set of diagrams which include "potential insertions" [5]. To be specific, the introduction of the auxiliary potential can be formally performed by splitting the hamiltonian in a modified way with respect to the usual one,

\[H=T+V=T+U+(V-U)\equiv H^{\prime}_{0}+V^{\prime}\;, \tag{1}\]

where \(T\) is the kinetic energy and \(V\) the nucleon-nucleon interaction. Then one considers \(V^{\prime}=V-U\) as the new interaction potential and \(H^{\prime}_{0}\) as the new single particle hamiltonian. The modified single particle energy \(e(k)\) is given by

\[e(k)=\frac{\hbar^{2}k^{2}}{2m}+U(k) \tag{2}\]

while \(U\) must be chosen in such a way that the new interaction \(V^{\prime}\) is, in some sense, "reduced" with respect to the original one \(V\), so that the expansion in \(V^{\prime}\) should converge faster. The introduction of the auxiliary potential turns out to be essential, since otherwise the hole-line expansion would be badly divergent. The total energy \(E\) can then be written as

\[E=\sum_{k}e(k)+B \tag{3}\]

where \(B\) is the interaction energy due to \(V^{\prime}\). The first potential insertion diagram cancels out the potential part of the single particle energy of Eq. (2) in the expression for the total energy \(E\). This is actually true for any definition of the auxiliary potential \(U\). At the two hole-line level of approximation, one therefore gets

\[E=\sum_{k<k_{F}}\frac{\hbar^{2}k^{2}}{2m}+\tilde{B} \tag{4}\]

The result that only the unperturbed kinetic energy appears in the expression for \(E\) and that all correlations are included in the potential energy part \(\tilde{B}\) holds true to all orders and is a peculiarity of the BBG expansion. Of course, the modification of the momentum distribution, and therefore of the kinetic energy, is included in the interaction energy part, but it is treated on the same footing as the other correlation effects.
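For orientation, the unperturbed kinetic term in Eq. (4) can be evaluated in closed form for a free neutron gas. The sketch below is our illustration (standard constants, not part of the BBG machinery); it shows how large the positive kinetic contribution is at the densities considered, most of which the negative interaction energy \(\tilde{B}\) must compensate.

```python
# Free Fermi gas kinetic energy per neutron, the first term of Eq. (4).
import numpy as np

HBARC = 197.327  # MeV fm
M_N = 939.565    # neutron rest mass, MeV

def kinetic_energy_per_neutron(rho):
    """(3/5) * hbar^2 k_F^2 / (2m) in MeV, for density rho in fm^-3
    (pure neutron matter, spin degeneracy 2: k_F = (3 pi^2 rho)^(1/3))."""
    k_f = (3.0 * np.pi ** 2 * rho) ** (1.0 / 3.0)
    return 0.6 * (HBARC ** 2 / (2.0 * M_N)) * k_f ** 2

for rho in (0.08, 0.16, 0.24, 0.40):
    print(f"rho = {rho:4.2f} fm^-3 -> T/A = {kinetic_energy_per_neutron(rho):5.1f} MeV")
# ~22, 35, 46 and 65 MeV: several times larger than the total energies
# per particle reported in Tables 1 and 2 below.
```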
This presents a noticeable advantage: the modification of the kinetic energy itself is quite large and, of course, positive, and it would therefore have to be compensated by an extremely accurate calculation of the (negative) correlation energy if the two quantities were calculated independently. Up to three hole-lines, the diagrams for \(\tilde{B}\) can be schematically represented as in Fig. 1. Diagrams (a) and (b) in the first line represent the usual BHF approximation, while the remaining lines include the three hole-line diagrams. The box in the third line represents the three-body scattering matrix \(T^{(3)}\), which can be introduced following a procedure similar to what is done for the Brueckner G-matrix and satisfies the Bethe-Fadeev integral equations [5; 8]. Diagram (f) generates, to lowest order in the G-matrix, diagrams (c) and (e), which are usually calculated separately, since they require an accurate numerical procedure (and they cancel each other to a large extent). Diagram (d) is a potential insertion diagram, the only one at the three hole-line level, and it is non-zero only if the single particle potential is non-zero at momenta larger than the Fermi momentum \(k_{F}\) (_e.g._ in the so-called "continuous choice" [5; 9]).

Figure 1: Schematic representation of two hole- and three hole-line diagrams. Both direct and exchange diagrams are included. The wavy line indicates a Brueckner G-matrix, the dotted line a U-insertion. For other details, see text.

In a previous paper [10] we have shown that the BBG expansion for neutron matter displays a relatively rapid rate of convergence, and that calculating the total energy up to the three hole-line diagrams is enough to get an accurate EoS, even for densities a few times larger than the saturation density. These calculations were done for the Argonne \(v_{14}\) and \(v_{18}\) potentials, which contain a large set of interaction operators, including the spin-orbit ones. In order to simplify the analysis and the comparison with other methods, we have performed the same type of calculations for the \(v^{\prime}_{8}\) NN potential [11], which is a simplified version of the \(v_{18}\) interaction, but still realistic enough to keep the calculations meaningful. To better focus on the effects of the spin-orbit interaction, we have also considered the \(v^{\prime}_{6}\) potential, which is obtained from \(v^{\prime}_{8}\) by dropping the spin-orbit terms. The difference of the EoS obtained with these two interactions can, therefore, be taken as an estimate of the contribution and relevance of the spin-orbit NN interaction in neutron matter. The EoS of neutron matter obtained within the BBG expansion for the two considered NN interactions is shown in Fig. 2. All calculations have been performed in the continuous choice for the single particle potential \(U(k)\). As shown in Ref. [10], the results are independent, to a high degree of accuracy, of the choice of \(U(k)\). At the BHF level the difference between the EoS for the \(v^{\prime}_{8}\) NN interaction (dash-dotted line) and the EoS for the \(v^{\prime}_{6}\) NN interaction (dotted line) looks sizable, even at relatively low density. This discrepancy increases with density, reaching about 14 MeV at \(\rho=0.4\) fm\({}^{-3}\). When three-body correlations are included (dashed and full lines), the gap between the two EoS is reduced, the strength of this reduction being about 40% at the highest considered density.
Notice that the contribution of three hole-line diagrams is positive for \(v^{\prime}_{8}\) and negative for \(v^{\prime}_{6}\); this difference therefore turns out to be quite sensitive to three-body correlations (as defined within the BBG expansion). It is important to stress that the contribution of three-body correlations is only a few percent with respect to the two-body one (BHF), even at the highest density. Indeed, in the considered density region the interaction energy \(\tilde{B}\) of Eq. (4) is negative and large and compensates a large fraction of the positive kinetic energy contribution. This substantial cancellation between kinetic energy and interaction energy has the effect of amplifying the (small) differences of the correlation energy obtained in the different EoS calculations for a given NN interaction, as discussed in the next Section.

## III Results and discussion

BBG results for the EoS of neutron matter using the \(v^{\prime}_{8}\) and \(v^{\prime}_{6}\) interactions are compared in Fig. 2 and, respectively, in Tables 1 and 2 with other calculations based on different many-body methods. Green's function Monte-Carlo results were obtained in Ref. [12] within an unconstrained path (UC) approach, by considering 14 neutrons inside a periodic box and setting the interaction potential discontinuously to zero at distances larger than one-half the box size. The GFMC should give, in principle, the exact ground state energy of the system. For comparison to infinite neutron matter a "box correction" must then be applied. For the \(v_{8}^{\prime}\) interaction the latter was estimated in Ref. [12] using variational chain summation (VCS) techniques and turned out to be mostly due to the truncation of the potential. The resulting box corrected UC-GFMC EoS is listed in the fourth column of Table 1 and indicated in Fig. 2 by full circles. Up to the highest density considered in [12], the agreement with the BBG results looks remarkable, given the uncertainty contained in the box correction procedure. The authors of Ref. [12] also calculated the neutron matter EoS within the variational chain summation approach, both for 14 neutrons in a periodic box and directly for a uniform gas of neutrons. Their results for the infinite system with the \(v_{8}^{\prime}\) NN interaction are listed in the last column of Table 1 and plotted as stars in Fig. 2. They show fairly good agreement with the other methods, except, maybe, for the last two points at higher density, which seem to display a slightly different slope with respect to UC-GFMC and BBG [14]. Switching the spin-orbit interaction terms off and considering the \(v^{\prime}_{6}\) NN potential, we can again compare the BBG results with UC-GFMC and variational calculations, which are also available [12].

Figure 2: EoS of neutron matter. The dotted and dash-dotted lines correspond to BBG calculations at the BHF level for the \(v^{\prime}_{6}\) and \(v^{\prime}_{8}\) NN interactions, respectively, while the short dashed (\(v^{\prime}_{6}\)) and solid (\(v^{\prime}_{8}\)) lines include three hole-line contributions. Empty (\(v^{\prime}_{6}\)) and full (\(v^{\prime}_{8}\)) circles are the unconstrained GFMC results of Ref. [12], and crosses (\(v^{\prime}_{6}\)) and stars (\(v^{\prime}_{8}\)) are the VCS results of Ref. [12]. Finally, empty (\(v^{\prime}_{6}\)) and full (\(v^{\prime}_{8}\)) squares represent the AFDMC results of Ref. [6]. See text for a discussion of finite size corrections applied to the variational and Monte Carlo results.
Unfortunately, in this case the box correction to the GFMC has not been provided; however, it looks reasonable to use the same corrections as in the \(v_{8}^{\prime}\) case. Once these corrections are applied to the UC-GFMC of Ref. [12] for the \(v_{6}^{\prime}\) NN interaction, we obtain the neutron matter EoS listed in Table 2 and indicated by the open circles in Fig. 2. Again, fairly good agreement with the BBG calculations is observed up to the highest density, \(\rho=0.24\) fm\({}^{-3}\), considered in [12]. The overall trend seems to indicate that this agreement continues also at higher \(\rho\) values. It is interesting to notice that the inclusion of three-body correlations in the BBG expansion plays a relevant role in improving the agreement with the UC-GFMC results, which is, in any case, already satisfactory at the BHF level. This happens both for the \(v_{8}^{\prime}\) potential, for which, as already mentioned, the contribution of three hole-line diagrams is positive, and for the \(v_{6}^{\prime}\) case, where, instead, the correction is negative. The variational VCS results, plotted as crosses in Fig. 2, have also been obtained from Ref. [12], again correcting the periodic box \(v_{6}^{\prime}\) results with the box correction values given for the \(v_{8}^{\prime}\) interaction. Also in this case the agreement can be considered satisfactory, and again the trend with density seems to be slightly different with respect to UC-GFMC and BBG. Another Monte-Carlo scheme has been recently developed [6] for neutron matter, the Auxiliary Field Diffusion Monte-Carlo (AFDMC) method. The recent results of Ref. [6] for \(v_{6}^{\prime}\) are reported in Fig. 2 as open squares. These calculations were performed for 14 neutrons in a cubic box, but using a continuous potential instead of the truncation of Ref. [12], and they should therefore automatically incorporate the largest part of finite size effects. As shown in Table 2, these results are close to the calculated UC-GFMC EoS, as well as to the BBG EoS, in the whole considered density range. Unfortunately, the same method applied to neutron matter with the \(v_{8}^{\prime}\) NN interaction gives an EoS (full squares in Fig. 2 and third column in Table 1) which differs from all other calculations. These findings indicate the difficulty of an accurate calculation of the spin-orbit contribution to neutron matter binding. The AFDMC method has been improved in Ref. [13] to properly deal with the spin-orbit interaction and correlation, by a suitable modification (backflow) of the trial wave function. With this modification the splitting between the two EoS, for the \(v_{6}^{\prime}\) and \(v_{8}^{\prime}\) NN potentials, increases, but it is still too small with respect to the UC-GFMC and BBG results.

## IV Conclusions

We have presented calculations of the neutron matter EoS for two different NN interactions, the Argonne \(v_{6}^{\prime}\) and \(v_{8}^{\prime}\). The comparison of the results obtained with the two interactions is expected to give an estimate of the correlation coming from the spin-orbit interaction terms. We have found close agreement between the EoS calculated within the BBG expansion, extended up to three hole-line contributions, and the UC-GFMC calculations of Ref. [12], for density up to \(0.24\) fm\({}^{-3}\). The discrepancy between the correlation energy in the two schemes does not exceed 2%.
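The size of the spin-orbit correlation energy can be read directly off the BBG columns of Tables 1 and 2 below. The snippet simply tabulates the shift \(E_{v_{6}^{\prime}}-E_{v_{8}^{\prime}}\), with the values copied from the tables:

```python
# Spin-orbit correlation energy estimated as the v6' - v8' shift of the
# BBG (three hole-line) equations of state; values from Tables 1 and 2.
rho    = [0.04, 0.08, 0.12, 0.16, 0.20, 0.24, 0.32, 0.40]
bbg_v8 = [6.469, 8.250, 10.031, 11.826, 13.705, 15.846, 21.953, 29.044]
bbg_v6 = [6.389, 9.668, 12.292, 15.092, 18.011, 21.262, 28.743, 37.552]

for r, e8, e6 in zip(rho, bbg_v8, bbg_v6):
    print(f"rho = {r:4.2f} fm^-3: E(v6') - E(v8') = {e6 - e8:5.2f} MeV")
# Around rho = 0.20-0.24 fm^-3 the shift is ~4-5 MeV per neutron, the
# figure quoted in the Conclusions, and it keeps growing with density.
```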
Such an agreement suggests that the many-body problem for neutron matter with two-body NN interactions is well under control, at least for the considered density range. The splitting between the EoS calculated with the \(v_{6}^{\prime}\) and \(v_{8}^{\prime}\) potentials indicates that the spin-orbit correlation energy in neutron matter can be as large as 4-5 MeV/A and increases with density. The AFDMC method seems to have some problems when dealing with the spin-orbit correlation. In all considered calculations only two-body NN forces have been considered. It is well known that three-body forces are needed in nuclear matter. It appears then relevant to perform a similar study including three-body forces. The latter are not so well known, and the extrapolation from finite nuclei, where three-body forces are fitted, to nuclear matter seems not so obvious [6]. In any case, the comparison of the results obtained with different schemes, when three-body forces are included, could be a stringent test for the accuracy of the calculations, in particular for the spin-orbit contribution to the correlation energy. This is left to future work, but the agreement obtained up to now between BBG and GFMC appears satisfactory and promising.

\begin{table} \begin{tabular}{c c c c c} \(\rho\) (fm\({}^{-3}\)) & BBG & AFDMC & UC-GFMC & VCS \\ \hline 0.04 & 6.469 & – & 6.0 & 6.7 \\ 0.08 & 8.250 & – & 8.4 & 9.2 \\ 0.12 & 10.031 & 12.32 & – & – \\ 0.16 & 11.826 & 14.98 & 12.1 & 12.1 \\ 0.20 & 13.705 & 17.65 & – & – \\ 0.24 & 15.846 & – & 16.9 & 14.8 \\ 0.32 & 21.953 & 27.3 & – & – \\ 0.40 & 29.044 & 35.3 & – & – \\ \end{tabular} \end{table}
Table 1: Neutron matter energy per particle (in MeV), for the Argonne \(v_{8}^{\prime}\) NN interaction. BBG energies are calculated up to the three hole-line level. AFDMC results are taken from Ref. [6] and are calculated for 14 neutrons in a periodic box. UC-GFMC and VCS results are taken from Ref. [12]. VCS results are calculated for a uniform gas, while UC-GFMC results include box correction terms.

\begin{table} \begin{tabular}{c c c c c c} \(\rho\) (fm\({}^{-3}\)) & BBG & AFDMC & UC-GFMC & VCS & box corr. \\ \hline 0.04 & 6.389 & – & 6.45 & 7.3 & (-0.3) \\ 0.08 & 9.668 & – & 9.54 & 10.8 & (-1.1) \\ 0.12 & 12.292 & 12.41 & – & – & – \\ 0.16 & 15.092 & 15.12 & 14.81 & 16.1 & (-5.1) \\ 0.20 & 18.011 & 17.86 & – & – & – \\ 0.24 & 21.262 & – & 20.65 & 22.1 & (-11.5) \\ 0.32 & 28.743 & 27.84 & – & – & – \\ 0.40 & 37.552 & 36.0 & – & – & – \\ \end{tabular} \end{table}
Table 2: Same as Table 1, but for the Argonne \(v_{6}^{\prime}\) NN interaction. UC-GFMC and VCS energies are obtained from the periodic box results of Ref. [12], by applying the same box correction (reported in the last column) as for the \(v_{8}^{\prime}\) case.

## References

* (1) S. L. Shapiro and S. A. Teukolsky, _Black Holes, White Dwarfs, and Neutron Stars_ (John Wiley & Sons, New York, 1983).
* (2) B. Alex Brown, Phys. Rev. Lett. **85**, 5296 (2000).
* (3) A. Akmal, V.R. Pandharipande and D.G. Ravenhall, Phys. Rev. **C 58**, 1804 (1998).
* (4) S.C. Pieper and R.B. Wiringa, Ann. Rev. Nucl. Part. Sci. **51**, 53 (2001).
* (5) See for example _Nuclear Methods and the Nuclear Equation of State_, edited by M. Baldo, World Scientific, Singapore, 1999.
* (6) A. Sarsa, S. Fantoni, K.E. Schmidt and F. Pederiva, Phys. Rev. C **68**, 024308 (2003).
* (7) B.D. Day, Rev. Mod. Phys. **39**, 719 (1967).
* (8) R. Rajaraman and H. Bethe, Rev. Mod. Phys.
**39**, 745 (1967).
* (9) J.P. Jeukenne, A. Lejeune and C. Mahaux, Phys. Rep. **25 C**, 83 (1976).
* (10) M. Baldo, G. Giansiracusa, U. Lombardo and H.Q. Song, Phys. Lett. **473B**, 1 (2000).
* (11) B.S. Pudliner, V.R. Pandharipande, J. Carlson, S.C. Pieper, and R.B. Wiringa, Phys. Rev. C **56**, 1720 (1997).
* (12) J. Carlson, J. Morales Jr., V.R. Pandharipande, D.G. Ravenhall, Phys. Rev. C **68**, 025802 (2003).
* (13) L. Brualla, S. Fantoni, A. Sarsa, K.E. Schmidt and S.A. Vitiello, Phys. Rev. C **67**, 065806 (2003).
* (14) The different density dependence of the UC-GFMC and VCS results when the spin-orbit interaction is included in the NN potential was observed, already in the periodic box results, in Ref. [12], where the possible origin of this behavior was supposed to be due either to an overestimation of spin-orbit contributions in the VCS method or to the use of too short an imaginary time in the UC-GFMC calculation. The slightly better agreement between UC-GFMC and the trend of the BBG curves shown in Fig. 2 seems to favor the first hypothesis.
We study the relevance of the energy correlation produced by the two-body spin-orbit coupling present in realistic nucleon-nucleon interaction potentials. To this purpose, the neutron matter Equation of State (EoS) is calculated with the realistic two-body Argonne \\(v_{8}^{\\prime}\\) potential. The shift occurring in the EoS when spin-orbit terms are removed is taken as an estimate of the spin-orbit correlation energy. Results obtained within the Bethe-Brueckner-Goldstone expansion, extended up to three hole-line diagrams, are compared with other many-body calculations recently presented in the literature. In particular, excellent agreement is found with the Green's function Monte-Carlo method. This agreement indicates the present theoretical accuracy in the calculation of the neutron matter EoS. pacs: 26.60.+c, 21.65.+f, 24.10.Cn, 97.60.Jd
# Modelling of SARS for Hong Kong

Pengliang Shi [email protected] Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong Michael Small Department of Electronic and Information Engineering, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong November 3, 2021

###### pacs: 89.75.Hc, 87.23.Ge

During 2003 SARS killed 916 and infected 8422 globally[1]. In Hong Kong (HK), one of the most severely affected regions, 1755 individuals were infected and 299 died[2]. SARS is caused by a coronavirus, which is more dangerous and tenacious than the AIDS virus because of its strong ability to survive in moist air and considerable potential to infect through close personal contact[3; 4; 5; 6]. Unlike other well-known epidemic diseases, such as AIDS, SARS spreads quickly. Although significant, its mortality rate is, fortunately, relatively low (approximately 11%)[1]. Although researchers have decoded the genome of the SARS coronavirus and developed prompt diagnostic tests and some medicines, a vaccine is still far from being developed and widely used[7; 8; 9]. The danger of a recurrence of SARS remains. Irrespective of pharmacological research, the epidemiological study of SARS will help to prevent possible spreading of similar future contagions. Generally, current epidemiological models are of two types: first, the well-known Susceptible-Infected-Recovered (SIR) model proposed in 1927[10; 11]; second, models based on the concept of Small-World (SW) networks[12]. Arousing a new wave of epidemiological research, the SW model has made some progress recently[13; 14; 15]. Our work aims to model the SARS data for HK. Practical advice for better control is drawn from both the SIR and SW models. In particular, a generalized method to evaluate control of an epidemic is proposed here based on the SIR model with fixed population. Using this method, measuring the spread and control of various epidemics among different countries becomes simple. Quick action in the early stage is highlighted for both government and individuals to prevent rapid propagation.

## I Susceptible-Infected-Recovered Model

In the SIR model, the fixed population \(N\) is divided into three distinct groups: Susceptible \(S\), Infected \(I\) and Removed \(R\). Those at risk of the disease are susceptible, those that have it are infected and those that are either quarantined, dead, or have acquired immunity are removed. The following flow chart shows the basic progression of the SIR model [11]:

\[S\;\xrightarrow{\;rSI\;}\;I\;\xrightarrow{\;aI\;}\;R \tag{1}\]

Here \(r\) and \(a\) are the infection coefficient and removal rate, respectively. A discrete model was adapted by Gani from the original SIR model through a coarse-graining process and was applied to successfully predict the outbreak of influenza epidemics in England and Wales[16; 17]:

\[\begin{array}{rcl}S_{i+1}&=&S_{i}\left(1-rI_{i}\right)\\ I_{i+1}&=&\left[rS_{i}+\left(1-a\right)\right]I_{i}\\ R_{i+1}&=&R_{i}+aI_{i}\end{array} \tag{2}\]

During the epidemic process, \(N=S_{i}+I_{i}+R_{i}\) is fixed. Initially, we examine the data for the first 15 days to estimate the parameters \(r\) and \(a\) of SARS for HK. The only data are the new cases (removals \(R\)) announced every day by the HK Dept. of Health from March 12, 2003, followed by a revised version later[2]. To avoid inadvertently using future information we do not use the revised data at this stage.
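Recursion (2) is straightforward to put in code. The sketch below is our direct transcription, not the authors' code: it keeps \(S_{i}\) explicit even though \(S_{i}\approx N\) for HK, uses the initial condition \(I_{1}=1\), \(R_{1}=0\) adopted in the next paragraph, and reports the daily new cases \(aI_{i}\), the quantity compared with the announced counts. The parameter values are the first-stage best fit quoted below.

```python
# Discrete SIR recursion of Eq. (2); returns the daily new (removed) cases.
import numpy as np

def sir_discrete(r, a, N, I0=1.0, R0=0.0, days=15):
    S, I = N - I0 - R0, I0
    new_cases = []
    for _ in range(days):
        new_cases.append(a * I)  # removals a*I_i produced on day i
        # Tuple assignment evaluates both right-hand sides with the old values.
        S, I = S * (1.0 - r * I), (r * S + 1.0 - a) * I
    return np.array(new_cases)

# First-stage best fit found below: r = 2.05e-7, a = 1.444 (a > 1 is allowed
# in this coarse-grained daily model), with N = 7.3e6.
print(sir_discrete(2.05e-7, 1.444, 7.3e6).round(2))
```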
Population \(N=7.3\times 10^{6}\); since \(I+R\ll N\), it is reasonable to set \(S_{i}=N\) on the right-hand side of (2). \(I_{1}=1\) and \(R_{1}=0\) is set as the initial condition. Since \(I_{i}\) is uncertain, it is replaced with \(R_{i-1}\). This assumption implies the incubation period is only one day, in spite of the fact that the true incubation period of the coronavirus is 2-7 days[8]. The parameters \(r\) and \(a\) are scanned for the best fit for the stage. For every \((r,a)\), a sequence of \(R^{\prime}_{i+1}\) is obtained by numerical simulations. A Euclidean norm, \(\sum_{i=1}^{15}\left(R_{i+1}-R^{\prime}_{i+1}\right)^{2}\), which indicates the distance between the true and simulated data, is applied to measure the fit. In Fig. 1, the \((r,a)\) at the lowest point gives the best-fit parameters for this stage, and this value is used for the following prediction. We get \(r=2.05\times 10^{-7}\) and \(a=1.444\). The same method is applied to get the parameters \(a\) and \(r\) in Figs. 2 and 3. A prediction of the trend is available based on the parameters \(r\) and \(a\) of this stage; the middle day (March 20, 2003) of the stage is taken as the first day and \(\overline{R}=\frac{1}{15}\sum_{i=1}^{15}R_{i+1}\) is assumed as \(\overline{I}\). A curve of squares is plotted in Fig. 2 for the first stage prediction. In the same way the best fit and prediction are obtained for the next two stages and plotted in Fig. 2. The best fit is also processed weekly for the detailed discussion later. The prediction curves clearly show the 3 stages in Fig. 2. The first stage (March 12 - 27, 2003) exhibits dangerous exponential growth. It shows that the extremely infectious SARS coronavirus spread quickly among a public with few protections during this early stage. More seriously, it leads to a higher infection peak, although in this stage the average number of new cases \(\overline{R}\) is below 30. This prediction is confirmed in the second stage (March 28 - April 11, 2003). The peak, characterized by the Amoy Gardens outbreak, comes earlier and higher. It indicates the appearance of a new transmission mode that differs from the intimate-contact route observed in the first stage. We name the new transmission mode _explosive_ growth, in contrast to the transmission in the first stage, which we refer to as _burning_ growth. Outbreaks at the Prince of Wales Hospital (PWH) (where SARS patients received treatment) and Amoy Gardens (a high-density housing estate in Hong Kong) represent infections of the _burning_ and _explosive_ modes in the full epidemic, respectively. The revised data provided by the HK Dept. of Health in Fig. 2 present a clearer indication of these two transmission modes. The outbreak at PWH sent the clear message that intimate contact with SARS patients led to infection[18]. Detailed investigations of the propagation at Amoy Gardens suggested that faulty sewage pipes allowed droplets containing the coronavirus to enter neighboring units vertically in the building[19]. Furthermore, poor ventilation of lifts and rat infestation were also suggested as possible modes of contamination [20; 21; 22; 23]. However, control measures brought the spread of the disease under control in the second stage. Contrary to the increasing trend in the first stage, the prediction curve of the 2nd stage (circles) declines, anticipating that new cases would drop below 10 before the 60th day (May 10, 2003). Also, on April 12, 2003 we predicted that the total number of infections would reach 1700. Up to April 11, 2003 there were 32 deaths and 169 recoveries[2]. We calculated the mortality of SARS as the ratio of deaths to the sum of deaths and recoveries, and it was 15.9%. Therefore we predicted approximately 270 fatalities in total. In the third stage (April 12 - 27, 2003), the triangle curve in Fig. 2 refines these results and gives a more accurate prediction. This stage predicts that new cases per day drop to 5 before the 62nd day, May 12, 2003. The travel warning for HK was cancelled by WHO because HK had kept new cases below 5 for 10 days since May 15, 2003. And, finally, we predicted that the total cases would reach 1730 with nearly 287 deaths (up to April 27, 2003 there were 668 recoveries and 133 deaths; the mortality had increased to 16.6%). These numbers are very close to the true data[2].

Figure 1: The quantity \(\sum_{i=1}^{15}\left(R_{i+1}-R^{\prime}_{i+1}\right)^{2}\) vs. \(r\) and \(a\) is plotted as dots for the first 15 days of SARS data for HK. The natural logarithm is applied. \(R\) and \(R^{\prime}\) are the original and simulated data, respectively. The thick red curve clearly shows the bottom of the sharp valley. The lowest point of the valley corresponds to the best fit: \(r=2.05\times 10^{-7}\) and \(a=1.444\).

Figure 2: SARS data announced daily by the Hong Kong Dept. of Health; the epidemic curve is plotted as a column graph. The curves of squares (\(r=2.05\times 10^{-7}\) and \(a=1.444\)), circles (\(r=1.55\times 10^{-7}\) and \(a=1.172\)) and triangles (\(r=1.09\times 10^{-7}\) and \(a=0.868\)) are predictions for the 3 stages described in the text. Each stage has 15 days. The curve of black dots is for the revised data.
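The parameter scan just described reduces to a grid search scored by the squared misfit. The sketch below reuses `sir_discrete` from the earlier snippet; the `observed` series is a placeholder standing in for the 15 daily counts released by the HK Dept. of Health.

```python
# Grid scan of (r, a) minimizing the squared distance to observed new cases.
import numpy as np

def scan_parameters(observed, N, r_grid, a_grid):
    best = (None, None, np.inf)
    for r in r_grid:
        for a in a_grid:
            sim = sir_discrete(r, a, N, days=len(observed))
            err = np.sum((observed - sim) ** 2)  # Euclidean-type misfit
            if err < best[2]:
                best = (r, a, err)
    return best

observed = np.array([1, 1, 2, 2, 3, 4, 5, 6, 8, 10, 13, 16, 20, 25, 30.0])  # placeholder
r_best, a_best, err = scan_parameters(observed, N=7.3e6,
                                      r_grid=np.linspace(1.0e-7, 3.0e-7, 41),
                                      a_grid=np.linspace(0.8, 1.8, 41))
print(r_best, a_best, err)
```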
The prediction curves clearly show the 3 stages in Fig. 2. The first stage (March 12 - 27, 2003) exhibits dangerous exponential growth. It shows that the extremely infectious SARS coronavirus spread quickly among the public, who took few precautions during this early stage. More seriously, it leads to a higher infection peak, although in this stage the average number of new cases \(\overline{R}\) is below 30. This prediction is confirmed in the second stage (March 28 - April 11, 2003). The peak, characterized by the Amoy Gardens outbreak, comes earlier and higher. It indicates the appearance of a new transmission mode that differs from the intimate-contact route observed in the first stage. We name the new transmission mode \(explosive\) growth, in contrast to transmission in the first stage, which we refer to as \(burning\) growth. Outbreaks at the Prince of Wales Hospital (PWH) (where SARS patients received treatment) and Amoy Gardens (a high-density housing estate in Hong Kong) represent infections of the \(burning\) and \(explosive\) modes in the full epidemic, respectively. The revised data provided by the HK Dept. of Health in Fig. 2 present a clearer indication of these two transmission modes. The outbreak at PWH sent the clear message that intimate contact with SARS patients led to infection[18]. Detailed investigations of the propagation at Amoy Gardens suggested that faulty sewage pipes allowed droplets containing the coronavirus to enter neighboring units vertically in the building[19]. Furthermore, poor ventilation of lifts and rat infestation were also suggested as possible modes of contamination [20; 21; 22; 23]. However, control measures brought the spread of the disease under control in the second stage. Contrary to the increasing trend of the first stage, the prediction curve of the 2nd stage (circles) declines, anticipating that new cases drop below 10 before the 60th day (May 10, 2003).

Also, on April 12, 2003 we predicted that the total number of infections would reach 1700. Up to April 11, 2003 there were 32 deaths and 169 recoveries[2]. We calculated the mortality of SARS as the ratio of deaths to the sum of deaths and recoveries, which was 15.9%. Therefore we predicted approximately 270 fatalities in total. In the third stage (April 12 - 27, 2003), the triangle curve in Fig. 2 refines these results and gives a more accurate prediction. This stage predicts that the number of new cases per day drops to 5 before the 62nd day, May 12, 2003. The travel warning for HK was cancelled by the WHO because HK had kept new cases below 5 for 10 days from May 15, 2003. And, finally, we predicted that the total number of cases would reach 1730 with nearly 287 deaths (up to April 27, 2003 there were 668 recoveries and 133 deaths; the mortality increased to 16.6%). These numbers are very close to the true data[2].

Figure 1: The quantity \(\sum_{i=1}^{15}\left(R_{i+1}-R^{\prime}_{i+1}\right)^{2}\) vs. \(r\) and \(a\) is plotted as dots for the first 15 days of SARS data for HK. The natural logarithm is applied. \(R\) and \(R^{\prime}\) are the original and simulated data, respectively. The thick red curve shows the bottom of the sharp valley clearly. The lowest point of the valley corresponds to the best fit: \(r=2.05\times 10^{-7}\) and \(a=1.444\).

Figure 2: SARS data announced daily by the Hong Kong Dept. of Health; the epidemic curve is plotted as a column graph. The curves of squares (\(r=2.05\times 10^{-7}\) and \(a=1.444\)), circles (\(r=1.55\times 10^{-7}\) and \(a=1.172\)) and triangles (\(r=1.09\times 10^{-7}\) and \(a=0.868\)) are predictions for the 3 stages described in the text. Each stage has 15 days. The curve of black dots is for the revised data.

Thus the method drawn from the SIR model has been verified as predictive over the full epidemic. However, this accuracy is only possible by first dividing the epidemic into separate "stages". The problem of determining to what extent an epidemic is under control is of greater strategic significance. Information on the efficacy of epidemic control will help determine whether or not to apply more control policies, and to balance their costs and benefits. For each individual the same question will also inform the degree to which precautions are taken: i.e. wearing a surgical mask to prevent the spread (or acquisition) of SARS. Obviously a way to estimate the level of control is required. This is actually a difficult problem because of significant statistical fluctuations in the data. A simple method to evaluate control efficiency is discussed below. In the SIR model (2), if \(\Delta I=I_{i+1}-I_{i}\leq 0\) a disease is regarded as being controlled, since the number of new cases will decrease. This inequality leads to a control criterion for some diseases in epidemiological research. Applying the approximation \(S_{i}=N\) we get a threshold \(Nr\leq a\) from (2). We rescale \(rN\to r\) (\(r\) is now called the infection rate in place of the infection coefficient) and then get a threshold that is free of the population \(N\):

\[r\leq a \tag{3}\]

This indicates that the removal rate \(a\) exceeds the infection rate \(r\). In Fig. 3 the circles show the 3-stage evaluations together with the dashed line \(r=a\), and the parameters \(r\) and \(a\) of the SARS data for HK estimated weekly are shown as squares. The line \(r=a\) is regarded as the critical line, since the number of infected cases increases when \((r,a)\) passes through it from below.
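The criterion (3) can then be monitored over time; a sketch, reusing the hypothetical `fit_stage` from above on consecutive windows (the window length and grids are illustrative assumptions):

```python
def weekly_control_status(cases, r_grid, a_grid, N=7.3e6, window=7):
    """Fit (r, a) on consecutive weekly windows of the daily cases and
    apply Eq. (3): after rescaling r*N -> r, the epidemic is regarded
    as controlled once the infection rate drops below the removal rate."""
    status = []
    for start in range(0, len(cases) - window + 1, window):
        r, a, _ = fit_stage(cases[start:start + window], r_grid, a_grid, N)
        status.append((start, r * N, a, r * N <= a))  # (day, r, a, controlled?)
    return status
```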
It is possible to apply the diagram of \(r\) and \(a\) to compare control levels of different countries and areas, even for different diseases. This provides organizations like the WHO with a simple and standard method to supervise the infection level of any disease. The limitation of the method comes from the assumptions of the SIR model; more accurate models may provide better estimates of the epidemic state and its future behaviour. In summary, a discrete SIR model gives good predictions for the SARS epidemic in HK. Two distinct modes are described for the disease propagation dynamics; in particular, the \(explosive\) mode is much more hazardous than the \(burning\) mode. We have introduced a simple method to evaluate control levels. The method is generic and can be widely applied to various epidemiological data.

## II The small-world model

Contrary to the long-established SIR model, epidemiological research using Small-World (SW) network models is a young and growing area. The concept of the SW network was imported from the study of social networks into the natural sciences in 1998[12]. It provides a novel insight into networks and has aroused many explorations of the brain, social networks and the Internet[24; 25; 27; 28]. Some SW networks also exhibit a scale-free (power-law) distribution: a node in such a network has probability \(P(k)\sim k^{-\gamma}\) of connecting to \(k\) nodes, where \(\gamma\) is a basic characteristic of the system[25; 26; 27; 28]. Most research on "virus" spreading with the SW model concerns the propagation of computer viruses on the Internet; however, a few studies have been published relating to epidemiology[27; 28; 29; 30; 31]. The dynamics of SW models enriches our understanding of epidemics and possibly provides better control policies[32]. From the first SARS patient to the last one, an epidemic chain is embedded in the scale-free SW network of social contacts. It is of great importance to discover the underlying structure of the epidemic network, because a successful quarantine of all possible candidates for infection will lead to a rapid termination of the epidemic. In HK, e-SARS, an electronic database capturing on-line and in real time the clinical and administrative details of all SARS patients, provided invaluable quarantine information by tracing contacts[2; 33]. Unfortunately a full epidemic network of SARS for HK is still unavailable. Because data representing the underlying network structure are currently not available, we have no choice other than numerical simulation. Therefore, our analysis of the SW model is largely theoretical. The only confirmation of our model we can offer is that the simulated data appear realistic and exhibit the same features as the true epidemic data.

Figure 3: The best-fit parameters estimated week by week from the SARS data of HK are plotted as squares. The circles are for the 3-stage analysis mentioned in Fig. 2. In this panel the dotted line \(r=a\) is regarded as the "alert line" or "critical line". Parameters \(r\) and \(a\) below it indicate that the epidemic is controlled; above the line they indicate uncontrolled growth. The parameter \(r\) is rescaled by \(rN\to r\).

To simulate an epidemic chain, a simple model of social contacts is proposed. The model is established on a grid network woven from \(m\) horizontal and \(m\) vertical lines. Every node in the network represents a person. We set \(m=2700\), giving a population \(N=m^{2}=7.29\times 10^{6}\). All nodes are initialized with a value of 0 (and called _good_ nodes).
Every node has 2, 3 or 4 nearest neighbors as short-range contacts, for corner, edge and center positions, respectively. In addition, every day each node makes long-range contact with 2 other nodes randomly selected from the whole system. These linkages model the social contacts between individuals (i.e. social contacts that are sufficiently intimate to put individuals at risk of spreading the disease). One random node of the system is set to 1 (and called a _bad_ node). Through its short- and long-range contacts, the nodes linked with it turn into 1 with probabilities \(p_{1}\) and \(p_{2}\), respectively. An infection happens if a node changes its value from 0 to 1; this change is irreversible. During the full simulation process, bad nodes are not removed from the system. We make this assumption because the number of deaths is small in comparison with the population, and there is no absolute quarantine: even a SARS patient in hospital can infect medical workers. Moreover, the treatment period for SARS is relatively lengthy, and during this time infected individuals are highly infectious. To reflect the true variation in control strategy and individual behaviour, the control parameters \(p_{1,2}\) (\(0\leq p_{1,2}\leq 1\)) vary with time.
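A minimal sketch of this contact model (assumptions not stated in the original: synchronous daily updates, fresh random long-range contacts each day, and illustrative default parameters; the function name `simulate` is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(m=100, p1=0.05, p2=0.05, n_days=45):
    """Grid small-world epidemic sketch: 2-4 nearest-neighbour (short
    range) and two random (long range) contacts per node per day;
    infected nodes are never removed.  Returns daily new-case counts."""
    state = np.zeros((m, m), dtype=bool)
    state[rng.integers(m), rng.integers(m)] = True   # one random bad node
    daily_new = []
    for _ in range(n_days):
        infected = np.argwhere(state)
        newly = np.zeros_like(state)
        for x, y in infected:
            # short-range: 2-4 nearest neighbours (corner/edge/centre)
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                u, v = x + dx, y + dy
                if 0 <= u < m and 0 <= v < m and rng.random() < p1:
                    newly[u, v] = True
            # long-range: two random contacts anywhere in the system
            for _ in range(2):
                u, v = rng.integers(m), rng.integers(m)
                if rng.random() < p2:
                    newly[u, v] = True
        daily_new.append(int((newly & ~state).sum()))
        state |= newly                               # irreversible infection
    return daily_new
```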
Within the same model it is interesting to compare with a complete-infection epidemic. A small system with \(100\times 100\) nodes is chosen. A fixed probability \(p_{1}=p_{2}=0.05\) leads to eventual infection of the entire population, since all nodes are linked and the infected are not removed. The epidemic curve for this process is plotted in Fig. 4 a) and is typical of many plagues. In Fig. 4 b) clusters of infected nodes (black dots) of various sizes are scattered over the geographical map; it contains all infections in the first 45 days. Compared to the true SARS infection distribution in HK, only slight similarity is observed.

Figure 4: A plague is simulated on a geographical map with \(100\times 100\) nodes with fixed infection probability \(p_{1,2}=0.005\), which eventually leads to a full infection. a) The full epidemic curve of the simulated epidemic. b) At time \(45\), the infected cases are scattered in clusters across the map. Infection occurs across the full map because of the random connections; the clusters show the effect of the nearest-neighbour connections.

The short- and long-range linkages give good infection dynamics, as we expected. The epidemic chain is easily drawn by recording each infection event from the seed to the last patient. However, during the simulations there is a problem: for a _good_ node linked to more than one _bad_ node, which _bad_ node infects it? The widely accepted _rich-get-richer_ preferential attachment is a good answer[25; 26; 35]; a linear preference function is applied here. In the presence of both growth and preferential attachment, it is natural to ask whether the chain is a scale-invariant SW network. We plot the distribution in Fig. 5 a) and b) in log-log and log-linear coordinates, respectively. The hollow circles are for the system with \(100\times 100\) nodes. Its scaling behaviour looks more like piecewise linear (i.e. bilinear) in b) than like a power law in a). To confirm this we tried a larger system with \(1000\times 1000\) nodes and the same fixed \(p_{1,2}\) for the remaining curves in Fig. 5. The solid-square curve is the distribution of the whole epidemic network. The linear fit for the solid squares gives a correlation coefficient \(R\) of \(-0.956\) in Fig. 5 a). In b) the piecewise linear fits have \(R\) of \(-0.997\) and \(-0.994\). These provide positive support for the original model. The cases of the first \(30\%\) and \(1\%\) of the whole process for this bigger system are plotted as hollow-triangle and solid-circle curves in Fig. 5, respectively. The case of \(1\%\), the early stage of the full-infection epidemic, suggests a better fit to scale-free behaviour than the other cases, since it has a correlation coefficient \(R\) of \(-0.983\) and slope \(\gamma=-2.89\) for the linear fit in a). For the solid-circle curve, piecewise linear fits give \(R\) of \(-0.995\) and \(-0.977\) in Fig. 5 b). So, which scaling behaviour is true during the full infection process? In the absence of a rigorous proof, this question is hard to answer definitively from simulations. We may only draw conclusions based on which simulations most closely match the qualitative features of the observed data. Let us return to the system with \(2700\times 2700\) nodes for modelling SARS in HK. The probability \(p_{1,2}\) is fitted to the true epidemic data. However, it is fruitless to seek an exact coincidence between the simulated results and the true data, as the model evolution is highly random (moreover, this would amount to overfitting). The control parameters \(p_{1,2}\) (dotted and dashed curves) and a simulated epidemic curve (black dots), together with the column diagram of SARS for HK, are plotted in Figs. 6 a) and b), respectively. For the simulated data, the total number of cases is \(1830\), which deviates by \(4.3\%\) from the true value of \(1755\). Contrary to the above full infection with fixed parameters, \(p_{1,2}\) are believed to drop exponentially, leading to an epidemic that infects only a small part of the population even without quarantine or removal. In any case, the high probability of infection in the early stage is indicative of the critical ability of the SARS coronavirus to attack an individual without protection. If SARS returns, the same high initial infection level is likely to occur. The only hope of avoiding a repeat of the SARS crisis of 2003 is to shorten the high-infection stage by quick identification, wide protection and sufficient quarantining. In other words, the best time to eliminate a possible epidemic is the moment the first patient surfaces. Any delay may lead to a worsening crisis. The long-range infections in the model, and in the world, also imply that an efficient mechanism to respond rapidly to any infectious disease is required to establish global control. Data on the geographical distribution of SARS cases in HK are much easier to collect than the full epidemic chain; numerical simulations provide both easily. The full geographical map marked with all infected nodes (black dots) and an amplified window of one cluster are plotted in Figs. 7 a) and b), respectively. Similarity to the cluster patterns of Fig. 4 b) is expected and verified. The scaling behaviour of the epidemic chain is plotted in Fig. 8. The curve in the log-log diagram exhibits a power-law coefficient \(\gamma=-3.55\), with a linear-fit correlation coefficient \(R=-0.989\). The piecewise linear fit for the log-linear case gives correlation coefficients \(R\) of \(-0.987\) and \(-0.959\), respectively.
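The exponents and correlation coefficients quoted above can be extracted from a recorded epidemic chain as sketched below (assumptions: the chain is available as (infector, infectee) pairs and contains several distinct degrees; the function name is hypothetical):

```python
import numpy as np
from collections import Counter

def loglog_slope(edges):
    """Estimate the scaling of the degree distribution P(k) ~ k^(-gamma)
    of an epidemic chain, given as (infector, infectee) pairs, via a
    least-squares fit in log-log coordinates."""
    degree = Counter()
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    hist = Counter(degree.values())        # k -> number of nodes of degree k
    k = np.array(sorted(hist))
    pk = np.array([hist[x] for x in k], dtype=float)
    pk /= pk.sum()
    slope, _ = np.polyfit(np.log(k), np.log(pk), 1)
    r = np.corrcoef(np.log(k), np.log(pk))[0, 1]
    return slope, r                        # slope ~ -gamma, correlation R
```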
In a SW network, a few vertices often play more important roles than the others[35]. SARS super-spreaders found in HK, Singapore and China are consistent with this[19; 36]. Data on the early spreading of SARS in Singapore[36] show a definite SW structure, with a small number of nodes having a large number of links. The average number of links per node also shows a scale-free structure [36], but the available data are extremely limited (the linear scaling can only be estimated from three observations). This characteristic is also verified in our model: the first few nodes have a high chance of infecting a large number of individuals. In Fig. 8, a single node has 40 links. Clearly the index node has many long-range linkages. It has been suggested that travelling in crowded public places (a train, a hospital, even an elevator) without suitable precautions can cause an ordinary SARS patient to infect a significant number of others. Again, this is an indication that increasing an individual's (especially a probable SARS patient's) personal protection is key to rapidly controlling an epidemic. In fact, in our model, if the duration of the early stage with high probability \(p_{1,2}\) is reduced to less than 10 days, the infection scale decreases sharply.

Figure 5: The scaling behaviour of a full-infection epidemic network is plotted in log-log and log-linear coordinates in a) and b), respectively. The circles are for a full-infection epidemic network simulated on a geographical map with \(100\times 100\) nodes; the remaining curves are for one with \(1000\times 1000\) nodes. The curves of dots, triangles and squares are for the epidemic networks of 1%, 30% and 100% of the full infection process. The two dashed lines are piecewise linear fits for the full infection process with \(1000\times 1000\) nodes; the slopes are \(-0.31\) and \(-0.07\) with \(R=-0.997\) and \(-0.994\), respectively. This clearly shows that the curve of the full epidemic has two piecewise-linear parts in the log-linear graph. In the early stage of a full infection, the scaling behaviour of the 1% infected (dots) has a part which might be regarded as log-log linear; its slope is \(-2.89\) with \(R=-0.983\). In other words, it has a scale-free part with \(\gamma=-2.89\).

Figure 6: Control parameters and the epidemic curve are plotted for the model with \(2700\times 2700\) nodes. a) The short- and long-range infection probabilities \(p_{1,2}\) generally decline exponentially. b) The simulated epidemic curve (black dots) in the model is plotted with the original SARS data (grey columns) for HK.

Figure 7: The distribution of infected nodes in the simulations on the two-dimensional map. a) The whole geographical map; b) an amplified window of the cluster marked by the circle in a).

An engrossing phenomenon is the points of inflection in the curves of the log-linear diagrams of Figs. 8 and 5 b). All are located near a linkage number of 6-7. On average a node has about 6 contacts (2-4 short-range plus 2 long-range) every day, although in the whole process there is no limit on the number of linkages. For a growing random network, a general problem always exists concerning the relation between its scaling behaviour, preferential attachment and dynamics, even when embedded in a geographical map[34; 37; 38; 39; 40]. More work is required to address this issue. In conclusion, a SW epidemic network has been simulated to model the spread of SARS in HK, and a comparison of the simulations with full-infection data has been presented.
Our discussion of the infection probability and the occurrence of super-spreaders leads to the obvious conclusion: a rapid response by individuals and government is the key to eliminating an epidemic with limited impact and at minimal cost.

###### Acknowledgements.
This research is supported by Hong Kong University Grants Council CERG number B-Q709.

## References

* (1) World Health Organization, _Consensus Document on the Epidemiology of SARS_, 17 Oct., 2003, [http://www.who.int/csr/sars/en/WHOconsensus.pdf](http://www.who.int/csr/sars/en/WHOconsensus.pdf).
* (2) _Report of the Severe Acute Respiratory Syndrome Expert Committee_, 2 Oct., 2003, HK, [http://www.sars-expertcom.gov.hk/english/reports/reports.html](http://www.sars-expertcom.gov.hk/english/reports/reports.html).
* (3) D. Lawrence, The Lancet **361**, 1712 (2003).
* (4) T. G. Ksiazek, et al., New Engl. J. Med. **348**, 1953 (2003).
* (5) C. Drosten, et al., New Engl. J. Med. **348**, 1967 (2003).
* (6) J. S. M. Peiris, et al., The Lancet **361**, 1319 (2003).
* (7) M. A. Marra, et al., Science **300**, 1399 (2003).
* (8) P. A. Rota, et al., Science **300**, 1394 (2003).
* (9) K. Stadler, et al., Nature Rev. Microbiol. **1**, 209 (2003).
* (10) W. O. Kermack & A. G. McKendrick, Proc. Roy. Soc. Lond. A **115**, 700 (1927).
* (11) D. J. Daley & J. Gani, _Epidemic Modelling: An Introduction_ (Cambridge Univ. Press, Cambridge, 1999).
* (12) D. J. Watts & S. H. Strogatz, Nature **393**, 440 (1998).
* (13) H. Z. Damian & K. Marcelo, Physica A **309**, 445 (2002).
* (14) J. Holl, et al., Nature **423**, 605 (2003).
* (15) C. J. Mode & C. K. Sleeman, Math. Biosc. **156**, 95 (1999).
* (16) J. Gani, J. Roy. Statist. Soc. Ser. A **141**, 323 (1978).
* (17) C. C. Spicer, Brit. Med. Bull. **35**, 23 (1979).
* (18) B. Tomlinson & C. Cockram, The Lancet **361**, 1486 (2003).
* (19) S. Riley, et al., Science **300**, 1961 (2003).
* (20) D. Normile, Science **300**, 714 (2003).
* (21) S. K. C. Ng, The Lancet **362**, 570 (2003).
* (22) C. A. Donnelly, et al., The Lancet **361**, 1761 (2003).
* (23) P. Helen, Nature, 15 April (2003), [http://www.nature.com/nsu/030414/030414-5.html](http://www.nature.com/nsu/030414/030414-5.html).
* (24) S. B. Laughlin & T. J. Sejnowski, Science **301**, 1870 (2003).
* (25) R. Albert & A.-L. Barabasi, Rev. Mod. Phys. **74**, 47 (2002).
* (26) M. Newman, Phys. Rev. E **64**, 025102 (2001).
* (27) H. Ebel, et al., Phys. Rev. E **66**, 035103 (2002).
* (28) F. Liljeros, et al., Nature **411**, 907 (2001).
* (29) S. H. Strogatz, Nature **410**, 268 (2001).
* (30) M. Kuperman & G. Abramson, Phys. Rev. Lett. **86**, 2909 (2001).
* (31) R. Huerta & L. S. Tsimring, Phys. Rev. E **66**, 056115 (2002).
* (32) O. Miramontes & B. Luque, Physica D **168-169**, 379 (2002).
* (33) V. Brower, EMBO Reports **4**, 649 (2003).
* (34) R. Cohen & S. Havlin, Phys. Rev. Lett. **90**, 058701 (2003).
* (35) D. J. Watts, _Small Worlds: The Dynamics of Networks between Order and Randomness_ (Princeton Univ. Press, Princeton, 1999).
* (36) Y. S. Leo, MMWR Weekly **52**, 405 (2003).
* (37) C. P. Warren, et al., Phys. Rev. E **66**, 056105 (2002).
* (38) P. L. Krapivsky, et al., Phys. Rev. Lett. **85**, 4629 (2000).
* (39) S. N. Dorogovtsev & J. F. F. Mendes, Phys. Rev. E **63**, 056125 (2001).
* (40) C. Moore & M. E. J. Newman, Phys. Rev. E **62**, 7059 (2000).

Figure 8: The scaling behaviour of the simulated epidemic network.
In log-log coordinates, the scaling is \(P(k)\sim k^{-\gamma}\) with \(\gamma=-3.55\); the linear fit exhibits a correlation coefficient \(R=-0.989\).
A simplified susceptible-infected-recovered (SIR) epidemic model and a small-world model are applied to analyse the spread and control of Severe Acute Respiratory Syndrome (SARS) in Hong Kong in early 2003. From data available in mid-April 2003, we predicted, based on the SIR model, that SARS would be controlled by June and that nearly 1700 persons would be infected. This is consistent with the known data. A simple way to evaluate the development and efficacy of control is described and shown to provide a useful measure of the future evolution of an epidemic. This may contribute to improving the strategic response of the government. The evaluation process is universal and therefore applicable to many similar homogeneous epidemic diseases within a fixed population. A novel model consisting of map systems involving the Small-World network principle is also described. We find that this model reproduces qualitative features of the random disease propagation observed in the true data. Unlike traditional deterministic models, scale-free phenomena are observed in the epidemic network. The numerical simulations provide theoretical support for current strategies and may help achieve more efficient control of some epidemic diseases, including SARS.
## 1 Introduction

The standard model of elementary particle physics is remarkably successful in describing physics up to a scale of the order of several hundred GeV. Still it faces a number of shortcomings, such as the abundance of parameters and their origin, which become particularly prominent in flavor physics or neutrino physics. In addition, there are two problems that appear to point more strongly to the fact that the standard model cannot be a fundamental theory valid for arbitrarily short distance scales: first, the problem of triviality of the scalar and abelian gauge sector, and second, the gauge hierarchy problem. Let us start with the hierarchy problem. If one assumes that the standard model is valid up to some high scale \(\Lambda_{\rm UV}\) (e.g., a scale of grand unification, \({\rm M}_{\rm GUT}\sim 10^{16}\)GeV, or even the Planck scale), one is immediately confronted with two immensely different scales in the theory: the electroweak symmetry-breaking scale \({\rm M}_{\rm EW}\sim 100\)GeV and \(\Lambda_{\rm UV}\). A realization of this enormous hierarchy in the context of the standard model requires a highly exceptional choice of parameters ("fine-tuning"). This can be seen from the quadratic renormalization of the scalar mass term in the Higgs sector, which naively receives corrections of the order \(\Lambda_{\rm UV}^{2}\gg m_{\rm Higgs}^{2}\) when quantum fluctuations between \(\Lambda_{\rm UV}\) and \({\rm M}_{\rm EW}\) are integrated out. Thus \(m_{\rm Higgs}\) at the UV scale must be extremely fine-tuned in order to cancel most of the quantum corrections and produce a Higgs which is much lighter than \(\Lambda_{\rm UV}\). It should be stressed that the hierarchy problem is not a problem of principle, but rather a problem of point of view. In their low-energy behavior, our systems closely resemble those with top-quark condensation; however, we focus on the UV behavior, which is an inherently nonperturbative problem. At first sight, it seems that fermionic self-interactions even worsen the theoretical objections, since such couplings are not perturbatively renormalizable in \(D=4\) dimensional spacetime. This means that a quantum theory cannot consistently be constructed by an expansion around zero coupling (Gaussian fixed point). Nevertheless, perturbative renormalizability is not a necessary criterion for constructing an interacting field theory. Also perturbatively nonrenormalizable theories can be fundamental and mathematically consistent down to arbitrarily small length scales, as proposed in Weinberg's "asymptotic safety" scenario [7]. This scenario assumes the existence of a non-Gaussian (=nonzero) UV fixed point under the renormalization group (RG) operation at which the continuum limit can be taken. The theory is "nonperturbatively renormalizable" in Wilson's sense. If the non-Gaussian fixed point is IR repulsive only for a finite number of renormalized couplings, the RG trajectories along which the theory can flow for \(M_{\rm EW}/\Lambda_{\rm UV}\to 0\) are labeled by only a finite number of physical parameters. Then the theory is as predictive as any perturbatively renormalizable theory, and high-energy physics can be well separated from low-energy physics without tuning a large number of parameters. Finally, the triviality problem is absent by construction. The issue of the gauge hierarchy problem is related to the relevant couplings and the associated anomalous dimensions.
For definiteness, let us consider one particular small deviation1 \(v\) from the fixed point which depends on the renormalization scale \(k\) according to a generalized "anomalous dimension" \(\Theta\),

Footnote 1: More precisely, \(v\) is an eigenvector of the stability matrix. We take \(v\) to be a dimensionless renormalized parameter. In the case of a mass \(M\), this corresponds to \(M/k\).

\[\partial_{t}v=k\partial_{k}v=-\Theta\,v. \tag{1}\]

The solution \(v\sim k^{-\Theta}\) implies for large positive \(\Theta\) that \(v\) must be tiny at \(\Lambda_{\rm UV}\) if it is of order one for \(k=M_{\rm EW}\). This is the fine-tuning problem. For positive \(\Theta\) (not very close to zero) \(v\) is called a relevant parameter, and we conclude that a fine-tuning problem is connected to every relevant parameter. (We note that for a perturbative expansion the exact location of the fixed point may depend on the order of the approximation. Also \(\Theta\) may be expressed in a perturbative series, and the right-hand side may contain higher powers of \(v\). All this does not affect the conclusion that precisely one small parameter \(v(\Lambda_{\rm UV})\) is needed (one fine-tuning) for every relevant coupling.) On the other hand, if \(\Theta\) is very close to zero or vanishes (in this case \(v\) may depend logarithmically on \(k\)), \(v\) is called a marginal coupling. (An example is the gauge coupling \(g\) near a fixed point with \(g=0\).) No extreme fine-tuning is needed for marginal couplings, since \(v(M_{\rm EW})\) and \(v(\Lambda_{\rm UV})\) are of a similar order of magnitude. In consequence, a model with only marginal couplings has no gauge hierarchy problem. Within the standard model, this type of solution for the gauge hierarchy problem has been proposed in [8]. Known examples of quantum field theories with only marginal couplings (besides the irrelevant ones) are four-dimensional pure non-abelian gauge theories near the Gaussian fixed point or two-dimensional scalar theories with global U(1) symmetry near the fixed point corresponding to the Kosterlitz-Thouless phase transition with essential scaling [9, 10]. In statistical physics, a fixed point with only marginal directions can be associated with "self-organized criticality".
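To make the size of the required tuning explicit, integrating Eq. (1) between the two scales gives, assuming for illustration \(\Theta=2\) and \(\Lambda_{\rm UV}=10^{16}\,{\rm GeV}\),

\[v(\Lambda_{\rm UV})=v(M_{\rm EW})\left(\frac{M_{\rm EW}}{\Lambda_{\rm UV}}\right)^{\Theta}\simeq 1\times\left(\frac{10^{2}\,{\rm GeV}}{10^{16}\,{\rm GeV}}\right)^{2}=10^{-28},\]

i.e., the initial condition must be specified to 28 decimal places.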
In this work, we analyze a class of models with four-fermion self-interactions with gauge and flavor symmetry. In order to quantize these systems in a nonperturbative framework, we employ the exact renormalization group, formulated in terms of a flow equation for the effective average action \(\Gamma_{k}\)[11],

\[\partial_{t}\Gamma_{k}=\frac{1}{2}\,{\rm STr}\,\partial_{t}R_{k}\,(\Gamma_{k}^{(2)}+R_{k})^{-1},\quad t=\ln\frac{k}{\Lambda_{\rm UV}}. \tag{2}\]

The latter is a free-energy functional that interpolates between the bare action \(\Gamma_{k=\Lambda_{\rm UV}}=S\) and the full quantum effective action \(\Gamma_{k=0}\). Here, \(R_{k}\) denotes a, to some extent, arbitrary regulator function that specifies the details of the momentum-shell integrations. With the flow equation (2), it is possible to analyze the space of action functionals and its fixed-point structure, in order to look for quantizable and renormalizable theories. These correspond to zeros of the right-hand side of Eq. (2) with a suitable finite number of relevant parameters, characterizing the small deviations from the fixed point. Equation (2) is a functional differential equation, since the right-hand side contains the second functional derivative \(\Gamma_{k}^{(2)}\) (the full inverse propagator). Approximate solutions can be found by suitable truncations of the space of actions. Within the approximation of point-like four-fermion interactions, we indeed find a variety of non-Gaussian fixed points that give rise to new universality classes of interacting quantum field theories and solve the triviality problem in the Higgs sector. Concerning the hierarchy problem, however, we show that these fermionic theories are at least as IR unstable as a scalar Higgs sector, so that a very precise choice of initial conditions remains necessary (fine-tuning). Upon the inclusion of gauge field dynamics, the picture does not change qualitatively as long as the gauge couplings remain perturbatively small. The influence of the gauge fields on the fermionic sector is subdominant. Moreover, the fermionic self-interactions do not modify the leading-order running of the gauge couplings, as we show with the aid of modified Ward-Takahashi identities. As a consequence, fermionic self-interactions cannot cure the triviality problem of the abelian U(1) sector, e.g., by rendering this gauge coupling asymptotically free. While we have not found a solution to the gauge hierarchy problem so far, we reveal instructive general aspects of the structure and influence of fermionic self-interactions in models with flavor and gauge symmetry. Our findings may be taken as a hint toward possible directions in the search for a satisfactory renormalizable standard model without a fundamental Higgs scalar. As a rather speculative example, we demonstrate in the appendix how the possible existence of a non-Gaussian fixed point in the U(1) gauge coupling could strongly influence and even stabilize the RG behavior of a fermionic Higgs sector towards the infrared.

## 2 Toy Modeling the Standard Model

In the present work, we consider a theory of fermions with self-interactions as well as gauge-field interactions. Let us start with all possible interactions that are compatible with a \(\rm U(1)\times SU(N_{c})\) gauge symmetry and a chiral \(\rm SU(N_{f})_{L}\times SU(N_{f})_{R}\) flavor symmetry for \(\rm N_{f}\) fermion species. The \(\rm SU(N_{c})\) simulates the non-abelian, asymptotically free part of the standard model gauge group, while the \(\rm U(1)\) models the abelian part with its triviality problem. For simplicity, the gauge fields are assumed to couple to left- and right-handed fermions in the same way with the same charges. In comparison to the standard model, we have neglected the electroweak \(\rm SU(2)_{L}\) chiral gauge interactions and the differences between the hypercharges under the \(\rm U(1)\). The reason for concentrating on the strong \(\rm SU(N_{c})\) gauge group instead of the electroweak \(\rm SU(2)_{L}\) is the following: if an IR-stabilized system exists, we expect the strongest gauge interaction to be a good candidate for a destabilizing influence near the Fermi scale. Even though symmetry breaking first affects the electroweak sector, it may be caused by fermionic self-interactions in combination with the strong gauge sector.
An inclusion of the remaining standard-model building blocks is, in principle, straightforward: one should add the weak gauge interactions and consider additional possible four-fermion interactions which are consistent with the reduced flavor symmetries and the gauge symmetries. The number of possible four-fermion interactions increases very substantially, however, owing to the lack of parity symmetry and the reduced flavor symmetry. For \(\rm N_{c}=3\), \(\rm N_{f}=6\), \(\overline{e}=0\) our model corresponds to the standard model in the limit of vanishing weak and hypercharge gauge couplings. We concentrate on a simple truncation with point-like couplings and include all possible four-fermion interactions obeying an \(\rm SU(N_{c})\times U(1)\) gauge symmetry and an \(\rm SU(N_{f})_{L}\times SU(N_{f})_{R}\) flavor symmetry2,

Footnote 2: We note that only the four-fermion interactions are manifestly invariant under local gauge transformations for all possible choices of the couplings. Gauge invariance of the remaining terms is governed by (modified) Ward-Takahashi identities as discussed in Sect. 4.

\[\Gamma_{k}= \int\overline{\psi}({\rm i}Z_{\psi}\partial\!\!\!/+Z_{1}\overline{g}A\!\!\!/+Z_{1}^{\rm B}\overline{e}B\!\!\!/)\psi+\frac{Z_{\rm F}}{4}F_{z}^{\mu\nu}F_{\mu\nu}^{z}+\frac{Z_{\rm B}}{4}B^{\mu\nu}B_{\mu\nu}+\frac{(\partial_{\mu}A^{\mu})^{2}}{2\alpha}+\frac{(\partial_{\mu}B^{\mu})^{2}}{2\alpha_{B}} \tag{3}\]
\[+\frac{1}{2}\Big{[}Z_{-}\overline{\lambda}_{-}(\mbox{V--A})+Z_{+}\overline{\lambda}_{+}(\mbox{V+A})+Z_{\sigma}\overline{\lambda}_{\sigma}(\mbox{S--P})+Z_{\rm VA}\overline{\lambda}_{\rm VA}[2(\mbox{V--A})^{\rm adj}+(1/\mbox{N}_{\rm c})(\mbox{V--A})]\Big{]}.\]

Here \(A_{\mu}=A^{z}T^{z}\), \(F_{\mu\nu}=F_{\mu\nu}^{z}T^{z}\) denote the nonabelian gauge potential and field strength, and \(B_{\mu}\), \(B_{\mu\nu}\) the abelian ones. The gauge-field kinetic terms are accompanied by the wave-function renormalizations \(Z_{\rm F}\) and \(Z_{\rm B}\), the fermionic one by \(Z_{\psi}\). Similarly, \(Z_{1}\), \(Z_{1}^{\rm B}\), \(Z_{+}\), \(Z_{-}\), \(Z_{\sigma}\) and \(Z_{\rm VA}\) are the vertex renormalizations, whereas \(\overline{e}\), \(\overline{g}\), \(\overline{\lambda}\) denote the bare couplings. The renormalized (dimensionless) couplings are defined as

\[g=\frac{\overline{g}Z_{1}}{Z_{\rm F}^{1/2}Z_{\psi}},\quad e=\frac{\overline{e}Z_{1}^{\rm B}}{Z_{\rm B}^{1/2}Z_{\psi}},\quad\hat{\lambda}=\frac{Z_{\lambda}k^{2}\overline{\lambda}}{Z_{\psi}^{2}}. \tag{4}\]

We work in the Landau gauge, \(\alpha=\alpha_{B}=0\), which is known to be a fixed point of the renormalization group [12] and has the additional advantage that the fermionic wave function is not renormalized in our truncation, such that we can choose \(Z_{\psi}=1\). The four-fermion interactions can be classified according to their color and flavor structure. Color and flavor singlets are

\[\mbox{(V--A)} = (\overline{\psi}\gamma_{\mu}\psi)^{2}+(\overline{\psi}\gamma_{\mu}\gamma_{5}\psi)^{2},\]
\[\mbox{(V+A)} = (\overline{\psi}\gamma_{\mu}\psi)^{2}-(\overline{\psi}\gamma_{\mu}\gamma_{5}\psi)^{2},\]

where color \((i,j,\dots)\) and flavor \((a,b,\dots)\) indices are contracted pairwise, e.g., \((\overline{\psi}\psi)\equiv(\overline{\psi}^{a}_{i}\psi^{a}_{i})\).
The operators of non-trivial color or flavor structure are denoted by

\[\mbox{(S--P)} = (\overline{\psi}^{a}\psi^{b})^{2}-(\overline{\psi}^{a}\gamma_{5}\psi^{b})^{2}\equiv(\overline{\psi}^{a}_{i}\psi^{b}_{i})^{2}-(\overline{\psi}^{a}_{i}\gamma_{5}\psi^{b}_{i})^{2},\]
\[\mbox{(V--A)}^{\rm adj} = (\overline{\psi}\gamma_{\mu}T^{z}\psi)^{2}+(\overline{\psi}\gamma_{\mu}\gamma_{5}T^{z}\psi)^{2}, \tag{5}\]

where we define \((\overline{\psi}^{a}\psi^{b})^{2}\equiv\overline{\psi}^{a}\psi^{b}\overline{\psi}^{b}\psi^{a}\), etc., and \((T^{z})_{ij}\) denotes the generators of the gauge group in the fundamental representation. Owing to Fierz identities, the last four-fermion structure in Eq. (3) can also be written as

\[[2(\mbox{V--A})^{\rm adj}+(1/\mbox{N}_{\rm c})(\mbox{V--A})]=(\overline{\psi}_{i}\gamma_{\mu}\psi_{j})^{2}+(\overline{\psi}_{i}\gamma_{\mu}\gamma_{5}\psi_{j})^{2}\equiv(\overline{\psi}^{a}\gamma_{\mu}\psi^{b})^{2}+(\overline{\psi}^{a}\gamma_{\mu}\gamma_{5}\psi^{b})^{2}. \tag{6}\]

The set of four different fermionic self-interactions occurring in Eq. (3) forms a complete basis. Any other point-like four-fermion interaction invariant under the \(\mbox{SU(N}_{\rm c})\times\mbox{U(1)}\) gauge symmetry and the \(\mbox{SU(N}_{\rm f})_{\rm L}\times\mbox{SU(N}_{\rm f})_{\rm R}\) flavor symmetry can be decomposed into these basis elements by means of Fierz transformations. Evaluating the RG flow equation in the limit of point-like interactions and projecting the result onto our truncation (3), we obtain the following \(\beta\) functions for the dimensionless couplings \(\hat{\lambda}\):

\[\partial_{t}\hat{\lambda}_{-}=\beta_{-} = 2\hat{\lambda}_{-}-4v_{4}l_{1,1}^{\rm(FB),4}\left[\left(\frac{3}{\mbox{N}_{\rm c}}g^{2}-3e^{2}\right)\hat{\lambda}_{-}-3g^{2}\hat{\lambda}_{\rm VA}\right]\]
\[-\frac{1}{8}v_{4}l_{1,2}^{\rm(FB),4}\left[\frac{12+9\mbox{N}_{\rm c}^{2}}{\mbox{N}_{\rm c}^{2}}g^{4}+48e^{4}-\frac{48}{\mbox{N}_{\rm c}}e^{2}g^{2}\right]\]
\[-8v_{4}l_{1}^{\rm(F),4}\Big{\{}-\mbox{N}_{\rm f}\mbox{N}_{\rm c}(\hat{\lambda}_{-}^{2}+\hat{\lambda}_{+}^{2})+\hat{\lambda}_{-}^{2}-2(\mbox{N}_{\rm c}+\mbox{N}_{\rm f})\hat{\lambda}_{-}\hat{\lambda}_{\rm VA}+\mbox{N}_{\rm f}\hat{\lambda}_{+}\hat{\lambda}_{\sigma}+2\hat{\lambda}_{\rm VA}^{2}\Big{\}},\]

\[\partial_{t}\hat{\lambda}_{+}=\beta_{+} = 2\hat{\lambda}_{+}-4v_{4}l_{1,1}^{\rm(FB),4}\left[\left(-\frac{3}{\mbox{N}_{\rm c}}g^{2}+3e^{2}\right)\hat{\lambda}_{+}\right]\]
\[-\frac{1}{8}v_{4}l_{1,2}^{\rm(FB),4}\left[-\frac{12+3\mbox{N}_{\rm c}^{2}}{\mbox{N}_{\rm c}^{2}}g^{4}-48e^{4}+\frac{48}{\mbox{N}_{\rm c}}e^{2}g^{2}\right]\]
\[-8v_{4}l_{1}^{\rm(F),4}\Big{\{}-3\hat{\lambda}_{+}^{2}-2\mbox{N}_{\rm c}\mbox{N}_{\rm f}\hat{\lambda}_{-}\hat{\lambda}_{+}-2\hat{\lambda}_{+}(\hat{\lambda}_{-}+(\mbox{N}_{\rm c}+\mbox{N}_{\rm f})\hat{\lambda}_{\rm VA})+\mbox{N}_{\rm f}\hat{\lambda}_{-}\hat{\lambda}_{\sigma}+\hat{\lambda}_{\rm VA}\hat{\lambda}_{\sigma}+\frac{1}{4}\hat{\lambda}_{\sigma}^{2}\Big{\}},\]

\[\partial_{t}\hat{\lambda}_{\sigma}=\beta_{\sigma} = 2\hat{\lambda}_{\sigma}-4v_{4}l_{1,1}^{\rm(FB),4}\left[(6C_{2}({\rm N}_{\rm c})\,g^{2}+3e^{2})\hat{\lambda}_{\sigma}-6g^{2}\hat{\lambda}_{+}\right]\]
\[-\frac{1}{4}v_{4}l_{1,2}^{\rm(FB),4}\Big{[}-\frac{24-9{\rm N}_{\rm c}^{2}}{{\rm N}_{\rm c}}\,g^{4}+48e^{2}g^{2}\Big{]}\]
\[-8v_{4}l_{1}^{\rm(F),4}\Big{\{}2{\rm N}_{\rm c}\hat{\lambda}_{\sigma}^{2}-2\hat{\lambda}_{-}\hat{\lambda}_{\sigma}-2{\rm N}_{\rm f}\hat{\lambda}_{\sigma}\hat{\lambda}_{\rm VA}-6\hat{\lambda}_{+}\hat{\lambda}_{\sigma}\Big{\}},\]

\[\partial_{t}\hat{\lambda}_{\rm VA}=\beta_{\rm VA} = 2\hat{\lambda}_{\rm VA}-4v_{4}l_{1,1}^{\rm(FB),4}\left[\left(\frac{3}{{\rm N}_{\rm c}}g^{2}-3e^{2}\right)\hat{\lambda}_{\rm VA}-3g^{2}\hat{\lambda}_{-}\right]\]
\[-\frac{1}{8}v_{4}l_{1,2}^{\rm(FB),4}\left[-\frac{24-3{\rm N}_{\rm c}^{2}}{{\rm N}_{\rm c}}g^{4}+48e^{2}g^{2}\right]\]
\[-8v_{4}l_{1}^{\rm(F),4}\Big{\{}-({\rm N}_{\rm c}+{\rm N}_{\rm f})\hat{\lambda}_{\rm VA}^{2}+4\hat{\lambda}_{-}\hat{\lambda}_{\rm VA}-\frac{1}{4}{\rm N}_{\rm f}\hat{\lambda}_{\sigma}^{2}\Big{\}}.\]

Here \(C_{2}({\rm N}_{\rm c})=({\rm N}_{\rm c}^{2}-1)/(2{\rm N}_{\rm c})\) is a Casimir operator of the gauge group, and \(v_{4}=1/(32\pi^{2})\). The \(l\) quantities are positive constants of \({\cal O}(1)\) that characterize the regulator dependence [13]. For better readability, we have written all gauge-coupling-dependent terms in square brackets, whereas fermionic self-interactions are grouped inside curly brackets. For small gauge couplings, the running of \(g\) and \(e\) is governed by their standard perturbative \(\beta\) functions; this will be discussed in more detail in Sect. 4.

## 3 Fixed points for purely fermionic models

A fixed point corresponds to a simultaneous zero of all \(\beta\) functions. Each fixed point defines a (nonperturbatively) renormalizable theory within our truncation, and each fixed point furthermore constitutes its own universality class. Let us first analyze the RG flow of the fermionic couplings \(\hat{\lambda}_{i}\) given above in the simplified context of vanishing gauge couplings, \(g^{2},e^{2}\to 0\). Then, in the point-like approximation, the \(\beta\) functions are all of the same form:

\[\partial_{t}\hat{\lambda}_{i}=\beta_{i}(\hat{\lambda})=(d-2)\hat{\lambda}_{i}+\hat{\lambda}_{k}A_{i}^{kl}\hat{\lambda}_{l}, \tag{11}\]

where the \(A_{i}^{kl}\) are constant matrices, symmetric in the upper indices, and we formally generalize the right-hand side to \(d\)-dimensional spacetime. For fixed \(\hat{\lambda}_{j\neq i}\), the \(\beta\) function for \(\hat{\lambda}_{i}\) corresponds graphically to a parabola, such that the fixed-point equation \(\partial_{t}\hat{\lambda}_{i}=\beta_{i}(\hat{\lambda}_{j\neq i};\hat{\lambda}_{i})=0\) has exactly two (possibly complex or degenerate) solutions for \(\hat{\lambda}_{i}\). Since our truncation Eq. (3) has 4 fermionic couplings, we expect up to \(2^{4}=16\) different fixed points. A computer-algebraic inspection of Eq. (11) indeed reveals these 16 fixed points, all of which are real and therefore physically acceptable3; we do not find degeneracies.

Footnote 3: This is a particularity of the \({\rm SU(N_{f})_{L}}\times{\rm SU(N_{f})_{R}}\) flavor symmetry. For instance, a less restrictive \({\rm SU(N_{f})_{V}}\) flavor symmetry does allow for 6 different four-fermion couplings, implying \(2^{6}=64\) fixed points, only 44 of which are real and physically acceptable.

Consequently, in this framework each fixed point serves as a possibility to define a fundamental renormalizable quantum system in which the continuum limit can be taken.
The triviality problem is thus absent; however, the hierarchy problem remains in these purely fermionic systems in the point-like limit. This can be seen by studying the stability matrix \(B_{i}{}^{j}\), defined by the derivatives of the \(\beta\) functions at the fixed point,

\[B_{i}{}^{j}=\left.\frac{\partial\beta_{i}}{\partial\hat{\lambda}_{j}}\right|_{\hat{\lambda}_{*}}=(d-2)\delta_{i}^{j}+2\hat{\lambda}_{*k}A_{i}^{kj}. \tag{12}\]

The eigenvalues of the stability matrix and the associated eigenvectors \(v\) govern the evolution of small deviations from the fixed point according to \(Bv=-\Theta v\) (we denote by \(\Theta\) the negative of the eigenvalues). In turn, this determines the running of the couplings in the fixed-point regime, \((\hat{\lambda}-\hat{\lambda}_{*})\sim(\Lambda_{\rm UV}/k)^{\Theta}\). Therefore, a large positive \(\Theta\) implies a rapid growth of the couplings towards the infrared and corresponds to a strongly relevant RG direction, indicating an IR instability such as a scalar mass term. In the present problem, a large positive exponent \(\Theta=d-2\) exists for each fixed point \(\hat{\lambda}_{*}\). This can be seen from the following argument: let \(\hat{\lambda}_{*}\) be a solution of the fixed-point equation,

\[\partial_{t}\hat{\lambda}_{*i}=(d-2)\hat{\lambda}_{*i}+\hat{\lambda}_{*k}A_{i}^{kl}\hat{\lambda}_{*l}=0\quad\forall i. \tag{13}\]

Acting with \(B_{i}{}^{j}\) on \(\hat{\lambda}_{*j}\neq 0\) we have

\[B_{i}{}^{j}\,\hat{\lambda}_{*j} = (d-2)\hat{\lambda}_{*i}+2\hat{\lambda}_{*j}A_{i}^{jk}\hat{\lambda}_{*k} = -(d-2)\hat{\lambda}_{*i}+2\left((d-2)\hat{\lambda}_{*i}+\hat{\lambda}_{*j}A_{i}^{jk}\hat{\lambda}_{*k}\right) = -(d-2)\hat{\lambda}_{*i}, \tag{14}\]

where we have used the fixed-point equation in the last step. This shows that \(\hat{\lambda}_{*}\) itself is an eigenvector of the stability matrix with eigenvalue \(-(d-2)\), hence \(\Theta=d-2\). For \(d=4\) we therefore have at least one "quadratically renormalizing" relevant direction in the fixed-point regime, which is the same RG behavior as for a system with a fundamental Higgs scalar. A very precise choice of the initial conditions at the high scale (GUT scale) is required in order to separate the high scale from the symmetry-breaking scale (Fermi scale). The second lesson to be learned from Eq. (14) is that there cannot be a purely infrared attractive fixed point besides \(\hat{\lambda}=0\) in this truncation. In other words, all remaining 15 fixed points can be used for defining an interacting continuum limit, which requires at least one IR repulsive (or marginal) direction. The fixed points can be classified further according to their number of relevant and irrelevant directions, constituting the number of physical parameters. In our case, the number of fixed points with \(j\) relevant directions turns out to be equal to the binomial coefficient \(\binom{4}{j}\). There is one IR stable fixed point (only irrelevant directions), which is the Gaussian fixed point, and also exactly one fixed point with only relevant directions, which is IR repulsive in all directions in this truncation. All other fixed points have relevant and irrelevant directions.
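These statements are easy to verify numerically. The sketch below instantiates Eqs. (11) and (12) with a randomly chosen, purely illustrative coefficient tensor \(A_{i}^{kl}\) (not the actual coefficients of our truncation), finds the real fixed points by Newton iteration, and checks that every non-Gaussian root is an eigenvector of the stability matrix with \(\Theta=d-2\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4, 4
# illustrative symmetric coefficient tensor A_i^{kl}; NOT the paper's values
A = rng.normal(size=(n, n, n))
A = 0.5 * (A + A.transpose(0, 2, 1))          # symmetrise in the upper indices

def beta(lam):
    """beta_i = (d-2) lam_i + lam_k A_i^{kl} lam_l   (Eq. (11))"""
    return (d - 2) * lam + np.einsum('k,ikl,l->i', lam, A, lam)

def stability(lam):
    """B_i^j = (d-2) delta_i^j + 2 lam_k A_i^{kj}    (Eq. (12))"""
    return (d - 2) * np.eye(n) + 2 * np.einsum('k,ikj->ij', lam, A)

# Newton iteration from many random seeds; collect distinct real roots
roots = []
for _ in range(2000):
    lam = rng.normal(scale=2.0, size=n)
    try:
        for _ in range(100):
            lam = lam - np.linalg.solve(stability(lam), beta(lam))
    except np.linalg.LinAlgError:
        continue
    if np.allclose(beta(lam), 0, atol=1e-9) and \
       not any(np.allclose(lam, r, atol=1e-6) for r in roots):
        roots.append(lam)

print(f"{len(roots)} real fixed points found (at most 2**{n} = {2**n})")
for lam in roots:
    if np.linalg.norm(lam) > 1e-8:            # skip the Gaussian fixed point
        # lam_* is an eigenvector of B with eigenvalue -(d-2), i.e. Theta = d-2
        assert np.allclose(stability(lam) @ lam, -(d - 2) * lam, atol=1e-6)
```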
This is illustrated in the left panel of Fig. 1 using the subsystem \(\hat{\lambda}_{+}\), \(\hat{\lambda}_{-}\) as a simple example (further parameters: \(d=4\), \(\rm N_{c}=3\), \(\rm N_{f}=6\), and linear cutoff functions [14] (\(l_{1}^{(F),4}=1/2,l_{1,1}^{(FB),4}=1,l_{1,2}^{(FB),4}=3/2\))). The right panel of Fig. 1 displays all 16 fixed points projected onto the \((\hat{\lambda}_{+},\hat{\lambda}_{-},\hat{\lambda}_{\sigma})\) subspace.

## 4 Gauge interactions

Let us now include the gauge interactions in our considerations. For this, we need the running of the gauge couplings, which we derive from the fermion-gauge-field vertex \(\Gamma_{\mu}\). For instance, in the abelian case, the general form of this vertex is

\[\Gamma_{\overline{\psi}\psi B}=\overline{e}\int_{q_{1},q_{2}}\overline{\psi}(q_{2})\,\Gamma_{\mu}(q_{2},q_{1})\,B_{\mu}(q_{2}-q_{1})\,\psi(q_{1}), \tag{15}\]

from which we define the renormalized coupling \(e\) in the Thompson limit,

\[\lim_{p\to 0}\Gamma_{\mu}(q,q+p)=Z_{\psi}\,\frac{e}{\overline{e}}\,Z_{\rm B}^{1/2}\,\gamma_{\mu}=Z_{1}^{\rm B}\gamma_{\mu}, \tag{16}\]

and similarly for the nonabelian coupling \(g\). (Here we have included the fermion wave-function renormalization \(Z_{\psi}\) for full generality, but, as already mentioned, in the Landau gauge we have \(Z_{\psi}=1\) in our truncation.) Within our truncation, the flow equation for the vertex results in the \(\beta\) functions

\[\partial_{t}g^{2} = \eta_{\rm F}\,g^{2}-8v_{4}l_{1}^{\rm(F),4}\Big{[}\hat{\lambda}_{\sigma}-2\hat{\lambda}_{-}+{\rm N}_{\rm f}\hat{\lambda}_{\sigma}-2{\rm N}_{\rm f}\hat{\lambda}_{\rm VA}\Big{]}g^{2}, \tag{17}\]
\[\partial_{t}e^{2} = \eta_{\rm B}\,e^{2}-8v_{4}l_{1}^{\rm(F),4}\Big{[}\hat{\lambda}_{\sigma}-2\hat{\lambda}_{-}-2{\rm N}_{\rm f}{\rm N}_{\rm c}(\hat{\lambda}_{+}+\hat{\lambda}_{-})+({\rm N}_{\rm c}\hat{\lambda}_{\sigma}^{\rm c}+{\rm N}_{\rm f}\hat{\lambda}_{\sigma})-2({\rm N}_{\rm c}+{\rm N}_{\rm f})\hat{\lambda}_{\rm VA}\Big{]}e^{2}, \tag{18}\]

where the standard one-loop coefficients are contained in the anomalous dimensions of the gauge fields,

\[\eta_{\rm F}=-\frac{1}{Z_{\rm F}}\partial_{t}Z_{\rm F}=-4v_{4}b_{0}^{g^{2}}\,g^{2},\qquad b_{0}^{g^{2}}=\frac{11}{3}\,{\rm N}_{\rm c}-\frac{2}{3}\,{\rm N}_{\rm f},\]
\[\eta_{\rm B}=-\frac{1}{Z_{\rm B}}\partial_{t}Z_{\rm B}=-4v_{4}b_{0}^{e^{2}}\,e^{2},\qquad b_{0}^{e^{2}}=-\frac{4}{3}\,{\rm N}_{\rm f}{\rm N}_{\rm c}. \tag{19}\]

The additional \(\hat{\lambda}\)-dependent terms in Eqs. (17), (18) arise from diagrams of the form shown in Fig. 2. At first sight, it seems that these additional terms offer a new and rich structure for the possible UV behavior of the system. For instance, for a given non-Gaussian fixed point \(\hat{\lambda}_{*}\), these terms are dominant in the small-gauge-coupling limit. If the factor in square brackets in these terms is positive for a given \(\hat{\lambda}_{*}\), the corresponding gauge coupling seems to be asymptotically free. This would offer a solution to the triviality problem of the U(1) sector. Moreover, the interplay of both terms on the right-hand side of these \(\beta\) functions can produce non-Gaussian fixed points in the gauge couplings.
This would not only be a possible solution to triviality, but could also lead to a sizeable reduction of the critical exponents, and thereby of the hierarchy problem, by circumventing the argument of Eq. (14). However, these hopes can unfortunately not be confirmed, as quantum field theory tells us in the following interesting way. Equations (17) and (18) are not the only source of information about the vertex that we can obtain from the flow-equation formalism. The requirement of gauge invariance is encoded in a constraint for the effective action: the Ward-Takahashi identity (WTI). Since the regulator of the flow equation formalism is not manifestly gauge invariant, it also contributes to the constraint, leading to a modified Ward-Takahashi identity (mWTI) [15]. For simplicity, let us analyze the mWTI arising from abelian gauge symmetry.

Figure 2: Correction to the gauge-boson-fermion vertex (fermions solid with arrows, gauge boson wiggled) in the presence of a four-fermion interaction.

Employing the generator \({\cal G}\) of an infinitesimal gauge transformation (in momentum space),

\[{\cal G}(p)=ip_{\mu}\frac{\delta}{\delta B_{\mu}(-p)}-i\overline{e}\left[\int_{q}\psi(q)\frac{\delta}{\delta\psi(q-p)}-\int_{q}\overline{\psi}(q)\frac{\delta}{\delta\overline{\psi}(q+p)}\right], \tag{20}\]

the mWTI can be written as

\[{\cal G}(p)\,\Gamma_{k}-\frac{i}{\alpha_{B}}\,p^{2}p_{\mu}B_{\mu}(p)=-i\overline{e}\,{\rm tr}\int_{q}\big{[}R_{k}^{\psi}(q+p)\,G_{\psi\overline{\psi}}(q+p,q)-R_{k}^{\psi\,T}(q+p)\,G_{\overline{\psi}^{T}\psi^{T}}(q+p,q)\big{]}, \tag{21}\]

where, e.g., \(G_{\psi\overline{\psi}}=(\Gamma^{(2)}+R_{k})_{\psi\overline{\psi}}^{-1}\) denotes the \((\psi\overline{\psi})\) component of the propagator, and the \(T\) symbol indicates transposition in Dirac space. For vanishing regulator, \(R_{k}\to 0\), the right-hand side of the mWTI vanishes and we rediscover the standard WTI. However, since we are dealing with a perturbatively nonrenormalizable theory, the presence of the regulator is essential in order to define the quantum theory, i.e., the Schwinger functional. Therefore, the right-hand side of the mWTI should not only be viewed as a technical complication, but as important information about the structure of the theory. Owing to its similar structure, the mWTI can be evaluated with the same technology as the flow equation; the right-hand side, e.g., is again of one-loop form with an exact propagator in the loop. Information about the running gauge coupling can be found by projecting the mWTI onto the operator \(\sim\overline{\psi}\psi\), yielding

\[e=\overline{e}\,Z_{\rm B}^{-1/2}\,\left(1-2v_{4}l_{1}^{({\rm F}),4}\sum c_{i}^{e}\hat{\lambda}_{i}\right), \tag{22}\]

i.e.,

\[\frac{Z_{1}^{\rm B}}{Z_{\psi}}=1-2v_{4}l_{1}^{({\rm F}),4}\sum c_{i}^{e}\hat{\lambda}_{i}, \tag{23}\]

where we have abbreviated the combination

\[\sum c_{i}^{e}\hat{\lambda}_{i}:=\hat{\lambda}_{\sigma}-2\hat{\lambda}_{-}-2{\rm N}_{\rm f}{\rm N}_{\rm c}(\hat{\lambda}_{+}+\hat{\lambda}_{-})+({\rm N}_{\rm c}\hat{\lambda}_{\sigma}^{\rm c}+{\rm N}_{\rm f}\hat{\lambda}_{\sigma})-2({\rm N}_{\rm c}+{\rm N}_{\rm f})\hat{\lambda}_{\rm VA}, \tag{24}\]

as it also occurs in Eq. (18).
In ordinary QED, the term \(\sim\hat{\lambda}\) is not present and we end up with the standard result that the running of the coupling corresponds to the running of the gauge-field wave-function renormalization4, \(e=\overline{e}Z_{\rm B}^{-1/2}\). The presence of the fermionic interactions replaces this simple relation by Eq. (22).

Footnote 4: With Eq. (23), this corresponds to the standard result \(Z_{1}^{\rm B}=Z_{\psi}\). We remark that from Eq. (23) alone only the ratio of \(Z_{\psi}\) and \(Z_{1}^{\rm B}\) is accessible (and physically relevant). Still, in approximations it can make a difference whether \(Z_{\psi}\) or \(Z_{1}^{\rm B}\) is running. Consistent with our choice of the Landau gauge, we set \(Z_{\psi}=1\).

By differentiation, we obtain the \(\beta\) function

\[\partial_{t}e^{2}=\eta_{\rm B}\,e^{2}-4v_{4}l_{1}^{({\rm F}),4}\,\frac{e^{2}}{1-2v_{4}l_{1}^{({\rm F}),4}\sum c_{i}^{e}\hat{\lambda}_{i}}\,\partial_{t}\sum c_{i}^{e}\hat{\lambda}_{i}. \tag{25}\]

Here \(\partial_{t}\sum c_{i}^{e}\hat{\lambda}_{i}\) means that we have to insert the complete \(\beta\) functions for the \(\hat{\lambda}_{i}\) into Eq. (25). Since \(\partial_{t}\sum c_{i}^{e}\hat{\lambda}_{i}=2\sum c_{i}^{e}\hat{\lambda}_{i}+\dots\) in a small-coupling expansion, we rediscover the result of the flow equation (18) in this limit. In other words, the flow equation and the mWTI agree within the order of our truncation, as they should. The mWTI, however, contains considerably more information, since the non-monomial appearance of the \(\hat{\lambda}\)'s suggests that the mWTI represents a resummation of a larger class of diagrams. This has important consequences for the RG behavior of the gauge coupling, if compared to the possibilities offered by the simpler form of Eq. (18). At the non-Gaussian \(\hat{\lambda}\) fixed points, the \(\hat{\lambda}\) flow vanishes, such that \(\partial_{t}\sum c_{i}^{e}\hat{\lambda}_{i}\to 0\). This implies that the running of the gauge coupling in the vicinity of its Gaussian fixed point is determined by the standard one-loop term \(\eta_{\rm B}e^{2}\) alone. The fermionic self-interactions contribute only at higher order.5 The abelian gauge coupling can therefore not be rendered asymptotically free by the influence of the four-fermion interactions. By a similar argument, the additional term in Eq. (25) does not facilitate the existence of non-Gaussian fixed points, \(e_{*}\neq 0\), in the abelian gauge coupling within this truncation. Again, if the \(\hat{\lambda}\)'s approach a fixed point, \(\partial_{t}\sum c_{i}^{e}\hat{\lambda}_{i}\to 0\) and we are left with the standard running only, for which no non-Gaussian fixed point is known.

Footnote 5: A careful analysis reveals that the additional term in Eq. (25) is of the same order as the two-loop running of the gauge coupling for small perturbations around the fixed point.

To summarize, the four-fermion contributions to the running of the abelian gauge coupling are not capable of solving the triviality problem in our truncation. Let us finally comment on the RG flow of the running SU(N\({}_{\rm c}\)) coupling \(g^{2}\). Although the mWTI for the nonabelian sector has a more complex structure, the result for the four-fermion contribution to the running gauge coupling has the same form as in Eq. (25).
Near the Gaussian fixed point, the standard one-loop running holds and the nonabelian gauge sector remains asymptotically free. Actually, this finding is in line with another argument: we could equally well define the running of the nonabelian gauge coupling by the three-gauge-boson vertex. At one-loop order, there is no contribution to the renormalization of this vertex from the four-fermion couplings \(\hat{\lambda}\). As a consequence, the usual one-loop \(\beta\) function governs the flow of this three-gluon vertex.

## 5 Spontaneous symmetry breaking

In order to illustrate the flow of the system from a non-Gaussian fixed point towards the regime of spontaneous symmetry breaking, we select a fixed point with only one relevant direction (for instance, the fixed point on the right-hand side or left-hand side in the left panel of Fig. 1). At the high scale \(\Lambda_{\rm UV}\), we specify initial conditions such that all couplings in our truncation are roughly in the vicinity of their fixed-point values; no fine-tuning is needed for this step. If we now compute the RG flow towards the infrared, the system either relaxes towards the Gaussian fixed point with \(\hat{\lambda}\to 0\), as long as the gauge couplings are weak, or rapidly approaches the regime of spontaneous symmetry breaking (SSB), which is signaled by diverging four-fermion couplings. In the former case, the system is in the universality class of \(\rm SU(N_{c})\times U(1)\) gauge theory with \(\rm N_{f}\) chiral fermions; in the latter case we are dealing with a universality class characterized by SSB. Finally, we fine-tune only one of the parameters of the initial conditions such that the system is very close to the phase boundary on the SSB side. This fine-tuning corresponds effectively to a determination of the scale \(k_{\rm SSB}\) at which the system runs into the SSB regime.
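The role of this fine-tuning can be illustrated with a single-coupling toy flow, \(\partial_{t}\lambda=2\lambda-c\lambda^{2}\), whose non-Gaussian fixed point \(\lambda_{*}=2/c\) has \(\Theta=2\): the initial deviation \(\delta\) from \(\lambda_{*}\) sets the scale at which the coupling diverges. This is only a sketch of the mechanism, not the coupled flow of our truncation; the values of \(c\) and \(\Lambda_{\rm UV}\) are arbitrary:

```python
import numpy as np

def ssb_scale(delta, Lambda_UV=1e9, c=1.0, dt=-1e-4):
    """Integrate the toy flow  d lambda/dt = 2*lambda - c*lambda**2
    (t = ln(k/Lambda_UV)) from lambda* + delta toward the IR and
    return the scale k at which lambda diverges (onset of SSB)."""
    lam, t = 2.0 / c + delta, 0.0
    while lam < 1e8:                 # 'divergence' threshold
        lam += dt * (2.0 * lam - c * lam ** 2)
        t += dt
        if t < -60:                  # no SSB reached within the flow
            return None
    return Lambda_UV * np.exp(t)

# the closer the start to the fixed point, the lower the SSB scale:
for delta in (1e-1, 1e-4, 1e-7, 1e-13):
    print(f"delta = {delta:.0e}  ->  k_SSB ~ {ssb_scale(delta):.3g} GeV")
```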
In this sense, the increase of \\(\\hat{\\lambda}_{\\sigma}\\) is associated with a decrease of the scalar mass term, which eventually drops below zero and thus gives rise to SSB. Using partial bosonization, one can moreover study the nature of the condensate, whereas in the present purely fermionic description we cannot distinguish the behavior of the various fermionic interaction channels. Owing to the nonlinear interplay of the flow equations for the couplings, they all diverge simultaneously in this truncation. Even the gauge couplings can be affected, as is the case in our example for the abelian gauge coupling (see right panel of Fig. 3). This is clearly an artefact of the present truncation, and we expect a well-controllable flow once the threshold behavior is accounted for by using the techniques of [16, 17].

Figure 3: Running couplings as a function of the renormalization scale \\(k=10^{t_{10}}\\) GeV. Left panel: flow of the four-fermion couplings \\(\\hat{\\lambda}_{\\sigma}\\), \\(\\hat{\\lambda}_{\\rm VA}\\), \\(\\hat{\\lambda}_{-}\\), \\(\\hat{\\lambda}_{+}\\) (from top to bottom); the divergence of the couplings near the Fermi scale \\(10^{t_{10}}\\sim 10^{2}\\) signals the approach to SSB. Right panel: flow of the gauge couplings; while the \\(\\hat{\\lambda}_{i}\\) are close to their fixed-point values, the gauge couplings run according to standard perturbation theory. The rapid behavior near the Fermi scale is likely to be an artefact of the truncation (dashed lines).

## 6 Conclusions

In this letter, we have analyzed the RG behavior of a standard-model-like system with purely fermionic matter content. The Higgs sector is replaced by fermionic self-interactions which are responsible for spontaneous symmetry breaking. Whereas the low-energy side of our models is reminiscent, and in the spirit, of top-quark-condensation scenarios, we here concentrate on the UV behavior of such systems, investigating their renormalizability and RG stability in the framework of RG flow equations. Within our truncation of point-like four-fermion interactions, we have identified a large number of non-Gaussian fixed points in the fermionic interactions, each of which constitutes an independent universality class with the given gauge and flavor symmetries at hand. From the structure of the flow equations, we deduce that, for \\(n\\) independent four-fermion interactions, there exist up to \\(2^{n}\\) fixed points. Each fixed point can serve to define an interacting continuum limit. We find no sign of triviality in our truncation in this fermionic Higgs sector. Contrary to the standard scalar Higgs sector, the fermionic systems have the potential to be valid down to arbitrarily small distances. Furthermore, we have analyzed the RG stability of the model towards the IR. As a result, all fixed points with non-vanishing four-fermion interaction exhibit one RG relevant direction with critical exponent 2, i.e., renormalizing quadratically, similar to a fundamental scalar. Therefore, our fermionic models suffer from the same hierarchy problem as the conventional Higgs sector. These findings are not modified by the inclusion of weakly coupled gauge interactions which only induce small anomalous dimensions. In turn, the fermionic self-interactions do not modify the leading-order running of the gauge couplings, as we have shown with the help of modified Ward-Takahashi identities. This inhibits a solution of the triviality problem in the abelian gauge sector at the Gaussian fixed point.
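The fixed-point counting and the quadratically renormalizing relevant direction quoted above can be made concrete in a toy setting. The sketch below is our own illustration, not the truncation of this letter: it assumes two dimensionless four-fermion couplings with purely quadratic \\(\\beta\\) functions, weakly mixed by a parameter `g`, locates the \\(2^{2}=4\\) fixed points numerically, and reads off the critical exponents \\(\\Theta\\) (minus the eigenvalues of the stability matrix).

```python
import numpy as np
from scipy.optimize import fsolve

# Toy model (an assumption, not the paper's truncation): two four-fermion
# couplings with quadratic beta functions, weakly mixed by g.  For n such
# couplings one finds up to 2^n fixed points; at the non-Gaussian ones the
# stability matrix exhibits relevant directions with Theta close to 2.

g = 0.1

def beta(lam):
    l1, l2 = lam
    return np.array([2*l1 - l1**2 - g*l1*l2,
                     2*l2 - l2**2 - g*l1*l2])

def stability_exponents(lam, h=1e-6):
    # Theta_i = minus the eigenvalues of dbeta_i/dlam_j at the fixed point
    jac = np.array([(beta(lam + h*e) - beta(lam - h*e)) / (2*h)
                    for e in np.eye(2)]).T
    return np.linalg.eigvals(-jac).real

for seed in [(0, 0), (2, 0), (0, 2), (2, 2)]:      # 2^2 = 4 candidate corners
    fp = fsolve(beta, np.array(seed, float))
    print(np.round(fp, 3), "Theta =", np.round(stability_exponents(fp), 3))
```

In this caricature all directions at the Gaussian fixed point are irrelevant (\\(\\Theta=-2\\)), while every non-Gaussian fixed point carries relevant directions with \\(\\Theta\\approx 2\\), i.e., quadratic renormalization, mirroring the hierarchy problem discussed above.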
In summary, we find that a fermionic Higgs sector has the potential to be a truly renormalizable theory, removing the triviality problem of a fundamental scalar Higgs. However, we have not been able to identify a consistent resolution of the hierarchy problem within our truncation. In a realistic scenario comprising the full standard-model phenomenology, the number of physical parameters in our model would be comparable to that of the standard model. The precise number will depend on the particular choice of the non-Gaussian \\(\\lambda_{*}\\) fixed point and its number of RG-relevant directions. Whether the number of physical parameters can even drop below that of the standard model then depends on the universality class associated with the chosen fixed point. A determination of the physically acceptable universality classes requires an analysis of their symmetry-breaking properties. This is beyond the scope of the present truncation in which all couplings diverge at the symmetry-breaking scale. These low-energy properties can, however, easily be derived from an analysis of the condensing bilinear fermion channels using partial bosonization under the flow as described in [16, 17]. At this point, let us comment on the stability of our results under a change of the truncation. We have checked that higher fermionic self-interactions do not modify our results in the point-like limit. They neither remove the \\(\\lambda_{*}\\) fixed points nor represent nonperturbatively renormalizable couplings themselves (the latter would increase the number of physical parameters). Concerning the gauge sector, we have studied a number of non-minimal fermion-gauge-field couplings. None of them turns out to influence the leading-order running of the gauge couplings in the weak-coupling regime; the argument proceeds similarly to the one given in Sect. 4 based on the mWTI. The UV fixed points that we find for the fermionic couplings may be viewed as a generalization of those UV fixed points that are known from large-\\(\\rm N_{f}\\) studies of simple four-fermion interactions in \\(d=2+1\\) dimensions [18]. In the latter case, four-fermion interactions can be renormalized order by order in a \\(1/\\rm N_{f}\\) expansion despite its seeming perturbative nonrenormalizability. Our truncation exhibits these fixed points in all dimensions \\(d>2\\). However, we suspect that at least far beyond four dimensions the fixed points may be an artefact of the truncation, since here even the induced Yukawa couplings between fermions and composite bosonic fluctuations become RG irrelevant by power-counting arguments. In \\(d=4\\), these Yukawa couplings are RG marginal by power-counting; hence \\(d=4\\) appears to be the critical dimension. Large-\\(\\rm N_{f}\\) arguments are indeed in favor of logarithmic triviality in \\(d=4\\), a picture that receives some support from lattice simulations with staggered fermions for a simple NJL model [19]. However, the large-\\(\\rm N_{f}\\) approximation neglects the anomalous dimension of the fermion which, even if tiny, can have a large effect by changing marginal-irrelevant into marginal-relevant operators. Since gauge interactions also contribute to the fermionic anomalous dimension, purely fermionic lattice studies cannot be conclusive for the models considered in the present work. Therefore, \\(d=4\\) lattice investigations with four-fermion as well as gauge interactions would be desirable and may indeed be accessible with recently developed algorithms [20].
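The logarithmic triviality invoked here is the familiar Landau-pole phenomenon of the one-loop abelian flow. A minimal numerical illustration (our own, with an assumed QED-like one-loop coefficient rather than anything computed in this letter) is:

```python
import numpy as np

# Toy illustration, not this paper's computation: with the hatted couplings
# at a fixed point, Eq. (25) collapses to the one-loop flow
#   d(e^2)/dt = b0 * e^4,
# where we assume a QED-like coefficient b0 = N_f/(6 pi^2) purely for
# illustration.  The analytic solution e^2(t) = e2_ir/(1 - b0*e2_ir*t)
# exhibits the Landau pole behind the triviality problem.

def landau_pole_scale(alpha_ir=1.0 / 137.0, n_f=6):
    e2_ir = 4.0 * np.pi * alpha_ir       # e^2 at the infrared scale
    b0 = n_f / (6.0 * np.pi**2)          # assumed one-loop coefficient
    t_pole = 1.0 / (b0 * e2_ir)          # RG "time" at which e^2 diverges
    return np.exp(t_pole)                # Lambda_pole / k_IR = e^{t_pole}

print(f"Lambda_pole / k_IR ~ 10^{np.log10(landau_pole_scale()):.0f}")
```

A finite cutoff below the pole scale is unavoidable in this toy flow, which is the sense in which the continuum limit of the abelian coupling is trivial at the Gaussian fixed point.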
In order to pursue this question further within the flow equation framework, we have to go beyond the point-like limit. If the qualitative picture developed so far in this simple truncation turns out to be incomplete, we expect that strong modifications might arise from the full momentum structure of the interaction. It may well be that our non-Gaussian \\(\\lambda_{*}\\) fixed points are only a projection of a more general momentum-dependent interaction onto the point-like limit. If so, it is natural to speculate that a strongly momentum-dependent wave-function renormalization of the fermions could even induce a large fermionic anomalous dimension. Then it would be conceivable that the latter stabilizes the fermionic flow towards the infrared. This would pave the way for a possible solution of the hierarchy problem in models with purely fermionic matter content. Concerning the hierarchy problem, another speculative alternative based on the assumed existence of a non-Gaussian fixed point for the gauge couplings has been investigated in the appendix. Whether or not one of these scenarios can indeed be realized is subject to further nonperturbative studies for which RG flow equations offer an appropriate framework.

## Acknowledgment

The authors are grateful to C.S. Fischer and J.M. Pawlowski for useful discussions. H.G. and J.J. acknowledge financial support by the Deutsche Forschungsgemeinschaft under contract Gi 328/1-2.

## Appendix A Non-Gaussian gauge systems

Up to this point, our analysis reveals that a construction of standard-model-like theories based on non-Gaussian \\(\\hat{\\lambda}\\) fixed points and Gaussian gauge fixed points still suffers from a hierarchy problem in the four-fermion sector as well as triviality of the abelian gauge sector - only triviality in the Higgs-like sector would be avoided in this scenario. In the following, we would like to demonstrate that the existence of a non-Gaussian fixed point in the abelian gauge coupling has the potential to solve both problems simultaneously. Although the \\(\\beta\\) function for the abelian coupling \\(e^{2}\\) as derived from the mWTI does not furnish a non-Gaussian fixed point via the direct four-fermion contribution, such a fixed point might be induced by the strong-coupling behavior of the gauge interactions or a combination of strong gauge and four-fermion interactions. This would manifest itself in a second zero of the anomalous dimension \\(\\eta_{\\rm B}(e_{*})=0\\) at nonzero \\(e_{*}\\neq 0\\). The search for such a non-Gaussian fixed point has a long tradition in the literature. Lattice studies of non-compact abelian gauge systems using staggered fermions [21] have not found such a fixed point; on the contrary, numerical data is compatible with logarithmic triviality. However, lattice results for field theories with both gauge and four-fermion couplings are not yet available, although perhaps accessible with recently developed algorithms [20]. Indications for the existence of such a fixed point in a gauged NJL model have been collected using Dyson-Schwinger equations [22]. Of course, the UV behavior of systems as complex as the ones considered in this work is a completely open problem. Therefore, let us from now on assume that such a fixed point \\(e_{*}>0\\) in the abelian gauge coupling exists, and study its consequences. Since the \\(\\beta_{e^{2}}\\) function for \\(e^{2}\\) is positive for small coupling, this non-Gaussian fixed point is necessarily UV stable, which solves the triviality problem.
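To make the assumption explicit, here is a minimal sketch of such a hypothetical \\(\\beta\\) function with a second zero of \\(\\eta_{\\rm B}\\); both the functional form and the numbers are pure assumptions chosen for illustration, not derived results:

```python
import numpy as np

# Speculative toy (the assumption made explicit): suppose the anomalous
# dimension has a second zero, eta_B(e2) = b*e2*(1 - e2/e2_star), so that
# beta_{e^2} = eta_B * e^2 vanishes at e2_star.  Since beta > 0 below
# e2_star and beta < 0 above it, the flow towards the UV (increasing t)
# is attracted to e2_star: the fixed point is UV stable.

b, e2_star = 0.05, 3.0

def beta_e2(e2):
    return b * e2**2 * (1.0 - e2 / e2_star)

for e2 in (0.5, 5.0):                    # start below and above e2_star
    for _ in range(200000):              # crude Euler flow towards the UV
        e2 += 1e-2 * beta_e2(e2)
    print(f"e2(UV) -> {e2:.4f}  (e2_star = {e2_star})")
```

Starting either below or above the assumed fixed point, the UV flow converges to \\(e_{*}^{2}\\), which is the behavior invoked in the text.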
In order to study the hierarchy problem, we have to compute the dependence of the critical exponents for the \\(\\hat{\\lambda}\\)'s on \\(e_{*}\\). For a first impression, we perform the same numerical analysis as in Sect. 3, but now with nonzero \\(e=e_{*}\\). Thereby, we neglect any possible influence of the contributions of the unknown \\(\\beta_{e^{2}}\\) function on the off-diagonal elements of the stability matrix; this is justified if the \\(\\hat{\\lambda}\\) couplings do not play a dominant role for \\(\\beta_{e^{2}}\\) near \\(e=e_{*}\\). In Fig. 4, we display the dependence of the critical exponents on the hypothetical value of the fixed point \\(e_{*}\\). Each line in these plots represents the maximal (most unstable) eigenvalue for a given non-Gaussian fixed point. As can be seen from the left panel of Fig. 4, there is one non-Gaussian fixed point whose maximal eigenvalue decreases with increasing \\(e_{*}\\) and can become close to zero if \\(\\alpha_{*}\\lesssim\\alpha_{\\rm cr}\\simeq 1.3\\). At \\(\\alpha_{\\rm cr}\\), this fixed point annihilates with the Gaussian fixed point and disappears from the physically acceptable set. In general, Fig. 4 does not represent the complete situation, since four-fermion couplings and gauge couplings mix nontrivially at the non-Gaussian fixed points. Technically speaking, we should not neglect the \\(\\partial(\\partial_{t}e^{2})/\\partial\\lambda\\) contributions to the stability matrix, nor \\(\\partial(\\partial_{t}(e^{2},g^{2}))/\\partial(e^{2},g^{2})\\). Whereas we can read off the former from the mWTI (25), nothing is known about the latter near the speculative fixed point \\(e_{*}\\). Since we do not want to introduce a fine-tuning problem through the backdoor in this sector, it is natural to assume that these entries in the stability matrix are small. If this assumption is not valid, a large maximal eigenvalue will probably arise from this sector, and the present speculation is meaningless. Therefore, we simply set the pure gauge entries to zero and study the evolution of the eigenvalues including the \\(\\partial(\\partial_{t}e^{2})/\\partial\\lambda\\) terms. The result is shown on the right panel of Fig. 4. Obviously, the mixing between the couplings exerts a strong quantitative influence on the eigenvalues. We find a whole range of possible \\(e_{*}\\) fixed-point values for which the maximal eigenvalue of the stability matrix is small. The existence of such a non-Gaussian gauge fixed point therefore has the potential to stabilize the flow towards the IR significantly. The running towards the Fermi scale would then proceed with a small power or even almost logarithmically as for a system with marginal couplings only. In such a scenario, the hierarchy problem would be absent.

Figure 4: Maximal eigenvalue \\(\\Theta_{\\rm max}\\) of the various fixed points of the four-fermion coupling depending on the assumed fixed-point value \\(\\alpha_{e*}=\\frac{e_{*}^{2}}{4\\pi}\\) for the abelian gauge coupling (at \\(\\alpha_{g*}=\\frac{g_{*}^{2}}{4\\pi}=0\\)). The left panel shows the eigenvalue of the submatrix in the pure four-fermion sector while the right panel depicts the eigenvalue for the full matrix but with the unknown matrix elements in the pure gauge sector equal to zero.

## References

* [1] C. Wetterich, Phys. Lett. B **140** (1984) 215.
* [2] H. P. Nilles, Phys. Rept. **110**, 1 (1984); S. Weinberg, "The Quantum Theory Of Fields. Vol. 3: Supersymmetry," Cambridge, UK, (2000).
* [3] E. Farhi and L. Susskind, Phys. Rept. **74**, 277 (1981); K. Lane, "Two lectures on technicolor," hep-ph/0202255.
* [4] N. Arkani-Hamed, A. G. Cohen and H. Georgi, Phys. Lett. B **513** (2001) 232 [hep-ph/0105239]; N. Arkani-Hamed, A. G. Cohen, T. Gregoire and J. G. Wacker, JHEP **0208** (2002) 020 [hep-ph/0202089]; N. Arkani-Hamed, A. G. Cohen, E. Katz, A. E. Nelson, T. Gregoire and J. G. Wacker, JHEP **0208** (2002) 021 [hep-ph/0206020]; I. Low, W. Skiba and D. Smith, Phys. Rev. D **66** (2002) 072001 [hep-ph/0207243]; M. Schmaltz, Nucl. Phys. Proc. Suppl. **117** (2003) 40 [hep-ph/0210415].
* [5] Y. Nambu and G. Jona-Lasinio, Phys. Rev. **122**, 345 (1961); _ibid._**124**, 246 (1961).
* [6] V. A. Miransky, M. Tanabashi and K. Yamawaki, Phys. Lett. B **221**, 177 (1989); Mod. Phys. Lett. A **4**, 1043 (1989); W. A. Bardeen, C. T. Hill and M. Lindner, Phys. Rev. D **41**, 1647 (1990); for a review, see G. Cvetic, Rev. Mod. Phys. **71**, 513 (1999) [hep-ph/9702381].
* [7] S. Weinberg, in _C76-07-23.1_ HUTP-76/160, Erice Subnucl. Phys., 1, (1976).
* [8] C. Wetterich, Phys. Lett. B **104** (1981) 269; S. Bornholdt and C. Wetterich, Phys. Lett. B **282**, 399 (1992).
* [9] J. M. Kosterlitz and D. J. Thouless, J. Phys. C **6** (1973) 1181.
* [10] G. Von Gersdorff and C. Wetterich, Phys. Rev. B **64**, 054513 (2001) [hep-th/0008114].
* [11] C. Wetterich, Phys. Lett. B **301**, 90 (1993); Nucl. Phys. B **352**, 529 (1991); Z. Phys. C **48**, 693 (1990).
* [12] U. Ellwanger, M. Hirsch and A. Weber, Z. Phys. C **69**, 687 (1996) [hep-th/9506019]; D. F. Litim and J. M. Pawlowski, Phys. Lett. B **435**, 181 (1998) [hep-th/9802064].
* [13] J. Berges, N. Tetradis and C. Wetterich, Phys. Rept. **363**, 223 (2002) [hep-ph/0005122].
* [14] D. F. Litim, Phys. Lett. B **486**, 92 (2000) [hep-th/0005245]; Phys. Rev. D **64**, 105007 (2001) [hep-th/0103195].
* [15] U. Ellwanger, Phys. Lett. B **335**, 364 (1994) [hep-th/9402077]; M. Reuter and C. Wetterich, Nucl. Phys. B **417**, 181 (1994); F. Freire, D. F. Litim and J. M. Pawlowski, Phys. Lett. B **495**, 256 (2000) [hep-th/0009110].
* [16] H. Gies and C. Wetterich, Phys. Rev. D **65**, 065001 (2002) [hep-th/0107221]; Acta Phys. Slov. **52**, 215 (2002) [hep-ph/0205226]; hep-th/0209183.
* [17] J. Jackel and C. Wetterich, Phys. Rev. D **68**, 025020 (2003) [hep-ph/0207094]; J. Jackel, hep-ph/0309090.
* [18] B. Rosenstein, B. J. Warr and S. H. Park, Phys. Rev. Lett. **62**, 1433 (1989); K. Gawedzki and A. Kupiainen, Phys. Rev. Lett. **55**, 363 (1985); C. de Calan, P. A. Faria da Veiga, J. Magnen and R. Seneor, Phys. Rev. Lett. **66**, 3233 (1991).
* [19] S. Kim, A. Kocic and J. B. Kogut, Nucl. Phys. B **429**, 407 (1994) [hep-lat/9402016].
* [20] S. Kim, J. B. Kogut and M. P. Lombardo, Phys. Rev. D **65**, 054015 (2002) [hep-lat/0112009].
* [21] M. Gockeler, R. Horsley, V. Linke, P. Rakow, G. Schierholz and H. Stuben, Phys. Rev. Lett. **80**, 4119 (1998) [hep-th/9712244].
* [22] M. Reenders, Phys. Rev. D **62**, 025001 (2000) [hep-th/9908158].
HD-THEP-03-60

**Towards a renormalizable standard model without fundamental Higgs scalar**

Holger Gies\\({}^{1}\\), Joerg Jaeckel\\({}^{2}\\) and Christof Wetterich\\({}^{3}\\)

_Institut fur theoretische Physik, Universitat Heidelberg, Philosophenweg 16, D-69120 Heidelberg, Germany_

\\({}^{1}\\) _E-mail: [email protected]_ \\({}^{2}\\) _E-mail: [email protected]_ \\({}^{3}\\) _E-mail: [email protected]_

We investigate the possibility of constructing a renormalizable standard model with purely fermionic matter content. The Higgs scalar is replaced by point-like fermionic self-interactions with couplings growing large at the Fermi scale. An analysis of the UV behavior in the point-like approximation reveals a variety of non-Gaussian fixed points for the fermion couplings. If real, such fixed points would imply nonperturbative renormalizability and evade triviality of the Higgs sector. For point-like fermionic self-interactions and weak gauge couplings, one encounters a hierarchy problem similar to the one for a fundamental Higgs scalar.
# Stellar matter in the Quark-Meson-Coupling Model with neutrino trapping

P.K. Panda and D.P. Menezes

Depto de Fisica-CFM, Universidade Federal de Santa Catarina, CP. 476, 88040-900 Florianopolis-SC, Brazil

C. Providencia

Centro de Fisica Teorica - Dep. de Fisica, Universidade de Coimbra, P-3004 - 516 Coimbra, Portugal

###### PACS number(s): 95.30.Tg, 21.65.+f, 12.39.Ba, 21.30.-x

During the early stage of a proto-neutron star, neutrinos get trapped in it when their mean-free path is smaller than the star radius. The presence of neutrinos generally gives rise to a stiffer EoS. This may have important consequences on the evolution of the star, namely, it can happen that during the cooling process the star decays into a low-mass black hole [1]. It is worth mentioning that over 20 years ago the importance of neutrino trapping was already pointed out in [2], where the authors claimed that another important effect of including trapped neutrinos is that the collapse is gentler in its presence than it would be without it. In the present paper we are interested in building the neutrino trapped EoS for mixed matter of quark and hadron phases. We employ the QMC model (QMC) [3; 4; 5] in order to describe the hadron phase. For the quark phase we use two distinct models, the unpaired quark model (UQM), which is given by the simple MIT bag model [6], and the color-flavor locked phase (CFL) [7; 8], in which quarks are paired near the Fermi surface forming a superconducting phase [9]. In a previous work [10] we have used the same formalism in order to study the properties of hybrid stars at \\(T=0\\) MeV. In this work, we verify the effect of including trapped neutrinos in the same spirit as done in [11], where we have seen that the EoS changes considerably and the mixed phase appears at higher energy densities than in the EoS built without the inclusion of neutrinos, in accordance with what was seen in other works, as in [1] for instance. We have also verified that trapping keeps the electron population high so that dense matter contains more protons (and, depending on the parametrization used, other positively charged particles) than matter without neutrinos. This fact was also discussed in [12]. Proto-neutron stars with a certain baryonic mass at birth keep this mass during their evolution until the final neutrino free star because most of their matter is accreted in the early stages after birth. Hence, stars with trapped neutrinos and a baryonic mass larger than the corresponding ones after deleptonization collapse to a black hole. In [11] we have described hybrid stars within a non-linear Walecka model (NLWM) [13] including the baryonic octet and a phase transition to a quark phase. Within this description the maximum baryonic masses supported by neutrino trapped EoSs are larger than the corresponding ones found in neutrino free EoSs, a fact which also occurs in other EoSs which include the strangeness degree of freedom [1; 12]. In what follows we investigate whether this behavior is also present in the framework of the QMC model. Because of its importance in understanding the evolution process in a star, we calculate the baryonic masses for the hybrid stars studied in the present work. We also compare the properties of the stars obtained within the UQM model and the CFL model. We perform all the calculations at \\(T=0\\) MeV although the neutrino trapped phase occurs for temperatures in the interior of the star which can vary between 20 and 40 MeV [1; 11].
However, it was shown in [11] that the effect of trapping is much stronger than the finite temperature effect. Therefore, we believe that the main conclusions drawn in the present work are still valid for a finite temperature calculation. On the other hand, the calculation with the CFL phase will give us only an upper limit, since at finite temperature there is a phase transition from the color superconducting state to a normal phase, which, according to [16], is a second order transition with a BCS critical temperature \\(T_{c}\\sim 0.57\\Delta\\), \\(\\Delta\\) being the gap parameter. Since most of the analytical calculations and formulae used in the present work have already been given in [10], they will be omitted here. As mentioned above, we have used the QMC model with the inclusion of hyperons for the hadron phase. In this model, the nucleon in the nuclear medium is assumed to be a static spherical MIT bag in which quarks interact with the scalar and vector fields, \\(\\sigma\\), \\(\\omega\\) and \\(\\rho\\), and these fields are treated as classical fields in the mean-field approximation. The quark field, \\(\\psi_{q}(x)\\), inside the bag then satisfies the equation of motion: \\[\\left[i\\,\\partial\\!\\!\\!/-(m_{q}^{0}-g_{\\sigma}^{q}\\,\\sigma)-g_{ \\omega}^{q}\\,\\omega\\,\\gamma^{0}+\\frac{1}{2}g_{\\rho}^{q}\\tau_{z}\\rho_{03}\\right] \\psi_{q}(x)=0\\] \\[q=u,d,s, \\tag{1}\\] where \\(m_{q}^{0}\\) is the current quark mass, and \\(g_{\\sigma}^{q}\\), \\(g_{\\omega}^{q}\\) and \\(g_{\\rho}^{q}\\) denote the quark-meson coupling constants. After enforcing the boundary condition at the bag surface, the transcendental equation for the ground state solution of the quark (in the \\(s\\)-state) is \\(j_{0}(x_{q})=\\beta_{q}j_{1}(x_{q})\\), which determines the bag eigenfrequency \\(x_{q}\\), where \\(\\beta_{q}=\\sqrt{(\\Omega_{q}-R_{B}m_{q}^{*})/(\\Omega_{q}+R_{B}m_{q}^{*})}\\), with \\(\\Omega_{q}=(x_{q}^{2}+R_{B}^{2}m_{q}^{*2})^{1/2}\\); \\(m_{q}^{*}=m_{q}^{0}-g_{\\sigma}^{q}\\sigma_{0}\\) is the effective quark mass. The energy of the nucleon bag is \\(M_{B}^{*}=3\\frac{\\Omega_{q}}{R_{B}}-\\frac{Z_{B}}{R_{B}}+\\frac{4}{3}\\pi R_{B}^ {3}B_{B}\\), where \\(B_{B}\\) is the bag constant and \\(Z_{B}\\) parameterizes the sum of the center-of-mass motion and the gluonic corrections. The bag radius, \\(R_{B}\\), is then obtained through the stability condition for the bag. An interesting fact related to the QMC model is that the bag volume changes in the medium through the mean value of the \\(\\sigma\\)-field. This implies that the bag eigenvalues are modified as well. The onset of hyperons depends on the conditions of chemical equilibrium and charge neutrality discussed below and also on the meson-hyperon coupling constants, for which we have chosen the hyperon coupling constants constrained by the binding of the \\(\\Lambda\\) hyperon in nuclear matter, hyper-nuclear levels and neutron star masses (\\(x_{\\sigma}=0.7\\) and \\(x_{\\omega}=x_{\\rho}=0.783\\)) and assumed that the couplings to the \\(\\Sigma\\) and \\(\\Xi\\) are equal to those of the \\(\\Lambda\\) hyperon [13]. The leptons either in the hadron or in the quark phase are considered as free Fermi gases and they do not couple with the hadrons, with the mesons or with the quarks.
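A minimal numerical sketch of the transcendental bag condition just quoted may be helpful: the function `eigen_freq` below finds the lowest root \\(x_{q}\\) of \\(j_{0}(x)=\\beta_{q}j_{1}(x)\\). The values of \\(m_{q}^{*}\\) and \\(R_{B}\\) are illustrative placeholders (in fm\\({}^{-1}\\) and fm with \\(\\hbar=c=1\\)), not fitted QMC parameters.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import spherical_jn

# Sketch of the in-medium bag eigenvalue problem: find the lowest root x_q
# of j0(x) = beta_q(x) * j1(x), with beta_q built from the effective quark
# mass m* and the bag radius R_B (both illustrative placeholder values).

def eigen_freq(m_star, R_B):
    def f(x):
        omega = np.sqrt(x**2 + (R_B * m_star)**2)
        beta_q = np.sqrt((omega - R_B * m_star) / (omega + R_B * m_star))
        return spherical_jn(0, x) - beta_q * spherical_jn(1, x)
    return brentq(f, 0.1, np.pi)        # lowest s-state root lies below pi

print(eigen_freq(m_star=0.0, R_B=0.8))  # massless limit: x_q ~ 2.043
print(eigen_freq(m_star=1.5, R_B=0.8))  # a reduced effective mass in medium
```

The massless limit reproduces the well-known lowest MIT-bag mode \\(x_{q}\\simeq 2.04\\); lowering \\(m_{q}^{*}\\) in the medium shifts the eigenfrequency, which is the mechanism by which the \\(\\sigma\\) mean field modifies the bag eigenvalues.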
The condition of chemical equilibrium is imposed through the two independent chemical potentials for neutrons \\(\\mu_{n}\\) and electrons \\(\\mu_{e}\\) and it implies that the chemical potential of baryon \\(B_{i}\\) is \\(\\mu_{B_{i}}=Q_{i}^{B}\\mu_{n}-Q_{i}^{e}\\mu_{e}\\), where \\(Q_{i}^{e}\\) and \\(Q_{i}^{B}\\) are, respectively, the electric and baryonic charge of baryon or quark \\(i\\). Charge neutrality implies \\(\\sum_{B_{i}}Q_{i}^{e}\\rho_{B_{i}}+\\sum_{l}q_{l}\\rho_{l}=0\\) where \\(q_{l}\\) stands for the electric charges of leptons. If neutrino trapping is imposed on the system, the beta equilibrium condition is altered to \\(\\mu_{B_{i}}=Q_{i}^{B}\\mu_{n}-Q_{i}^{e}(\\mu_{e}-\\mu_{\\nu_{e}})\\). In this work we have not included trapped muon neutrinos. Because of the imposition of trapping, the total leptonic number is conserved, i.e., \\(Y_{L}=Y_{e}+Y_{\\nu_{e}}=0.4\\). For the quark phase we consider two models. First of all, we take the quark matter EoS as in [6], in which \\(u\\), \\(d\\) and \\(s\\) quark degrees of freedom are included in addition to electrons. Up and down quark masses are set to zero and the strange quark mass is taken to be either 150 or 200 MeV so that we are able to check the effect of the \\(s\\)-quark mass. In chemical equilibrium \\(\\mu_{d}=\\mu_{s}=\\mu_{u}+\\mu_{e}\\). In terms of neutron and electric charge chemical potentials \\(\\mu_{n}\\) and \\(\\mu_{e}\\), one has \\(\\mu_{u}=\\frac{1}{3}\\mu_{n}-\\frac{2}{3}\\mu_{e},\\quad\\mu_{d}=\\frac{1}{3}\\mu_{n }+\\frac{1}{3}\\mu_{e},\\quad\\mu_{s}=\\frac{1}{3}\\mu_{n}+\\frac{1}{3}\\mu_{e}.\\) In the quark matter EoS, a term \\(+B\\) is inserted in the energy density and a term \\(-B\\) in the pressure; this term simulates confinement. For the Bag model, we have taken B\\({}^{1/4}\\)=190 and 200 MeV. In the EoS taking into consideration a CFL quark paired phase, the quark matter is treated as a Fermi sea of free quarks with an additional contribution to the pressure arising from the formation of the CFL condensates. The densities of the three types of quarks are identical and the electron density is zero, as shown in [7]. The expressions for the energy density and pressure depend on a gap parameter \\(\\Delta\\), which is taken to be 100 MeV [8]. Once the hadron and quark phases are well established, we have to construct the mixed phase, imposing charge neutrality globally, \\(\\chi\\,\\rho_{c}^{QP}+(1-\\chi)\\rho_{c}^{HP}+\\rho_{c}^{l}=0\\), where \\(\\rho_{c}^{iP}\\) is the charge density of the phase \\(i\\), \\(\\chi\\) is the volume fraction occupied by the quark phase and \\(\\rho_{c}^{l}\\) is the electric charge density of leptons. We consider a uniform background of leptons in the mixed phase since the Coulomb interaction has not been taken into account. According to the Gibbs conditions for phase coexistence, the baryon chemical potentials, temperatures and pressures have to be identical in both phases, i.e., \\(\\mu_{HP,n}=\\mu_{QP,n}=\\mu_{n},\\quad\\mu_{HP,e}=\\mu_{QP,e}=\\mu_{e},\\quad T_{HP}=T _{QP},\\quad P_{HP}(\\mu_{n},\\mu_{e},T)=P_{QP}(\\mu_{n},\\mu_{e},T)\\), reflecting the needs of chemical, thermal and mechanical equilibrium, respectively. In fig. 1, the EoSs obtained with both quark models are displayed for different \\(B\\) values and two strange quark masses with neutrino trapping (\\(Y_{L}=0.4\\)).
For the sake of comparison we have also plotted one EoS with no neutrinos (\\(Y_{\\nu_{e}}=0\\)), and included the EoS obtained within a NLWM formalism for the hadronic phase [11] and a UQM with \\(m_{s}=150\\) MeV and \\(B^{1/4}=190\\) MeV, with and without neutrino trapping. As already discussed in [1; 12], the EoSs are harder if neutrino trapping is imposed, independently of the model used. A larger \\(s\\)-quark mass and a larger \\(B\\) parameter make the quark EoSs harder in the mixed phase, a fact that manifests itself in the maximum mass stellar configuration. The main differences between the QMC formalism and the NLWM are: a) the NLWM EoS is harder at low densities and softer at intermediate densities due to the presence of hyperons; b) the transition to a pure quark phase occurs at lower densities in the NLWM. This behavior has consequences on the properties of the corresponding families of stars. In order to better understand the importance of the neutrinos when neutrino trapping is imposed, in fig. 2 the fraction of neutrinos is shown. The behavior encountered for the neutrino fraction if the UQM is used resembles the one shown in [11]: the population of neutrinos decreases in the hadron phase and, contrary to [11], only increases in the mixed and quark phases. The highest yields are of the order of 0.16. In this model, for smaller values of the strange quark mass, the neutrino population at high densities is greater. This occurs because a smaller \\(s\\)-quark mass gives larger \\(s\\)-quark densities and, therefore, a smaller electron fraction. A fixed lepton fraction then implies a larger \\(\\nu_{e}\\) fraction. If the CFL is chosen for the quark phase, the population of neutrinos is higher in the mixed and quark phases. This is due to the fact that in the CFL phase no electrons are present since the numbers of \\(u\\), \\(d\\) and \\(s\\) quarks are equal; therefore, the lepton fraction is kept constant only by the presence of neutrinos. This implies a much greater neutrino flux during deleptonization. However, in a self-consistent calculation at finite temperature we do not expect such a strong effect since pairing will be weaker. The amount of neutrinos depends on the fraction of charged hyperons and quarks present in each phase, which are determined by the models used and consequently by the bag pressure and the strange quark mass. In the present approach with the CFL phase or UQM for \\(B^{1/4}=190\\) MeV the onset of hyperons, if any, occurs for \\(\\rho>10\\rho_{0}\\). With the UQM and \\(B^{1/4}=200\\) MeV the hyperon onset occurs at \\(\\sim 4\\rho_{0}\\) but the charged hyperon fraction, namely \\(\\Sigma^{-}\\), is never larger than \\(0.004\\). Hybrid neutron star profiles can be obtained from all the EoS studied by solving the Tolman-Oppenheimer-Volkoff equations. Even at finite temperature the conditions of hydrostatic equilibrium are nearly fulfilled [15]. In Table 1 we show the values obtained for the maximum gravitational and baryonic masses and radii of neutron stars as a function of the central density for the EoSs studied in this work. For a fixed bag constant, the stellar and baryonic masses of the most massive stable stars are higher for higher strange quark masses in both models. For the UQM, these critical masses are always higher than for the CFL model, because it also corresponds to a harder EoS. The radii and central energy density depend on the model and on the strange quark mass.
The radii values are larger for the UQM model and the central energy density are larger for the CFL model, again due to the fact that the UQM EoSs are harder. Comparing the results of table 1 with the ones presented in [10], where neutrino trapping was not considered, one can see that the inclusion of trapping makes the gravitational masses reasonably higher. The same conclusion was drawn in [11] where the calculations were performed with different relativistic models. In Table 1 we have also included properties of maximum mass stars obtained within the NLWM for the hadronic phase [11]. Two conclusions are in order: the maximum baryonic masses obtained within the NLWM are larger and the difference of maximum masses for trapped and untrapped matter is smaller for the QMC (\\(\\sim 0.2M_{\\odot}\\)) than for the NLWM (\\(\\sim 0.4M_{\\odot}\\)). This means that the number of stars that would decay into a black hole is much smaller in the QMC model and is probably due to the fact that no hyperons are formed in the interior of stars obtained with QMC for \\(m_{s}=150\\) MeV and \\(B^{1/4}=190\\) MeV, contrary to the NLWM case. If the quark phase is a CFL state the baryonic mass difference between the neutrino rich and neutrino poor stars is greater than in the UQM (\\(\\sim 0.35M_{\\odot}\\)). This is understood because the greater flux of neutrinos carries away more energy. However for a finite temperature calculation we expect a smaller effect. A similar analysis was done in [14], where the authors used a derivative coupling model with hyperons for the hadron phase and the Bag model for the quark matter. They did not obtain any mixed phase for bag values larger than \\(B^{1/4}=190\\) MeV in contrast with the present work. Moreover, the maximum masses shown in [14] (\\(\\sim 1.6M_{\\odot}\\)) and the differences between maximum masses in neutrino rich and neutrino poor stellar matter are lower than in our calculations. In fig. 3 we display the baryonic masses versus the gravitational masses for both models. It is seen that neutrino trapped EoSs give rise to greater gravitational masses for the same baryonic mass. The mass difference reflects the binding energy released during the deleptonization and cooling stage [1]. The maximum baryonic mass of the neutrino rich EoS are larger than the neutrino poor. This will lead to black hole formation during the deleptonization period for stars with baryonic masses greater than the maximum baryonic mass of the neutrino poor EoS. This behavior has been encountered in other EoS which include strange matter, namely hyperon and/or quark matter [11; 12]. In summary, we have investigated the effects of neutrino trapping in the properties of neutron stars within the QMC framework, including the possibility of hyperon formation and a transition to an unpaired quark phase or a CFL phase. We have concluded that within the QMC model with hyperons for the hadronic matter, either the hyperons only occur at very high densities or in very small amounts at lower densities (e.g. UQM). Another important point is that the maximum mass of a neutrino rich neutron star decreases after neutrino diffusion, leading to the formation of a low-mass black hole. This mass reduction is smaller for a quark phase described within an unpaired quark phase than a CFL phase. If the quark phase is in a CFL state a large fraction of neutrinos is expected in the mixed and quark phases which will carry away more energy as they diffuse out.
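For orientation, a schematic version of the TOV integration used above to obtain the stellar configurations may be useful; the polytropic \\(\\varepsilon(p)\\) below is only a stand-in for the hybrid EoS tables actually employed, so the printed numbers are illustrative.

```python
import numpy as np

# Schematic TOV solver (geometrized units G = c = 1, lengths in km).
# The placeholder polytrope p = K * eps**2 stands in for the tabulated
# hybrid EoS; M_SUN_KM converts the gravitational mass to solar masses.

M_SUN_KM = 1.4766

def eps_of_p(p, K=100.0):           # placeholder polytropic EoS
    return np.sqrt(max(p, 0.0) / K)

def tov(p_c, dr=1e-3):
    r, m, p = dr, 0.0, p_c
    while p > 1e-12 * p_c:
        eps = eps_of_p(p)
        dpdr = -(eps + p) * (m + 4*np.pi*r**3*p) / (r * (r - 2*m))
        dmdr = 4*np.pi*r**2*eps
        p, m, r = p + dr*dpdr, m + dr*dmdr, r + dr
    return r, m / M_SUN_KM          # radius [km], gravitational mass [M_sun]

R, M = tov(p_c=1e-4)                # illustrative central pressure [km^-2]
print(f"R = {R:.1f} km, M = {M:.2f} M_sun")
```

Scanning the central pressure \\(p_{c}\\) traces out a mass-radius curve whose maximum defines the critical masses compared in Table 1.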
We also point out that the amount of neutrinos present in the CFL phase is almost double the amount found in the UQM phase. At finite temperature the effect will not be so strong and a self-consistent finite temperature calculation should be performed. We have also seen that the mass reduction of the maximum mass stars due to neutrino diffusion is smaller within a QMC formalism than in a NLWM formalism. This work was partially supported by CNPq (Brazil), CAPES (Brazil), GRICES (Portugal) under project 100/03 and FEDER/FCT (Portugal) under the project POCTI/35308/FIS/2000.

## References

* (1) M. Prakash, I. Bombaci, M. Prakash, P. J. Ellis, J. M. Lattimer and R. Knorren, Phys. Rep. 280, 1 (1997).
* (2) E.H. Gudmundsson and J.R. Buchler, Astrophys. J. **238**, 717 (1980).
* (3) P. A. M. Guichon, Phys. Lett. **B 200**, 235 (1988).
* (4) K. Saito and A.W. Thomas, Phys. Lett. B **327**, 9 (1994); K. Saito, K. Tsushima, and A.W. Thomas, Phys. Lett. B **406**, 287 (1997); P.K. Panda, A. Mishra, J.M. Eisenberg, W. Greiner, Phys. Rev. C **56**, 3134 (1997).
* (5) P.K. Panda, R. Sahu, C. Das, Phys. Rev. C **60**, 38801 (1999); S. Pal, M. Hanauske, I. Jakout, H. Stocker, and W. Greiner, Phys. Rev. C **60**, 015802 (1999).
* (6) A. Chodos, R.L. Jaffe, K. Johnson, C.B. Thorne and V.F. Weisskopf, Phys. Rev. **D 9** (1974) 3471; E. Farhi and R.L. Jaffe, Phys. Rev. D **30**, 2379 (1984).
* (7) M. Alford, K. Rajagopal, S. Reddy, F. Wilczek, Phys. Rev. D **64**, 074017 (2001).
* (8) M. Alford and S. Reddy, Phys. Rev. D **67**, 074024 (2003).
* (9) D. Bailin and A. Love, Phys. Rep. **107**, 325 (1984); R. Rapp, T. Schafer, E.V. Shuryak, and M. Velkovsky, Phys. Rev. Lett. **81**, 53 (1998); M. Alford, K. Rajagopal, and F. Wilczek, Nucl. Phys. **B537**, 443 (1999).
* (10) P.K. Panda, D.P. Menezes and C. Providencia, Phys. Rev. **C69**, 025207 (2004).
* (11) D.P. Menezes and C. Providencia, Phys. Rev. C (2004), in press, nucl-th/0312050.
* (12) I. Vidana, I. Bombaci, A. Polls, A. Ramos, Astron. Astroph. **399**, 687 (2003).
* (13) N. K. Glendenning, Compact Stars, Springer-Verlag, New-York, 2000.
* (14) M. Prakash, J.R. Cooke and J.M. Lattimer, Phys. Rev. **D52**, 661 (1995).
* (15) A. Burrows and J.M. Lattimer, Astrophys. J. **307**, 178 (1986).
* (16) R. D. Pisarski and D. H. Rischke, Phys. Rev. **D61**, 074017 (2000).

Figure 1: Equation of state obtained with the QMC model plus (a) UQM (b) CFL.

Figure 2: Neutrino fraction for the EoS obtained with the QMC model plus (a) UQM (b) CFL.

Figure 3: Baryonic mass versus gravitational mass obtained from the EoS with QMC plus (a) UQM (b) CFL.
The properties of hybrid stars formed by hadronic and quark matter in \\(\\beta\\)-equilibrium are described by appropriate equations of state (EoS) in the framework of the quark meson coupling (QMC) model. In the present work we include the possibility of trapped neutrinos in the equation of state and obtain the properties of the related hybrid stars. We use the quark meson coupling model for the hadron matter and two possibilities for the quark matter phase, namely, the unpaired quark phase and the color-flavor locked phase. The differences are discussed and a comparison with other relativistic EoSs is made.
# A Didactic Approach to Linear Waves in the Ocean

F. J. Beron-Vera

[email protected] RSMAS/AMP, University of Miami, Miami, FL 33149

November 6, 2021

## I Introduction

The goal of this educational work is to show how the various types of linear waves in the ocean (acoustic, inertia-gravity or Poincare, and planetary or Rossby waves) can be obtained from a general dispersion relation in an approximate (asymptotic) sense. Only knowledge of the theory of partial differential equations, and of basic classical and fluid mechanics, is needed for the reader to understand the material presented here, which could be taught as a special topic in a course of fluid mechanics for physicists. The exposition starts by presenting the general equations of motion for ocean dynamics in Sec. II. This presentation is not intended to be rigorous, but rather conceptual. Accordingly, the equations of motion are simplified as much as possible for didactic purposes. The general dispersion relation for the waves supported by the (inviscid, unforced) linearized dynamics with respect to a quiescent state is then derived in Sec. III. This is done by performing a separation of variables between the vertical direction, on one side, and the horizontal position and time, on the other side. Sec. IV discusses the implications of the most common approximations made in oceanography (namely incompressibility, Boussinesq, hydrostatic, and quasigeostrophic) in the reduction of degrees of freedom (number of independent dynamical fields or prognostic equations) of, and compatible waves with, the linearized governing equations. Particular emphasis is made on this important issue, which is only vaguely covered in standard textbooks (e.g. Refs. 2,3,5). Some problems have been interspersed within the text to help the reader to assimilate the material presented. The solutions to some of these problems are outlined in App. A.

## II General equations of motion

Let \\(\\mathbf{x}:=(x,y)\\) be the horizontal position, i.e. tangential to the Earth's surface, with \\(x\\) and \\(y\\) its eastward and northward components, respectively; let \\(z\\) be the upward coordinate; and let \\(t\\) be time. Unless otherwise stated all variables are functions of \\((\\mathbf{x},z,t)\\) in this paper. The _thermodynamic state_ of the ocean is determined by three variables, typically \\(S\\) (salinity), \\(T\\) (temperature), and \\(p\\) (pressure, referred to one atmosphere). Seawater density, \\(\\rho\\), is a function of these three variables, i.e. \\(\\rho=\\rho(S,T,p)\\), known as the state equation of seawater. In particular, \\[\\rho^{-1}\\mathrm{D}\\rho=\\alpha_{S}\\mathrm{D}S-\\alpha_{T}\\mathrm{D}T+\\alpha_{ p}\\mathrm{D}p. \\tag{1}\\] Here, \\(\\mathrm{D}:=\\partial_{t}+\\mathbf{u}\\cdot\\mathbf{\\nabla}+w\\partial_{z}\\) is the substantial or material derivative, where \\(\\mathbf{u}\\) and \\(w\\) are the horizontal and vertical components of the velocity field, respectively, and \\(\\mathbf{\\nabla}\\) denotes the horizontal gradient; \\(\\alpha_{S}:=\\rho^{-1}\\left(\\partial_{S}\\rho\\right)_{T,p}\\) and \\(\\alpha_{T}:=\\rho^{-1}\\left(\\partial_{T}\\rho\\right)_{S,p}\\) are the haline contraction and thermal expansion coefficients, respectively; and \\(\\alpha_{p}:=\\rho^{-1}\\left(\\partial_{p}\\rho\\right)_{T,S}=\\alpha_{T}\\Gamma+\\rho ^{-1}c_{\\mathrm{s}}^{-2}\\), where \\(\\Gamma\\) is the adiabatic gradient and \\(c_{\\mathrm{s}}\\) is the speed of sound, which characterize the compressibility of seawater.
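As a quick numerical feel for Eq. (1), the sketch below compares the haline, thermal, and pressure contributions to \\(\\rho^{-1}\\mathrm{D}\\rho\\). The coefficient values are rough, illustrative seawater numbers, and the adiabatic-gradient part of \\(\\alpha_{p}\\) is neglected in this toy:

```python
import numpy as np

# Toy illustration of Eq. (1): for rough, illustrative seawater coefficients,
# compare the haline, thermal and pressure contributions to the relative
# density change following a water parcel.

alpha_S = 7.6e-4                 # haline contraction [1/(g/kg)], illustrative
alpha_T = 2.0e-4                 # thermal expansion [1/K], illustrative
c_s, rho = 1500.0, 1025.0        # sound speed [m/s], density [kg/m^3]
alpha_p = 1.0 / (rho * c_s**2)   # compressibility part only (Gamma neglected)

DS, DT, Dp = 0.1, -0.5, 1.0e5    # parcel changes: g/kg, K, Pa (~10 m descent)
terms = np.array([alpha_S * DS, -alpha_T * DT, alpha_p * Dp])
print("haline, thermal, pressure parts of D(rho)/rho:", terms)
```

For these numbers all three contributions are of order \\(10^{-5}\\)-\\(10^{-4}\\), which is why none of them can be discarded a priori in the state equation.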
The _physical state_ of the ocean is determined at every instant by the above three variables \\((S,T,p)\\) and the three components of the velocity field \\((\\mathbf{u},w)\\), i.e. _six_ independent scalar variables. The evolution of these variables is controlled by \\[\\mathrm{D}S =F_{S}, \\tag{2a}\\] \\[\\mathrm{D}T =\\Gamma\\mathrm{D}p+F_{T},\\] (2b) \\[\\mathrm{D}p =-\\alpha_{p}^{-1}\\left(\\mathbf{\\nabla}\\cdot\\mathbf{u}+\\partial_{z}w +\\alpha_{T}F_{T}-\\alpha_{S}F_{S}\\right),\\] (2c) \\[\\mathrm{D}\\mathbf{u} =-\\rho^{-1}\\mathbf{\\nabla}p+\\mathbf{F}_{\\mathbf{u}},\\] (2d) \\[\\mathrm{D}w =-\\rho^{-1}\\partial_{z}p-g+F_{w}. \\tag{2e}\\] In Newton's horizontal equation (2d), the term \\(\\mathbf{F}_{\\mathbf{u}}\\) represents the acceleration due to the horizontal components of the Coriolis and frictional forces. In Newton's vertical equation (2e), the term \\(F_{w}\\) represents the acceleration due to the vertical component of the Coriolis and frictional forces, and \\(g\\) is the (constant) acceleration due to gravity. The term \\(F_{S}\\) in the salinity equation (2a) represents diffusive processes. The term \\(F_{T}\\) in the thermal energy equation (2b), which follows from the first principle of thermodynamics, represents the exchange of heat by conduction and radiation, as well as heating by change of phase, chemical reactions or viscous dissipation. The pressure or continuity equation (2c) follows from (1). **Problem 1**: Investigate why (2d,e) do not include the centrifugal force which would also be needed to describe the dynamics in a noninertial reference frame such as one attached to the Earth. Since adiabatic compression does not have important dynamical effects, in physical oceanography it is commonly neglected. This is accomplished upon introducing the potential temperature \\(\\theta\\), which satisfies \\(\\alpha_{\\theta}\\mathrm{D}\\theta=\\alpha_{T}(\\mathrm{D}T-\\Gamma\\mathrm{D}p)\\), so that \\(\\rho=\\rho(S,\\theta,p)\\) and (1) is consistently replaced by \\[\\rho^{-1}\\mathrm{D}\\rho=\\alpha_{S}\\mathrm{D}S-\\alpha_{\\theta}\\mathrm{D}\\theta+ \\rho^{-1}c_{\\mathrm{s}}^{-2}\\mathrm{D}p, \\tag{3}\\] where here it must be understood that \\(\\alpha_{S}=(\\partial_{S}\\rho)_{\\theta,p}\\) and \\(\\alpha_{\\theta}=(\\partial_{\\theta}\\rho)_{S,p}.\\) Equations (2b,c) are then replaced, respectively, by \\[\\mathrm{D}\\theta =F_{\\theta}, \\tag{4a}\\] \\[\\mathrm{D}p =-\\rho c_{\\mathrm{s}}^{2}\\left(\\mathbf{\\nabla}\\cdot\\mathbf{u}+ \\partial_{z}w+\\alpha_{\\theta}F_{\\theta}-\\alpha_{S}F_{S}\\right). \\tag{4b}\\] As our interest is in the waves sustained by the linearized dynamics, we do not need to consider either diffusive processes or allow the motion to depart from isentropic. Hence, we will set \\(F_{S}\\equiv 0\\equiv F_{\\theta}\\) so that equations (2a) and (4) can be substituted, respectively, by \\[\\mathrm{D}\\zeta =w, \\tag{5a}\\] \\[\\mathrm{D}p =-\\rho c_{\\mathrm{s}}^{2}\\left(\\mathbf{\\nabla}\\cdot\\mathbf{u}+ \\partial_{z}w\\right), \\tag{5b}\\] where \\(\\zeta\\) is the vertical displacement of an isopycnal which is defined such that \\(\\rho=\\rho_{\\mathrm{r}}(z-\\zeta)\\). **Problem 2**: Show that equations (2a) and (4a) certainly lead to (5a) when \\(F_{S}\\equiv 0\\equiv F_{\\theta}\\). We will also neglect frictional effects, so that equations (2d,e) are seen to be nothing but Euler equations of (ideal) fluid mechanics with the addition of the Coriolis force.
The latter will be further considered as due solely to the vertical component of the Earth's rotation. Thus, the following simplified form of equations (2d,e) will be considered: \\[\\mathrm{D}\\mathbf{u} =-\\rho^{-1}\\mathbf{\\nabla}p-f\\mathbf{\\hat{z}}\\times\\mathbf{u}, \\tag{6a}\\] \\[\\mathrm{D}w =-\\rho^{-1}\\partial_{z}p-g. \\tag{6b}\\] Here, \\(\\mathbf{\\hat{z}}\\) is the upward unit vector and \\(f:=2\\Omega\\sin\\vartheta\\), where \\(\\Omega\\) is the (assumed constant) spinning rate of the Earth around its axis and \\(\\vartheta\\) is the geographical latitude, is the Coriolis parameter. For simplicity, we will avoid working in full spherical geometry. Instead, we will consider \\(f=f_{0}+\\beta y\\), where \\(f_{0}:=2\\Omega\\sin\\vartheta_{0}\\) and \\(\\beta:=2\\Omega R^{-1}\\cos\\vartheta_{0}\\) with \\(\\vartheta_{0}\\) a fixed latitude and \\(R\\) the mean radius of the planet, and \\(\\mathbf{\\nabla}=(\\partial_{x},\\partial_{y})\\), which is known as the \\(\\beta\\)-plane approximation. It should remain clear, however, that a consistent \\(\\beta\\)-plane approximation must include some geometric (non-Cartesian) terms.[7] Neither these terms nor those of the Coriolis force due to the horizontal component of the Earth's rotation contribute to add waves to the linearized equations of motion. Neglecting them is thus well justified for the purposes of this paper.

## III Waves of the linearized dynamics

Consider a state of rest (\\(\\mathbf{u}=\\mathbf{0}\\), \\(w=0\\)) characterized by \\(\\mathrm{d}p_{\\mathrm{r}}/\\mathrm{d}z=-\\rho_{\\mathrm{r}}g\\), where \\(p_{\\mathrm{r}}(z)\\) and \\(\\rho_{\\mathrm{r}}(z)\\) are reference profiles of pressure and density, respectively. Equations (4) and (6), linearized with respect to that state, can be written as \\[\\partial_{t}\\zeta^{\\prime} =w^{\\prime}, \\tag{7a}\\] \\[\\partial_{t}p^{\\prime} =-\\rho_{\\mathrm{r}}c_{\\mathrm{s}}^{2}\\left(\\mathbf{\\nabla}\\cdot\\mathbf{u}^{\\prime}+\\partial_{z}^{-}w^{\\prime}\\right),\\] (7b) \\[\\partial_{t}\\mathbf{u}^{\\prime} =-\\rho_{\\mathrm{r}}^{-1}\\mathbf{\\nabla}p^{\\prime}-f\\mathbf{\\hat{z}} \\times\\mathbf{u}^{\\prime},\\] (7c) \\[\\partial_{t}w^{\\prime} =-\\rho_{\\mathrm{r}}^{-1}\\partial_{z}^{+}p^{\\prime}-N^{2}\\zeta^{ \\prime}. \\tag{7d}\\] Here, primed quantities denote perturbations with respect to the state of rest; \\(\\partial_{z}^{\\pm}:=\\partial_{z}\\pm gc_{\\mathrm{s}}^{-2}\\); \\(N^{2}(z):=-g(\\rho_{\\mathrm{r}}^{-1}\\mathrm{d}\\rho_{\\mathrm{r}}/\\mathrm{d}z+gc_{ \\mathrm{s}}^{-2})\\) is the square of the reference Brunt-Vaisala frequency; and \\(c_{\\mathrm{s}}\\) is assumed constant. In addition to the above equations, it is clear that \\[\\partial_{t}S^{\\prime} =-w\\,\\mathrm{d}S_{\\mathrm{r}}/\\mathrm{d}z, \\tag{8a}\\] \\[\\partial_{t}\\theta^{\\prime} =-w\\,\\mathrm{d}\\theta_{\\mathrm{r}}/\\mathrm{d}z, \\tag{8b}\\] where \\(S_{\\mathrm{r}}(z)\\) and \\(\\theta_{\\mathrm{r}}(z)\\) are reference salinity and potential temperature profiles, respectively. **Problem 3**: Work out the linearization of the equations of motion. ### Zero Frequency Mode The linearized dynamics supports a solution with vanishing frequency (\\(\\partial_{t}\\equiv 0\\)) such that \\[\\zeta^{\\prime}\\equiv 0,\\quad p^{\\prime}\\equiv 0,\\quad\\mathbf{u}^{\\prime}\\equiv \\mathbf{0},\\quad w^{\\prime}\\equiv 0, \\tag{9}\\] as it follows from (7), but with \\[S^{\\prime}\\neq 0,\\quad\\theta^{\\prime}\\neq 0, \\tag{10}\\]as can be inferred from (8).
Namely, for this solution the salinity and temperature fields vary without changing the density of the fluid. More precisely, one has, on one hand, \\(\\rho^{\\prime}=\\rho_{\\mathrm{r}}(g^{-1}N^{2}+gc_{\\mathrm{s}}^{-2})\\zeta^{\\prime}\\), and, on the other, \\(\\rho^{\\prime}=\\rho_{\\mathrm{r}}(\\alpha_{S}S^{\\prime}-\\alpha_{\\theta}\\theta^{ \\prime}+\\alpha_{\\mathrm{p}}p^{\\prime})\\), where the \\(\\alpha\\)'s are evaluated at the reference state. By virtue of (9) then it follows that \\[\\alpha_{S}S^{\\prime}-\\alpha_{\\theta}\\theta^{\\prime}=0. \\tag{11}\\] This so-called _buoyancy mode_ describes small scale processes in the ocean such as double diffusion. ### Nonzero Frequency Modes Upon eliminating \\(w\\) between (7a) and (7d), and proposing a separation of variables between \\(z\\), on one side, and \\((\\mathbf{x},t)\\), on the other side, for the horizontal velocity and pressure fields in the form \\[\\mathbf{u}^{\\prime}=\\mathbf{u}^{c}(\\mathbf{x},t)\\,\\partial_{z}^{-}F(z),\\quad p ^{\\prime}=\\rho_{\\mathrm{r}}(z)\\,p^{c}(\\mathbf{x},t)\\,\\partial_{z}^{-}F(z), \\tag{12}\\] it follows that \\[c_{\\mathrm{s}}^{-2}\\partial_{z}^{-}F\\partial_{t}p^{c}+\\left( \\partial_{z}^{-}F\\mathbf{\\nabla}\\cdot\\mathbf{u}^{c}+\\partial_{z}^{-}\\partial_{t}\\zeta^{ \\prime}\\right) =0, \\tag{13a}\\] \\[\\partial_{t}\\mathbf{u}^{c}+f\\mathbf{\\hat{z}}\\times\\mathbf{u}^{c }+\\mathbf{\\nabla}p^{c} =\\mathbf{0},\\] (13b) \\[\\partial_{tt}\\zeta^{\\prime}+\\rho_{\\mathrm{r}}^{-1}\\partial_{z}^ {+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}^{-}F\\right)p^{c}+N^{2}\\zeta^{\\prime} =0. \\tag{13c}\\] Now, assuming a common temporal dependence of the form \\(\\mathrm{e}^{-\\mathrm{i}\\omega t}\\), from (13c) one obtains \\[\\zeta^{\\prime}=-\\frac{p^{c}\\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{ z}^{-}F\\right)}{\\rho_{\\mathrm{r}}\\left(N^{2}-\\omega^{2}\\right)}. \\tag{14}\\] Then, upon substituting (14) in (13a) it follows that \\[c_{\\mathrm{s}}^{-2}-\\frac{1}{\\partial_{z}^{-}F}\\partial_{z}^{-}\\left[\\frac{ \\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}^{-}F\\right)}{\\rho_{ \\mathrm{r}}\\left(N^{2}-\\omega^{2}\\right)}\\right]=-\\frac{\\mathbf{\\nabla}\\cdot \\mathbf{u}^{c}}{\\partial_{t}p^{c}}=c^{-2}, \\tag{15}\\] where \\(c\\) is a constant known as the _separation constant_. Clearly, \\(c_{\\mathrm{s}}\\) must be chosen as a constant in order for the separation of variables to be possible. From (15) it follows, on one hand, that \\[\\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}^{-}F\\right)+\\rho_{\\mathrm{ r}}\\left(N^{2}-\\omega^{2}\\right)\\left(c^{-2}-c_{\\mathrm{s}}^{-2}\\right)F=0, \\tag{16}\\] and taking into account (13b), it follows, on the other hand, that \\[\\partial_{t}p^{c}+c^{2}\\mathbf{\\nabla}\\cdot\\mathbf{u}^{c} =0, \\tag{17a}\\] \\[\\partial_{t}\\mathbf{u}^{c}+f\\mathbf{\\hat{z}}\\times\\mathbf{u}^{c}+ \\mathbf{\\nabla}p^{c} =\\mathbf{0}. \\tag{17b}\\] Equation (16) governs the _vertical structure_ of the perturbations, whereas system (17) controls the evolution of their _horizontal structure_. **Problem 4**: Show that the alternate separation of variables which uses \\(F(z)\\) instead of \\(\\partial_{z}^{-}F(z)\\) leads to a vertical structure equation with a singularity where \\(\\omega^{2}\\equiv N^{2}\\). Equation (16) can be presented in different forms according to the approximation performed.
Under the _incompressibility approximation_, which consists of making the replacement \\(\\partial_{t}p^{\\prime}+gw^{\\prime}\\mapsto 0\\) in the continuity equation (7b), equation (16) takes the form \\[\\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}F\\right)+\\rho_{\\mathrm{r}}c^ {-2}\\left(N^{2}-\\omega^{2}\\right)F=0. \\tag{18}\\] This approximation corresponds formally to taking the limit \\(c_{\\mathrm{s}}^{-2}\\to 0.\\) The _hydrostatic approximation_, in turn, consists of making the replacement \\(\\partial_{t}w^{\\prime}\\mapsto 0\\) in Newton's vertical equation (7d). This way, without the need of assuming any particular temporal dependence, it follows that \\(\\zeta^{\\prime}=-p^{c}\\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}^{-}F \\right)/(\\rho_{\\mathrm{r}}N^{2}).\\) Consequently, equation (16) reduces to \\[\\partial_{z}^{+}\\left(\\rho_{\\mathrm{r}}\\partial_{z}^{-}F\\right)+\\rho_{\\mathrm{r} }\\left(c^{-2}-c_{\\mathrm{s}}^{-2}\\right)N^{2}F=0. \\tag{19}\\] This approximation is valid for \\(\\omega^{2}\\ll N^{2}\\), i.e. periods exceeding the local buoyancy period which typically is of about \\(1\\) h. (Of course, this approximation implies that of incompressibility as it filters out the acoustic modes whose frequencies are much higher than the Brunt-Vaisala frequency.) Another common approximation is the _Boussinesq approximation_, which consists of making the replacements \\(\\rho_{\\mathrm{r}}\\mapsto\\bar{\\rho}=\\mathrm{const.}\\) and \\(\\partial_{z}^{\\pm}\\mapsto\\partial_{z}\\) in (7). Under this approximation, equation (16) takes the simpler form \\[\\mathrm{d}^{2}F/\\mathrm{d}z^{2}+\\left(c^{-2}-c_{\\mathrm{s}}^{-2}\\right)\\left(N^ {2}-\\omega^{2}\\right)F=0. \\tag{20}\\] **Problem 5**: Show that the Boussinesq approximation is very good for the ocean but not so for the atmosphere. Hint: This approximation requires \\(c_{\\mathrm{s}}^{2}\\gg gH,\\) where \\(H\\) is a typical vertical length scale. #### Horizontal Structure To describe these waves it is convenient to introduce a potential \\(\\varphi(\\mathbf{x},t)\\) such that [9; 6] \\[p^{c} =-c^{2}\\left(\\partial_{ty}+f\\partial_{x}\\right)\\varphi, \\tag{21a}\\] \\[u^{c} =\\left(c^{2}\\partial_{xy}+f\\partial_{t}\\right)\\varphi,\\] (21b) \\[v^{c} =\\left(\\partial_{tt}-c^{2}\\partial_{xx}\\right)\\varphi, \\tag{21c}\\] which allows one to reduce system (17) to a single equation in one variable: \\[\\mathcal{L}\\varphi:=\\left\\{\\partial_{t}\\left[\\partial_{tt}+f^{2}(y)-c^{2}\\nabla ^{2}\\right]-\\beta c^{2}\\partial_{x}\\right\\}\\varphi=0. \\tag{22}\\] The linear differential operator \\(\\mathcal{L}\\) contains a variable coefficient and, hence, a solution to (22) must be of the form \\[\\varphi=\\Phi(y)\\mathrm{e}^{\\mathrm{i}(kx-\\omega t)} \\tag{23}\\] with \\(\\Phi(y)\\) satisfying \\[\\mathrm{d}^{2}\\Phi/\\mathrm{d}y^{2}+l^{2}(y)\\Phi=0 \\tag{24}\\]where
System (17) also supports a type of nondispersive waves called Kelvin waves. These waves have \\(v^{c}\\equiv 0\\) and thus are seen to satisfy \\[\\partial_{t}p^{c}+c^{2}\\partial_{x}u^{c} = 0, \\tag{28a}\\] \\[\\partial_{t}u^{c}+\\partial_{x}p^{c} = 0,\\] (28b) \\[fu^{c}+\\partial_{y}p^{c} = 0. \\tag{28c}\\] Clearly, these waves propagate as nondispersive waves in the zonal (east-west) direction--as if it were \\(f\\equiv 0\\)--and are in geostrophic balance between the Coriolis and pressure gradient forces in the meridional (south-north) direction. From (28a, b) it follows that \\[p^{c}=A(y)K(x-ct)\\equiv cu^{c}, \\tag{29}\\] where \\(K(\\cdot)\\) is an arbitrary function. By virtue of (28c) then it follows \\(\\mathrm{d}A/\\mathrm{d}y+fA/c=0\\), whose solution is \\[A(y)\\propto\\mathrm{e}^{-\\int^{y}\\mathrm{d}y\\,\\,f(y)/c}=\\mathrm{e}^{-\\left(f_{ 0}y+\\frac{1}{2}\\beta y^{2}\\right)/c},\\] which requires, except there where \\(f_{0}\\equiv 0\\) (i.e. the equator), the presence of a zonal coast to be physically meaningful. **Problem 6**: Consider the Kelvin waves in the so-called \\(f\\) plane, i.e. with \\(\\beta\\equiv 0\\). #### ii.2.2 Vertical Structure Under the Boussinesq approximation the five fields of system (7) remain independent, thereby removing no wave solutions. We can thus safely consider the vertical structure equation (20), which we rewrite in the form \\[\\mathrm{d}^{2}F/\\mathrm{d}F^{2}+m^{2}(z)F=0 \\tag{30}\\] where \\[m^{2}(z):=\\left[N^{2}(z)-\\omega^{2}\\right]\\left(c^{-2}-c_{\\mathrm{s}}^{-2} \\right). \\tag{31}\\] Equation (30) can be understood in two different senses. Within the realm of the WKB approximation, (31) defines a local vertical wavenumber, and a solution to (30) oscillates like \\[F(z)\\sim\\mathrm{e}^{\\pm\\mathrm{i}\\int^{z}\\mathrm{d}z\\,m(z)}. \\tag{32}\\] The other sense is that of _vertical normal modes_, in which (30) is solved in the whole water column with boundary conditions \\[F(-H) = 0, \\tag{33a}\\] \\[gF(0) = c^{2}\\mathrm{d}F(0)/\\mathrm{d}z. \\tag{33b}\\] Condition (33a) comes from imposing \\(w^{\\prime}=0\\) at \\(z=-H\\) where \\(H\\), which must be a constant, is the depth of the fluid in the reference state. Condition (33b) comes from the fact that \\(p^{\\prime}=g\\zeta^{\\prime}\\) at \\(z=0\\), which means that the surface is isopynic (i.e. the density does not change on the surface). This way one is left with a classic Sturm-Liouville problem. Making the incompressibility approximation and assuming a uniform stratification in the reference state, namely \\(N=\\bar{N}=\\mathrm{const.}\\), it follows that \\[\\omega^{2}=\\bar{N}^{2}-mg\\tan mH. \\tag{34}\\] (Notice that to obtain \\(m\\) is necessary to fix a value of \\(\\omega\\).) In the hydrostatic limit, \\(\\omega^{2}\\ll\\bar{N}^{2}\\), it follows that the vertical normal modes result from \\[\\tan mH=s/(mH), \\tag{35}\\] where \\(s:=\\bar{N}^{2}H/g\\), which is a measure of the stratification, is such that \\(0<s<\\infty\\) by static stability. In the ocean \\(s\\) is typically very small, so for \\(s\\ll 1\\) from (35) it follows that \\[m_{i}=\\left\\{\\begin{array}{ll}\\bar{N}/\\sqrt{gH}&\\mbox{if $i=0$},\\\\ i\\pi/H&\\mbox{if $i=1,2,\\cdots$}.\\end{array}\\right. \\tag{36}\\] The first mode is called the _external_ or _barotropic_ mode; the rest of the modes are termed the _internal_ or _baroclinic_ modes, which are well separated from the latter in what length scale respects. 
More precisely, the _Rossby radii of deformation_ are defined by \\(R_{i}:=\\bar{N}/(m_{i}\\,|f_{0}|)\\); for the barotropic mode \\(R_{0}=\\sqrt{gH}/\\left|f_{0}\\right|\\), whereas for the baroclinic modes \\(R_{i}=\\bar{N}H/(i\\pi\\,|f_{0}|)\\equiv\\sqrt{s}R_{0}/(i\\pi)\\ll R_{0}\\). Finally, the _rigid lid approximation_ consists of making \\(w^{\\prime}=0\\) at \\(z=0\\), which formally corresponds to taking the limit \\(g\\to\\infty\\) in (33b). This approximation filters out the barotropic mode since it leads to \\(\\tan mH=0\\). **Problem 7**: Demonstrate that \\(p^{\\prime}=g\\zeta^{\\prime}\\) at \\(z=0\\). #### ii.2.3 General Dispersion Relation Upon eliminating \\(c\\) between (25) and (31) it follows that \\[\\boxed{\\frac{\\mathbf{k}^{2}+\\beta k/\\omega}{\\omega^{2}-f^{2}}=\\frac{m^{2}}{N^{2}-\\omega^{2}}+c_{\\mathrm{s}}^{-2}}, \\tag{37}\\] which is a fifth-order polynomial in \\(\\omega\\) that constitutes the general dispersion relation for linear ocean waves in an asymptotic WKB sense. This is the main result of this paper. Approximate roots of (37) are: \\[\\text{acoustic}:\\omega^{2}=(\\mathbf{k}^{2}+m^{2})c_{\\mathrm{s}}^{2}, \\tag{38}\\] which holds for \\(\\omega^{2}\\gg N^{2}\\) (i.e. very high frequencies); \\[\\text{Poincar\\'{e}}:\\omega^{2}=\\frac{\\mathbf{k}^{2}N^{2}+m^{2}f^{2}}{\\mathbf{k}^{2}+m^{2}}, \\tag{39}\\] which follows upon taking the limit \\(c_{\\mathrm{s}}^{-2}\\to 0\\) and is valid for frequencies in the range \\(f^{2}<\\omega^{2}<N^{2}\\); and \\[\\text{Rossby}:\\omega=-\\frac{\\beta k}{\\mathbf{k}^{2}+(m^{2}/N^{2})\\,f^{2}}, \\tag{40}\\] which also follows in the limit \\(c_{\\mathrm{s}}^{-2}\\to 0\\) but is valid for \\(\\omega^{2}\\ll f^{2}\\) (i.e. very low frequencies). **Problem 8**: Demonstrate that the classical dispersion relations for Poincaré waves, \\(\\omega^{2}=f^{2}+c^{2}\\mathbf{k}^{2}\\), and surface gravity waves, \\(\\omega^{2}=g\\left|\\mathbf{k}\\right|\\tanh\\left|\\mathbf{k}\\right|H\\), are limiting cases of (39). ## IV Discussion The inviscid, unforced linearized equations of motion (7) have _five_ prognostic equations for _five_ independent dynamical fields. As a consequence, _five_ is the number of waves sustained by (7) which satisfy the general dispersion relation (37) in an asymptotic WKB sense. In proper limits, two acoustic waves (AW), two Poincaré waves (PW), and one Rossby wave (RW) can be identified. The fact that the number of waves supported by the linearized dynamics equals the number of independent dynamical fields or prognostic equations, i.e. the degrees of freedom of (7), means that the waves constitute a complete set of solutions of the linearized dynamics (cf. Table 1). The number of possible eigensolutions can be reduced if approximations that eliminate some of the prognostic equations, or independent dynamical fields, of the system are performed. The Boussinesq approximation, which is very appropriate for the ocean, does not eliminate prognostic equations and has the virtue of reducing the mathematical complexity of the governing equations considerably.
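For orientation, the deformation radii implied by these definitions can be evaluated with typical mid-latitude ocean numbers; the values below are assumptions for a back-of-the-envelope sketch, not a statement about any particular data set.

```python
import numpy as np

# Assumed typical mid-latitude ocean values (illustration only).
H, Nbar, g, f0 = 4000.0, 2.0e-3, 9.81, 1.0e-4

R0 = np.sqrt(g*H)/abs(f0)                            # barotropic deformation radius
Ri = [Nbar*H/(i*np.pi*abs(f0)) for i in (1, 2, 3)]   # baroclinic deformation radii
print(R0/1e3, [r/1e3 for r in Ri])                   # in km: R0 ~ 2000 km, R1 ~ 25 km
```

The three-orders-of-magnitude gap between \\(R_{0}\\) and \\(R_{1}\\) is exactly the barotropic/baroclinic scale separation noted above.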
The incompressibility approximation, in turn, removes two degrees of freedom: the vertical velocity is diagnosed from the horizontal velocity, \\[\\partial_{z}w^{\\prime}=-\\boldsymbol{\\nabla}\\cdot\\mathbf{u}^{\\prime}, \\tag{41}\\] and the latter along with the density diagnose the pressure field through the three-dimensional Poisson equation \\[(\\nabla^{2}+\\partial_{zz})p^{\\prime}=-\\boldsymbol{\\nabla}\\cdot f\\mathbf{\\hat{z}}\\times\\mathbf{u}^{\\prime}-\\partial_{z}(N^{2}\\zeta^{\\prime}). \\tag{42}\\] As a consequence, the two AW are filtered out and one is left with the two PW and the RW. With these two approximations the Euler equations (7) reduce to the so-called _Euler-Boussinesq equations_. Performing in addition the hydrostatic approximation, which corresponds to neglecting \\(\\partial_{t}w^{\\prime}\\) in Newton's vertical equation, does not amount to a reduction of degrees of freedom because the vertical velocity is already diagnosed by the horizontal velocity. In this case, the density field diagnoses the pressure field through \\[\\partial_{z}p^{\\prime}=-N^{2}\\zeta^{\\prime}. \\tag{43}\\] With this approximation (which implies that of incompressibility) and the Boussinesq approximation, system (7) reduces to what is known in geophysical fluid dynamics as the _primitive equations_. Finally, one approximation that eliminates independent fields is the _quasigeostrophic approximation_, which is often used to study low frequency motions in the ocean, and the Earth and planetary atmospheres. In this approximation the density diagnoses the horizontal velocity through the \"thermal wind balance,\" \\[\\partial_{z}\\mathbf{u}^{\\prime}=\\frac{N^{2}}{f_{0}}\\mathbf{\\hat{z}}\\times\\boldsymbol{\\nabla}\\zeta^{\\prime}, \\tag{44}\\] thereby removing two degrees of freedom and leaving only one RW. \\begin{table} \\begin{tabular}{l} \\(\\partial_{t}\\zeta^{\\prime}=\\cdots\\) \\\\ \\(\\partial_{t}p^{\\prime}=\\cdots\\) \\\\ \\(\\partial_{t}\\mathbf{u}^{\\prime}=\\cdots\\) \\\\ \\(\\partial_{t}w^{\\prime}=\\cdots\\) \\\\ \\end{tabular} \\(:\\) 5 independent fields \\(\\leftrightarrow\\) 5 waves \\(:\\left\\{\\begin{array}{ll}2\\text{ AW}\\\\ 2\\text{ PW}\\\\ 1\\text{ RW}\\end{array}\\right.\\) \\\\ **incompressibility**\\(\\downarrow\\)\\(\\begin{array}{ll}\\partial_{z}w^{\\prime}=-\\boldsymbol{\\nabla}\\cdot\\mathbf{u}^{\\prime}\\\\ \\Delta p^{\\prime}=-\\boldsymbol{\\nabla}\\cdot f\\mathbf{\\hat{z}}\\times\\mathbf{u}^{\\prime}-\\partial_{z}(N^{2}\\zeta^{\\prime})\\end{array}\\) \\\\ \\begin{tabular}{l} \\(\\partial_{t}\\zeta^{\\prime}=\\cdots\\) \\\\ \\(\\partial_{t}\\mathbf{u}^{\\prime}=\\cdots\\) \\\\ \\end{tabular} \\(:\\) 3 independent fields \\(\\leftrightarrow\\) 3 waves \\(:\\left\\{\\begin{array}{ll}2\\text{ PW}\\\\ 1\\text{ RW}\\end{array}\\right.\\) \\\\ **quasigeostrophy**\\(\\downarrow\\)\\(\\partial_{z}\\mathbf{u}^{\\prime}=\\frac{N^{2}}{f_{0}}\\mathbf{\\hat{z}}\\times\\boldsymbol{\\nabla}\\zeta^{\\prime}\\) \\\\ \\(\\partial_{t}q^{\\prime}=\\cdots:1\\) independent field \\(\\leftrightarrow\\) 1 wave \\(:\\) 1 RW \\\\ \\end{tabular} \\end{table} Table 1: Reduction of independent fields (and, hence, prognostic equations) by the incompressibility and quasigeostrophic approximations.
Here, AW, PW, and RW stand for acoustic waves, Poincaré waves, and Rossby waves, respectively; \\(\\Delta:=\\nabla^{2}+\\partial_{zz}\\) is the three-dimensional Laplacian; and \\(q^{\\prime}:=f+\\nabla^{2}p^{\\prime}/f_{0}+\\partial_{z}(f_{0}N^{-2}\\partial_{z}p^{\\prime})\\) is the so-called quasigeostrophic potential vorticity. ###### Acknowledgements. The author has given lectures based on the present material to students of the doctoral program in physical oceanography at CICESE (Ensenada, Baja California, Mexico). Part of this material is inspired by a seminal homework assignment by the late Professor Pedro Ripa. To his memory this article is dedicated. ## Appendix A Solutions to some of the problems **Problem 1**: To describe the dynamics in a noninertial reference frame such as one tied to the rotating Earth, two forces must be included: the Coriolis and centrifugal forces. However, Laplace[1] showed that if the upward coordinate \\(z\\) is chosen not to lie in the direction of the gravitational attraction, but rather to be slightly tilted toward the nearest pole, the centrifugal and gravitational forces can be made to balance one another in a horizontal plane (cf. also Ref. [8]). With this choice the Coriolis force is the only one needed to describe the dynamics. Notice that the absence of the centrifugal force in a system fixed to the Earth is what actually makes rotation effects real: they cannot be removed by a change of coordinates. **Problem 2**: In the absence of diffusive processes, the isopycnal \\(z=\\zeta\\) is a material surface, i.e. \\([{\\bf u}+\\hat{\\bf z}\\,w-({\\bf u}_{\\zeta}+\\hat{\\bf z}\\,w_{\\zeta})]\\cdot[{\\boldsymbol{\\nabla}}\\zeta-\\hat{\\bf z}\\,(1-\\partial_{z}\\zeta)]=0.\\) Here, \\({\\bf u}_{\\zeta}+\\hat{\\bf z}\\,w_{\\zeta}\\) denotes the velocity of _some_ point on the surface [the velocity of a surface is not defined and it only makes sense to speak of the velocity in a given direction, e.g. the normal direction, in which case it is \\(\\hat{\\bf z}\\,(1-\\partial_{z}\\zeta)-\\boldsymbol{\\nabla}\\zeta\\)]. From the trivial relation \\(z-\\zeta=0\\) it then follows that \\(({\\bf u}_{\\zeta}+\\hat{\\bf z}\\,w_{\\zeta})\\cdot[{\\boldsymbol{\\nabla}}\\zeta-\\hat{\\bf z}\\,(1-\\partial_{z}\\zeta)]=-\\partial_{t}\\zeta\\) and, hence, \\({\\rm D}\\zeta=w\\) at \\(z=\\zeta\\). **Problem 3**: To perform the linearization of the equations of motion, we write \\[({\\bf u},w,\\zeta)=({\\bf u}^{\\prime},w^{\\prime},\\zeta^{\\prime})+\\cdots,\\qquad(p,\\rho)=(p_{\\rm r},\\rho_{\\rm r})+(p^{\\prime},\\rho^{\\prime})+\\cdots, \\tag{10}\\] where the reference-state quantities are \\(O(1)\\), the primed perturbations are \\(O(a)\\), the ellipses denote corrections of \\(O(a^{2})\\), and \\(a\\) is an infinitesimal amplitude.
The \\(O(a)\\) continuity equation (7b) readily follows upon noticing that, up to \\(O(a)\\), \\(c_{\\rm s}^{-2}{\\rm D}p=c_{\\rm s}^{-2}(\\partial_{t}p^{\\prime}-\\rho_{\\rm r}gw^{\\prime}).\\) Up to \\(O(a)\\), \\({\\rm D}\\rho-c_{\\rm s}^{-2}{\\rm D}p=\\partial_{t}\\rho^{\\prime}-g^{-1}\\rho_{\\rm r}N^{2}w^{\\prime}-c_{\\rm s}^{-2}\\partial_{t}p^{\\prime}\\) and \\({\\rm D}\\zeta-w=\\partial_{t}\\zeta^{\\prime}-w^{\\prime}.\\) Then from the relationships \\({\\rm D}\\rho-c_{\\rm s}^{-2}{\\rm D}p=0\\) and \\({\\rm D}\\zeta-w=0\\) it follows that \\(\\rho^{\\prime}=c_{\\rm s}^{-2}p^{\\prime}+g^{-1}\\rho_{\\rm r}N^{2}\\zeta^{\\prime}.\\) Bearing in mind the latter relation and the fact that \\({\\rm d}p_{\\rm r}/{\\rm d}z=-g\\rho_{\\rm r}\\), the \\(O(a)\\) vertical Newton's equation (7d) then follows. **Problem 4**: For the ocean \\(c_{\\rm s}\\sim 1500\\) m \\({\\rm s}^{-1}\\gg\\sqrt{gH}\\sim 200\\) m \\({\\rm s}^{-1}\\); by contrast, for the atmosphere \\(c_{\\rm s}\\sim 350\\) m \\({\\rm s}^{-1}\\sim\\sqrt{gH}\\) with \\(H\\sim 12\\) km, which is the typical height of the troposphere. **Problem 7**: At the surface \\(z=\\eta\\) one has \\(w=\\partial_{t}\\eta+{\\bf u}\\cdot{\\boldsymbol{\\nabla}}\\eta\\) and \\(p=0\\) (here, \\(p\\) is a kinematic pressure, i.e. divided by a constant reference density \\(\\bar{\\rho}\\)). Writing \\(\\eta=\\eta^{\\prime}+O(a^{2})\\) and Taylor expanding about \\(z=0\\) it follows, on one hand, that \\[w^{\\prime}+\\eta^{\\prime}\\partial_{z}w^{\\prime}+O(a^{3})=\\partial_{t}\\eta^{\\prime}+{\\bf u}^{\\prime}\\cdot{\\boldsymbol{\\nabla}}\\eta^{\\prime}+O(a^{3}) \\tag{11}\\] at \\(z=0\\), and, on the other hand, that \\[p_{\\rm r}+p^{\\prime}+({\\rm d}p_{\\rm r}/{\\rm d}z)\\eta^{\\prime}+\\eta^{\\prime}\\partial_{z}p^{\\prime}+O(a^{3})=0 \\tag{12}\\] at \\(z=0.\\) From (11) it follows, to the lowest order, \\(w^{\\prime}=\\partial_{t}\\eta^{\\prime}\\) at \\(z=0\\). Since \\(w^{\\prime}=\\partial_{t}\\zeta^{\\prime}\\), for a wave (i.e. \\(\\partial_{t}\\neq 0\\)) it then follows that \\(\\eta^{\\prime}=\\zeta^{\\prime}\\) at \\(z=0.\\) Taking the latter into account and choosing \\(\\bar{\\rho}=\\rho_{\\rm r}(0)\\), from (12) it follows, to the lowest order, that \\(p^{\\prime}=g\\eta^{\\prime}\\equiv g\\zeta^{\\prime}\\) at \\(z=0\\), since \\(p_{\\rm r}=0\\) and \\({\\rm d}p_{\\rm r}/{\\rm d}z=-g\\rho_{\\rm r}/\\bar{\\rho}\\equiv-g\\) at \\(z=0\\). **Problem 8**: The classical dispersion relation for Poincaré waves corresponds to the hydrostatic limit, which requires \\(m^{2}\\gg{\\bf k}^{2}\\) (i.e. that the vertical length scales be shorter than the horizontal length scales). Under these conditions, \\(\\omega^{2}=f^{2}+{\\bf k}^{2}N^{2}/m^{2}=f^{2}+c^{2}{\\bf k}^{2}.\\) To obtain the dispersion relation for surface gravity waves one needs to take into account the boundary conditions (33): making \\(N^{2}\\equiv 0\\equiv f^{2}\\) it follows, on one hand, that \\(m^{2}=-{\\bf k}^{2}\\), and, on the other, that \\(\\omega^{2}=-mg\\tan mH.\\) The dispersion relation \\(\\omega^{2}=g\\left|{\\bf k}\\right|\\tanh\\left|{\\bf k}\\right|H\\) then readily follows. ## References * (1) De La Place, M. 1775. \"Recherches sur plusieurs points du système du monde.\" _Mém. de l'Acad. R. des Sc._ pp. 75-182. * (2) Gill, A. E. 1982. _Atmosphere-Ocean Dynamics_. Academic. * (3) Le Blond, P. H. and L. A. Mysak. 1978. _Waves in the Ocean_. Vol. 20 of _Elsevier Oceanography Series_. Elsevier Science. * (4) Olver, F. W. J. 1974. _Asymptotics and Special Functions_. Academic. * (5) Pedlosky, J. 1987.
_Geophysical Fluid Dynamics_. Second Edition, Springer. * (6) Ripa, P. 1994. \"Horizontal wave propagation in the equatorial waveguide.\" _J. Fluid Mech._ 271:267-284. * (7) Ripa, P. 1997\\(a\\). \"'Inertial' Oscillations and the \\(\\beta\\)-Plane Approximation(s).\" _J. Phys. Oceanogr._ 27:633-647. * (8) Ripa, P. 1997\\(b\\). _La increíble historia de la malentendida fuerza de Coriolis_ (_The Incredible Story of the Misunderstood Coriolis Force_). Fondo de Cultura Económica. * (9) Ripa, P. M. 1997\\(c\\). Ondas y Dinámica Oceánica (Waves and Ocean Dynamics). In _Oceanografía Física en México_, ed. M. F. Lavín. Monografía Física No. 3, Unión Geofísica Mexicana, Mexico, pp. 45-72.
The general equations of motion for ocean dynamics are presented and the waves supported by the (inviscid, unforced) linearized system with respect to a state of rest are derived. The linearized dynamics sustains one zero-frequency mode (called the buoyancy mode) in which salinity and temperature rearrange in such a way that the seawater density does not change. Five nonzero-frequency modes (two acoustic modes, two inertia-gravity or Poincaré modes, and one planetary or Rossby mode) are also sustained by the linearized dynamics; they satisfy an asymptotic general dispersion relation. The most usual approximations made in physical oceanography (namely incompressibility, Boussinesq, hydrostatic, and quasigeostrophic) are also considered, and their implications for the reduction of the degrees of freedom (number of independent dynamical fields or prognostic equations) of the linearized governing equations, and for the waves compatible with them, are discussed and emphasized. pacs: 43.30.Bp, 43.30.Cq, 43.30.Ft
# Statistical Physics in Meteorology M. Ausloos SUPRATECS 1 and GRASP2, Institute of Physics, B5, University of Liège, B-4000 Liège, Belgium Footnote 1: SUPRATECS = Services Universitaires Pour la Recherche et les Applications Technologiques de matériaux Électrocéramiques, Composites et Supraconducteurs Footnote 2: GRASP = Group for Research in Applied Statistical Physics November 4, 2021 ## I Introduction and Foreword This contribution to the 18th Max Born Symposium Proceedings cannot be seen as an extensive review of the connection between meteorology and various aspects of modern statistical physics. Space and time (and weather) limit its content. Much of what is found here can rather be considered to result from the biased viewpoint or limited understanding of a frustrated new researcher unsatisfied by the present status of the field. To be found here is only a set of basic considerations and reflections meant to suggest lines for various investigations, in the spirit of modern statistical physics ideas. The author came into this subject starting from previous work in econophysics, when he observed that some \"weather derivatives\" were in use, and some sort of game initiated by the Frankfurt Deutsche Börse[1] in order to attract customers who could predict the temperature in various cities within a certain lapse of time, and win some prize thereafter. This subject was similar to predicting the S&P500 or other financial index values at a certain future time. Whence various techniques which were used in econophysics, like the detrended fluctuation analysis, the multifractals, the moving average crossing techniques, etc., could be attempted from scratch. Besides the weather (temperature) derivatives, other effects are of interest. Much is said and written about e.g. the ozone layer and the Kyoto \"agreement\". The El Niño system is a great challenge to scientists. Since there is some data available under the form of time series, like the Southern Oscillation Index, it is of interest to look for trends, coherent structures, periods, correlations in noise, etc., in order to bring some knowledge, if possible basic parameters, to this meteorological field, and to import some modern statistical physics ideas into such climatological phenomena. It appeared that other data are also available, like those obtained under various experiments put into force by various agencies, like the Atlantic Stratocumulus Transition Experiment (ASTEX) for ocean surfaces or those of the Atmospheric Radiation Measurement Program[2, 3] (ARM), among others. However, it appeared that the data are sometimes of rather limited value because of the lack of precision, or are biased because the raw data have already been transformed through models and arbitrarily averaged (\"filtered\"), whence even sometimes lacking the meaning they should contain. Therefore a great challenge is to sort out the wheat from the chaff in order to develop meaningful studies. I will mention most of the work to which I have contributed, being aware that I am failing to acknowledge many more important reports than those, - for which I truly apologize. There are very interesting lecture notes on the web for basic modules on meteorological training courses, e.g. one available through the ECMWF website[4]. In Sect. 2, I will briefly comment on the history of meteorology. The notion of clouds, in Sect.
3, allows for bringing up the geometrical notion of fractals for meteorology work, thus scaling laws, and modern data analysis techniques. Simple technical and useful approaches, based on standard statistical physics techniques and ideas, in particular based on the scaling hypothesis for phase transitions and percolation theory features, will be found in Sect. 4. ## II A Brief History From the beginning of times, the earth, sky, and weather have been of great concern. As soon as agriculture, commerce, and travelling on land and sea prevailed, men have wished to predict the weather. Later on, airborne machines needed atmosphere knowledge and weather predictions for best flying. Nowadays there is much money spent on weather predictions for sport activities. It is known how the knowledge of weather (temperature, wind, humidity, ...) is relevant, (even \\(fundamental\\)!), e.g. in sailing races or in Formula 1 and car rally races. Let it be recalled the importance of knowing and predicting the wind (strength and directions), pressure and temperature at high altitude for the (recent) no-stop balloon round-the-world trip. The first to draw sea wind maps was Halley[5], an admirer of the Breslau administration. That followed the \"classical\" isobaths and isoheights (these are geometrical measures!!) for sailors needing to go through channels. I am very pleased to point out that Heinrich Wilhelm Brandes (1777-1834), Professor of Mathematics and Physics at the University of Breslau, was the first[5] who had the idea of displaying weather data (temperature, air pressure, a.s.o.) on geographical maps. Later von Humboldt (1769-1859) had the idea to connect points in order to draw isotherms[5]. It is well known nowadays that various algorithms will give various isotherms, starting from the same temperature data and coordinate table. In fact the maximum or minimum temperatures as defined in meteorology[6, 7] are far from the ones acceptable in physics laboratories. Note that displayed isotherms connect data points whose values are obtained at different times! No need to say that it seems essential to concentrate on predicting the uncertainty in forecast models of weather and climate, as emphasized elsewhere[8]. ## III Climate and Weather. The Role of Clouds Earth's climate is clearly determined by complex interactions between sun, oceans, atmosphere, land and biosphere[9, 10]. The composition of the atmosphere is particularly important because certain gases, including water vapor, carbon dioxide, etc., absorb heat radiated from Earth's surface. As the atmosphere warms up, it in turn radiates heat back to the surface that increases the earth's \"mean surface temperature\". Much attention has been paid recently[11, 12] to the importance of the main components of the atmosphere, in particular clouds[13], in the three forms of water (vapor, liquid and solid), for buffering the global temperature against reduced or increased solar heating [14]. This leads to efforts to improve not only models of the earth's climate but also predictions of climate change [15], as understood over long time intervals, in contrast to shorter time scales for weather forecasts. In fact, with respect to climatology the situation is very complicated because one does not even know what the evolution equations are. Since controlled experiments cannot be performed on the climate system, one relies on using ad hoc models to identify cause-and-effect relationships.
Nowadays there are several climate models belonging to many different centers [16]. Their web sites sometimes carry not only the model output used to make images but also the source code. It seems relevant to point out here that the stochastic resonance idea was proposed to describe climate evolution [17]. It should be remembered that solutions of the Navier-Stokes equations depend strongly on the initial conditions and on the integration steps. Therefore a great precision on the temperature, wind velocity, etc. cannot be expected, and the solution(s) only look like a mess after a few numerical steps [18]. The Monte Carlo technique suggests introducing successively a set of initial conditions, performing the integration of the differential equations, and making an average thereafter [18]. It is hereby time to mention the work of Lorenz [19], who simplified the Navier-Stokes equations in searching for some predictability. However, predicting the outcome of such a set of equations with complex nonlinear interactions taking place in an open system is a difficult task [20]. The turbulent character of the atmospheric boundary layer (ABL) is one of its most important features. Turbulence can be caused by a variety of processes, like thermal convection, or mechanically generated by wind shear, or following interactions influenced by the rotation of the Earth [21, 22]. This complexity of physical processes and interactions between them creates a variety of atmospheric formations. In particular, in a cloudy ABL the radiative fluxes produce local sources of heating or cooling within the mixed layer and therefore can greatly influence its turbulent structure and dynamics, especially at the cloud base. Two practical cases, the marine ABL and the continental ABL, have been investigated for their scaling properties[23, 24, 25]. Yet, let it be emphasized that the first modern ideas of statistical physics implemented in cloud studies through fractal \\(geometry\\) are due to Lovejoy, who looked at the perimeter-area relationship of rain and cloud areas[26], and at the fractal dimension of their shape or ground projection. He discovered the statistical self-similarity of cloud boundaries through area-perimeter analyses of the geometry of satellite images, i.e. the fractal scaling of the cloud perimeter in the horizontal plane. He found the fractal dimension \\(D_{p}\\simeq 4/3\\) over a spectrum of 4 orders of magnitude in size, from small fair-weather cumuli (\\(\\sim 10^{-1}\\) km) up to huge stratus fields (\\(\\sim 10^{3}\\) km). Cloud size distributions have also been studied from a scaling point of view[27, 28, 29, 30]. Rain has also received much attention[31, 32, 33, 34, 35, 36, 37]. ## IV Modern statistical physics approaches Due to the nonlinear physics laws governing the phenomena in the atmosphere, the time series of the atmospheric quantities are usually non-stationary[38, 39], as revealed by Fourier spectral analysis, which is usually the first technique to use. Recently, new techniques have been developed that can systematically eliminate trends and cycles in the data and thus reveal intrinsic dynamical properties such as correlations that are very often masked by nonstationarities[40, 41]. Whence many studies reveal long-range power-law correlations in geophysics time series[39, 42], in particular in meteorology[43, 44, 45, 46, 47, 48, 49, 50]. Multi-affine properties[51, 52, 53, 54, 55, 56, 57, 58, 59] can also be identified, using singular spectrum and/or wavelets.
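As a toy illustration of the Monte Carlo idea recalled above, one can integrate the Lorenz system [19] for an ensemble of slightly perturbed initial conditions and follow the growth of the ensemble spread. The sketch below uses the classical textbook parameter values and an assumed perturbation size; it is for illustration only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz-63 system [19]; classical (assumed) parameter values.
def lorenz(t, v, sigma=10.0, rho=28.0, b=8.0/3.0):
    x, y, z = v
    return [sigma*(y - x), x*(rho - z) - y, x*y - b*z]

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 5.0, 501)
members = []
for _ in range(50):                        # ensemble of perturbed initial conditions
    v0 = np.array([1.0, 1.0, 1.0]) + 1e-3*rng.standard_normal(3)
    sol = solve_ivp(lorenz, (0.0, 5.0), v0, t_eval=t_eval, rtol=1e-8)
    members.append(sol.y[0])               # keep x(t) of each member

ens = np.array(members)
print(ens.mean(axis=0)[-1], ens.std(axis=0)[-1])   # spread reveals loss of predictability
```

The ensemble mean and spread at the final time quantify exactly the "mess after a few numerical steps" that the averaging procedure is meant to tame.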
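The perimeter-area analysis behind \\(D_{p}\\) can likewise be sketched in a few lines: for self-similar boundaries \\(P\\sim A^{D_{p}/2}\\), so the slope of \\(\\log P\\) versus \\(\\log A\\) estimates \\(D_{p}/2\\). The data below are synthetic stand-ins generated with a prescribed exponent, not actual satellite measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cloud areas spanning several decades, with perimeters built to obey
# P ~ A^(D/2) for an assumed D = 4/3 (plus scatter), as a stand-in for satellite data.
A = 10.0**rng.uniform(0, 6, 300)                       # areas [km^2]
P = 3.0*A**(4.0/3.0/2.0)*np.exp(0.1*rng.standard_normal(300))

slope, _ = np.polyfit(np.log10(A), np.log10(P), 1)     # slope = D_p/2
print("estimated D_p =", 2.0*slope)                    # ~ 4/3 by construction
```

A smooth (e.g. circular) boundary would instead return \\(D_{p}\\approx 1\\), which is what makes the measured \\(D_{p}\\simeq 4/3\\) a signature of fractality.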
There are different levels of essential interest for sorting out correlations from data, in order to increase the confidence in predictability[60]. There are investigations based on long-, medium-, and short-range horizons. The \\(i\\)-diagram variability (\\(iVD\\)) method allows one to sort out some short-range correlations. The technique has been used on a liquid water cloud content data set taken from the Atlantic Stratocumulus Transition Experiment (ASTEX) 92 field program[61]. It has also been shown that the random matrix approach can be applied to the empirical correlation matrices obtained from the analysis of the basic atmospheric parameters that characterize the state of the atmosphere[62]. The principal component analysis technique is also a standard one[63] in meteorology and climate studies. The Fokker-Planck equation for describing the liquid water path[64] is also of interest. See also some tentative search for power-law correlations in the Southern Oscillation Index fluctuations characterizing El Niño[65]. But there are many other works of interest[66]. ### Ice in cirrus clouds In clouds, ice appears in a variety of forms and shapes, depending on the formation mechanism and the atmospheric conditions[22, 51, 67, 68]. The cloud inner structure, content, temperature, life time, etc. can be studied. In cirrus clouds, ice crystals form at temperatures colder than about \\(-40^{\\circ}\\) C. Because of the vertical extent, ca. from about 4 to 14 km and higher, and the layered structure of such clouds, one way of obtaining some information about their properties is mainly by using ground-based remote sensing instruments[69, 70, 71, 72]. Attention can be focussed[50] on correlations in the fluctuations of radar signals obtained at isodepths of \\(winter\\) and \\(fall\\) cirrus clouds giving (i) the backscattering cross-section, (ii) the Doppler velocity and (iii) the Doppler spectral width of the ice crystals. They correspond to the physical coefficients used in the Navier-Stokes equations to describe flows, i.e. bulk modulus, viscosity, and thermal conductivity. It was found that power-law time correlations exist with a crossover between regimes at about 3 to 5 min, but also \\(1/f\\) behavior, characterizing the top and the bottom layers and the bulk of the clouds. The underlying mechanisms for such correlations likely originate in ice nucleation and crystal growth processes. ### Stratus clouds In stratus clouds, long-range power-law correlations[45, 49] and multi-affine properties[24, 25, 57] have been reported for the liquid water fluctuations, beside the spectral density[73]. Interestingly, stratus cloud data retrieved from the radiance, recorded as brightness temperature[2] at the Southern Great Plains central facility, operated in the vertically pointing mode[74], indicated a Fourier spectrum \\(S(f)\\,\\sim\\,f^{-\\beta}\\) with an exponent \\(\\beta=1.56\\pm 0.03\\), pointing to a nonstationary time series. The detrended fluctuation analysis (DFA) method applied to the stratus cloud brightness microwave recording[45, 75] indicates the existence of long-range power-law correlations over a two-hour time span. Contrasts in behavior, depending on the season, can be pointed out. The DFA analysis of liquid water path data measured in April 1998 gives a scaling exponent \\(\\alpha=0.34\\pm 0.01\\) holding from 3 to 60 minutes.
This scaling range is shorter than the 150 min scaling range[45] for a stratus cloud in January 1998 at the same site. For longer correlation times a crossover to \\(\\alpha=0.50\\pm 0.01\\) is seen up to about 2 h, after which the statistics of the DFA function are not reliable. However, a change from a Gaussian to a non-Gaussian fluctuation regime has been clearly defined for the cloud structure changes using a finite-size (time) interval window. It has been shown that the DFA exponent turns from a low value (about 0.3) to 0.5 before the cloud breaks. This indicates that the stability of the cloud, represented by antipersistent fluctuations, is (for some unknown reason at this level) turning into a system for which the fluctuations are similar to a pure random walk. The same type of finding was observed for the so-called Liquid Water Path3. Footnote 3: The liquid water path (LWP) is the amount of liquid water in a vertical column of the atmosphere; it is measured in cm\\({}^{-3}\\); sometimes in cm!! The value of \\(\\alpha\\approx 0.3\\) can be interpreted as the \\(H_{1}\\) parameter of the multifractal analysis of liquid water content[24, 25, 52] and of liquid water path[57]. Whence, the appearance of broken clouds and clear sky following a period of thick stratus can be interpreted as a non-equilibrium transition or a sort of fracture process in more conventional physics. The existence of a crossover suggests two types of correlated events as in classical fracture processes: nucleation and growth of diluted droplets. Such a marked change in persistence implies that specific fluctuation correlation dynamics should be usefully inserted as ingredients in _ad hoc_ models. ### Cloud base height The variations in the local \\(\\alpha\\)-exponent (\"multi-affinity\") suggest that the nature of the correlations changes with time, the so-called intermittency phenomenon. The evolution of the time series can be decomposed into successive persistent and anti-persistent sequences. It should be noted that the intermittency of a signal is related to the existence of extreme events, thus a distribution of events away from a Gaussian distribution, in the evolution of the process that has generated the data. If the tails of the distribution function follow a power law, then the scaling exponent defines the critical order value after which the statistical moments of the signal diverge. Therefore it is of interest to probe the distribution of the fluctuations of a time dependent signal \\(y(t)\\) prior to investigating its intermittency. Much work has been devoted to the cloud base height [54, 55, 56], under various ABL conditions, and to the LWP [57, 64]. Neither the distribution of the fluctuations of liquid water path signals nor that of the cloud base height appears to be Gaussian. The tails of the distributions follow a power law, pointing to \"large events\" also occurring in the meteorological (space and time) framework. This may suggest routes for other models. ### Sea Surface Temperature Other time series analyses have been investigated in a search for power-law exponents, like atmospheric [76] or sea surface temperature (SST) fluctuations [77]. These are of importance for weighing their impacts on regional climate, whence finally to greatly increase the predictability of precipitation during all seasons. Currently, climate patterns derived from global SST are used to forecast precipitation.
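For completeness, the DFA procedure invoked throughout this section can be sketched in a few lines: integrate the demeaned series, detrend it locally in boxes of size \\(n\\), and read the exponent \\(\\alpha\\) off the log-log slope of the fluctuation function \\(F(n)\\sim n^{\\alpha}\\). The sketch below is a minimal first-order DFA run on synthetic white noise (expected \\(\\alpha\\approx 0.5\\)); real inputs would be, e.g., the LWP or brightness-temperature records discussed above.

```python
import numpy as np

def dfa(x, scales):
    """First-order detrended fluctuation analysis of a 1-D series x."""
    y = np.cumsum(x - np.mean(x))              # integrated profile
    F = []
    for n in scales:
        nseg = len(y)//n
        segs = y[:nseg*n].reshape(nseg, n)
        t = np.arange(n)
        rms = []
        for seg in segs:
            a, b = np.polyfit(t, seg, 1)       # local linear trend
            rms.append(np.mean((seg - (a*t + b))**2))
        F.append(np.sqrt(np.mean(rms)))
    return np.array(F)

# White noise as a test input: alpha ~ 0.5 (pure random walk of the profile).
x = np.random.default_rng(2).standard_normal(2**14)
scales = np.unique(np.logspace(1, 3, 12).astype(int))
F = dfa(x, scales)
alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
print("DFA exponent alpha =", alpha)
```

Values \\(\\alpha<0.5\\) signal antipersistence (the stable stratus regime above), while \\(\\alpha\\approx 0.5\\) signals uncorrelated, random-walk-like fluctuations (the regime preceding cloud breakup).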
Recently we have attempted to observe whether the fluctuations in the Southern Oscillation index (\\(SOI\\)) characterizing El Niño were also prone to a power-law analysis. For the \\(SOI\\) monthly averaged data in the time interval 1866-2000, the tails of the cumulative distribution of the fluctuations of the \\(SOI\\) signal indicate that large fluctuations are more likely to occur than the Gaussian distribution would predict. An antipersistent type of correlations exists for a time interval ranging from about 4 months to about 6 years. This leads to favor specific physical models for the El Niño description [65]. ## V Conclusions Modern statistical physics techniques for analyzing atmospheric time series signals indicate scaling laws (exponents and ranges) for correlations. A few examples have been given briefly here above, mainly from contributed papers in which the author has been involved. Work by many other authors has not been included for lack of space. This brief set of comments is only intended to indicate how meteorology and climate problems can be tied to scaling laws and inherent time series data analysis techniques. Those ideas/theories have allowed me to reduce the list of quoted references, though even so I might have been unfair. One example can be recalled in this conclusion to make the point: the stratus clouds break when the molecule density fluctuations become Gaussian, i.e. when the molecular motion becomes Brownian-like. This should lead to better predictability of the cloud evolution and enormously extend the predictability range in weather forecasts along the lines of nonlinear dynamics [78]. **Acknowledgments** Part of these studies has been supported through an Action Concertée Program of the University of Liège (Convention 02/07-293). Comments by A. Pekalski, N. Kitova, K. Ivanova and C. Collette are greatly appreciated. ## References * [1] http://deutsche-boerse.com/app/open/xelsius. * [2] http://www.arm.gov. * [3] G.M. Stokes, S.E. Schwartz, Bull. Am. Meteorol. Soc. 75 (1994) 1201. * [4] http://www.ecmwf.int/newsevents/training/rcourse_notes/index.html. * [5] M. Monmonier, _Air Apparent. How meteorologists learned to map, predict, and dramatize weather_ (U. Chicago Press, Chicago, 1999). * [6] http://www.maa.org/features/mathchat/mathchat_4_20_00.html. * [7] R.E. Huschke, (Ed.), Glossary of Meteorology (Am. Meteorol. Soc., Boston, 1959). * [8] T.N. Palmer, Rep. Prog. Phys. 63 (2000) 71. * [9] R.A. Anthens, H.A. Panofsky, J.J. Cahir, A. Rango: _The Atmosphere_ (Bell & Howell Company, Columbus, OH, 1975). * [10] D. G. Andrews, _An Introduction to Atmospheric Physics_ (Cambridge University Press, Cambridge, 2000). * [11] A. Maurellis, Phys. World 14 (2001) 22. * [12] D. Rosenfeld, W. Woodley, Phys. World 14 (2001) 33. * [13] R.R. Rogers, _A Short Course in Cloud Physics_ (Pergamon Press, New York, 1976). * [14] H.-W. Ou, J. Climate 14 (2001) 2976. * [15] K. Hasselmann, in _The Science of Disasters_, A. Bunde, J. Kropp, H.J. Schellnhuber (Springer, Berlin, 2002) 141. * [16] http://stommel.tamu.edu/baum/climate_modeling.html. * [17] R. Benzi, A. Sutera, A. Vulpiani, J. Phys. A 14 (1981) L453. * [18] A. Pasini, V. Pelino, Phys. Lett. A 275 (2000) 435. * [19] E. N.
Lorenz, J. Atmos. Sci. 20 (1963) 130. * [20] J.B. Ramsey and Z. Zhang, in _Predictability of Complex Dynamical Systems_, (Springer, Berlin, 1996) 189. * [21] J. R. Garratt, _The Atmospheric Boundary Layer_ (Cambridge University Press, Cambridge, 1992) * [22] A. G. Driedonks and P.G. Duynkerke, Bound. Layer Meteor. 46 (1989) 257. * [23] N. Kitova, Ph. D. thesis, University of Liege, unpublished * [24] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 53 (1996) 1538. * [25] A. Marshak, A. Davis, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 54 (1997) 1423. * [26] S. Lovejoy, Science 216 (1982) 185. * [27] R.F. Cahalan, D. A. Short, G. R. North, Mon. Weather Rev. 110 (1982) 26. * [28] R. F. Cahalan and J. H. Joseph, Mon. Weather Rev. 117 (1989) 261. * [29] R.A.J. Neggers, H.J.J. Jonker, A.P. Siebesma, AP, J. Atmosph. Sci. 60 (2002) 1060. * [30] S.M.A. Rodts, P. G. Duynkerker, H.J.J. Jonker, J.J. Ham, J. Atmosph. Sci. 60 (2002) 1895. * [31] S.T.R. Pinho, R.F.S. Andrade, Physica A 255 (1998) 483 * [32] R.F.S. Andrade, Braz. J. Phys. 33 (2003) 437. * [33] J.G.V. Miranda, R.F.S. Andrade, Physica A 295 (2001) 38; Theor. Appl. Climatol. 63 (1999) 79. * [34] Y. Tessier, S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 32 (1993) 223. * [35] D. Schertzer, S. Lovejoy, J. Appl. Meteorol. 36 (1997) 1296. * [36] S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 29 (1990) 1167. * [37] C. S. Bretherton, E. Klinker, J. Coakley, A. K. Betts, J. Atmos. Sci. 52 (1995) 2736. * [38] O. Karner, J. Geophys. Res. 107 (2002) 4415. * [39] A. Davis, A. Marshak, W. J. Wiscombe, and R. F. Cahalan, in _Current Topics in Nonstationary Analysis_, Eds. G. Trevino, J. Hardin, B. Douglas, and E. Andreas, (World Scientific, Singapore, 1996) 97-158. * [40] Th. Schreiber, Phys. Rep. 308 (1999) 1. * [41] P.J. Brockwell and R.A. Davis, _Time Series : Theory and Methods_ (Springer-Verlag, Berlin,1991) * [42] K. Fraedrich, R. Blender, Phys. Rev. Lett. 90 (2003) 108501 * [43] E. Koscielny-Bunde, A. Bunde, S. Havlin, H. E. Roman, Y. Goldreich, H.-J. Schellnhuber, Phys. Rev. Lett. 81 (1998) 729. * [44] E. Koscielny-Bunde, A. Bunde, S. Havlin, Y. Goldreich, Physica A 231 (1993) 393. * [45] K. Ivanova, M. Ausloos, E. E. Clothiaux, and T. P. Ackerman, Europhys. Lett. 52 (2000) 40. * [46] A.A. Tsonis, P.J. Roeber and J.B. Elsner, Geophys. Res. Lett. 25 (1998) 2821. * [47] A.A. Tsonis, P.J. Roeber and J.B. Elsner, J. Climate 12 (1999) 1534. * [48] P. Talkner and R.O. Weber, Phys. Rev. E 62 (2000) 150. * [49] K. Ivanova, M. Ausloos, Physica A 274 (1999) 349. * [50] K. Ivanova, T.P. Ackerman, E.E. Clothiaux, P.Ch. Ivanov, H.E. Stanley, and M. Ausloos, J. Geophys. Res., 108 (2003) 4268. * [51] S.G. Roux, A. Arneodo, N. Decoster, Eur. Phys. J. B 15 (2000) 765. * [52] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Geophys. Research. 99 (1994) 8055. * [53] A. Marshak, A. Davis, W. J. Wiscombe, R. F. Cahalan, J. Atmos. Sci. 54 (1997) 1423. * [54] N. Kitova, K. Ivanova, M. Ausloos, T.P. Ackerman, M. A. Mikhalev, Int. J. Modern Phys. C 13(2002) 217. * [55] K. Ivanova, H.N. Shirer, E.E. Clothiaux, N. Kitova, M.A. Mikhalev, T.P.Ackerman, and M. Ausloos, Physica A 308 (2002) 518. * [56] N. Kitova, K. Ivanova, M.A. Mikhalev and M. Ausloos, in \"From Quanta to Societies\", W. Klonowski, Ed. (Pabst, Lengerich, 2002) 263. * [57] K. Ivanova, T. Ackerman, Phys. Rev. E 59 (1999) 2778. * [58] C.R. Neto, A. Zanandrea, F.M. Ramos, R.R. Rosa, M.J.A. Bolzan, L.D.A. Sa, Physica A 295 (2001) 215. * [59] H.F.C. Velho, R.R. Rosa, F.M. Ramos, R.A. Pielke, C.A. 
Degrazia, C.R. Neto, A. Zanadrea, Physica A 295 (2001) 219. * [60] B.D. Malamud, D.L. Turcotte, J. Stat. Plann. Infer. 80 (1999) 173. * [61] K. Ivanova, M. Ausloos, A.B. Davis, T.P. Ackerman, Physica A 272 (1999) 269. * [62] M. S. Santhanam, P. K. Patra, Phys. Rev. E 64 (2001) 16102. * [63] M.J. O'Connel, Comp. Phys. Comm. 8 (1974) 49. * Atmosph. 107 (2002) 4708. * [65] M. Ausloos and K. Ivanova, Phys. Rev. E 63 (2001) 047201. * [66] J.I. Salisbury, M. Winbush, Nonlin. Process. Geophys. 9 (2002) 341. * [67] K. R. Sreenivasan, Ann. Rev. Fluid Mech. 23 (1991) 539. * [68] C. S. Kiang, D. Stauffer, G. H. Walker, O. P. Puri, J. D. Wise, Jr. and E. M. Patterson, J. Atmos. Sci. 28 (1971) 1222. * [69] E.R. Westwater, in: _Atmospheric Remote Sensing by Microwave Radiometry_, ed. by M.A. Janssen (John Wiley and Sons, New York 1993) pp. 145-213. * [70] E.R. Westwater, Radio Science 13 (1978) 677. * [71] W.G. Rees: _Physical Principles of Remote Sensing_ (Cambridge University Press, Cambridge, 1990). * [72][http://www.arm.gov/docs/instruments/static/blc.html](http://www.arm.gov/docs/instruments/static/blc.html). * [73] H. Gerber, J.B. Jensen, A. Davis, A. Marshak, W. J. Wiscombe, J. Atmos. Sci. 58 (2001) 497. * [74] J.C. Liljegren, B.M. Lesht, IEEE Int. Geosci. and Remote Sensing Symp. 3 (1996) 1675. * [75] K. Ivanova, E.E. Clothiaux, H.N. Shirer, T.P. Ackerman, J. Liljegren and M. Ausloos, J. Appl. Meteor. 41 (2002) 56. * [76] J.D. Pelletier, Earth Planet. Sci. Lett. 158 (1998) 157. * [77] R.A. Monetti, S. Havlin, A. Bunde, Physica A 320 (2003) 581. * [78] F. Molteni, R. Buizza, T.N. Palmer, T. Petroliagis, Q. J. R. Meteorol. Soc. 122 (1996) 73.
Various aspects of modern statistical physics and meteorology can be tied together. The historical importance of the University of Wroclaw in the field of meteorology is first pointed out. Next, some basic differences in time and space scales between meteorology and climatology are outlined. The nature and role of clouds, both from a geometric and a thermal point of view, are recalled. Recent studies of scaling laws for atmospheric variables are mentioned, like studies on cirrus ice content, brightness temperature, liquid water path fluctuations, and cloud base height fluctuations. Technical time series analysis approaches based on modern statistical physics considerations are outlined.
# Spectra and Diagnostics for the Direct Detection of Wide-Separation Extrasolar Giant Planets Adam Burrows1, David Sudarsky1, & Ivan Hubeny1,2,3 Footnote 1: affiliation: Department of Astronomy and Steward Observatory, The University of Arizona, Tucson, AZ 85721 Footnote 2: affiliation: NOAO, Tucson, AZ 85726 Footnote 3: affiliation: NASA Goddard Space Flight Center, Greenbelt, MD 20771 ## 1. Introduction To date, more than 110 EGPs (Extrasolar Giant Planets) have been discovered by the radial-velocity technique around stars with spectral types from M4 to F7. A sample of relevant references to the discovery literature, by no means exhaustive, includes Mayor and Queloz (1995), Marcy and Butler (1996), Butler et al. (1997,1999), Marcy et al. (1998,1999), Marcy, Cochran, and Mayor (2000), Queloz et al. (2000), Santos et al. (2000), and Konacki et al. (2003). These planets have minimum masses (\\(m_{p}\\sin(i)\\), where \\(i\\) is the orbital inclination) between \\(\\sim\\)0.12 \\(M_{\\rm J}\\) and \\(\\sim\\)15 \\(M_{\\rm J}\\) (\\(M_{\\rm J}\\) = one Jupiter mass), orbital semi-major axes from \\(\\sim\\)0.0225 AU to \\(\\sim\\)5.9 AU, and eccentricities from \\(\\sim\\)0 to above 0.7. Given our all-too-narrow experience within the solar system, such variety and breadth were wholly unanticipated. Importantly, two EGPs (HD 209458b and OGLE-TR-56b) have been found to transit their primaries (Henry et al. 2000; Charbonneau et al. 2000,2001; Brown et al. 2001; Konacki et al. 2003; Torres et al. 2003). Furthermore, in the first measurement of the composition of an extrasolar planet of any kind, Charbonneau et al. (2002) detected sodium (Na-D) in their HD 209458b transit spectrum. This was followed by the detection of atomic hydrogen at Lyman-\\(\\alpha\\) and the discovery of a planetary wind (Vidal-Madjar et al. 2003; Burrows and Lunine 1995). With both transit and radial-velocity data, an EGP's mass and radius can be determined, enabling its physical and structural study. Such data can resolve the ambiguity inherent in the radial-velocity technique's sensitivity to only the combination \\(m_{p}\\sin(i)\\). Precision astrometry can also be used to derive masses, as has been done for GJ 876b with the Fine Guidance Sensors on HST (Benedict et al. 2002), and space interferometry using SIM (Unwin and Shao 2000) promises to provide unprecedented astrometric masses by the year \\(\\sim\\)2010. However, it is only by direct detection of a planet's light using photometry, spectrophotometry, or spectroscopy that the detailed and rigorous study of its physical attributes can be conducted. By this means, the composition, gravity, radius, and mass of the giant might be derived and the general theory of EGP properties and evolution might be tested (Burrows et al. 1995,1997; Marley et al. 1999; Sudarsky, Burrows and Pinto 2000; Sudarsky, Burrows, and Hubeny 2003 (SBH); Baraffe et al. 2003). For close-in EGPs, we can anticipate in the next few years wide-band precision photometry from MOST (Matthews et al. 2001), Kepler (Koch et al. 1998), Corot (Antonello and Ruiz 2002), or MONS (Christensen-Dalsgaard 2000) that will provide the variations of the summed light of the planet and star due to changes in the planetary phase.
In the mid- to far-infrared, the Spitzer Space Telescope (a.k.a. SIRTF, the Space InfraRed Telescope Facility; Werner and Fanson 1995) might soon be able to measure the variations in the planet/star flux ratios of close-in EGPs (SBH). For wide-separation EGPs, it is necessary to measure the planet's light from under the glare of the primary star at very high star-to-planet contrast ratios. To achieve this from the ground, telescopes such as the VLT interferometer (Paresce 2001), the Keck interferometer (van Belle and Vasisht 1998; Akeson and Swain 2000; Akeson, Swain, and Colavita 2001), and the LBT nulling interferometer (Hinz 2001) will be enlisted. From space, the Terrestrial Planet Finder (TPF, Levine et al. 2003) and/or a coronagraphic optical imager such as _Eclipse_ (Trauger et al. 2000,2001) could obtain low-resolution spectra. To support the above efforts and planning for future programs of direct detection of extrasolar giant planets, and to provide the theoretical context for the general analysis of the spectra and photometry of irradiated and isolated EGPs, our group has embarked upon a series of papers on EGP spectra, evolution, chemistry, transits, orbital phase functions, and light curves. The most recent paper in this series (SBH) explored generic features of irradiated EGP spectra as a function of orbital distance, cloud properties, and composition class (Sudarsky, Burrows, and Pinto 2000). Our technical approach vis à vis radiative transfer, molecular abundance determinations, and cloud modeling is described in detail in that paper, to which the interested reader is referred. However, SBH did not explore the diagnostics of planetary mass and age. In addition, in their study of the orbital distance dependence of EGP spectra, SBH did not incorporate the effects of water or ammonia clouds in a fully consistent fashion. In the current paper, we allow our atmosphere code to determine cloud placement in a fully iterative, converged fashion. The result is a consistent determination of the dependence on distance of the spectra of EGPs irradiated by a G2V star, including the effects of the water and ammonia clouds that should form in their atmospheres. In §2, we summarize our numerical approach. Then in §3 we present and describe our results for the dependence of the spectra of irradiated EGPs on orbital distance. In this paper, we emphasize the results for EGPs at wider separations (\\(>\\) 0.2 AU) and defer discussion of the corresponding theory for close-in EGPs to a later paper\\({}^{5}\\). This section treats the entire expected range of EGP emission/reflection spectra and behavior. In §4, we provide a representative sequence of models that portray the dependence of irradiated EGP spectra on age, at a given mass and orbital distance. In §5, we present a representative sequence with EGP mass, at a given age and orbital distance. Section 6 is a digression into the effect of stellar irradiation on companion brown dwarfs, characterized by much larger masses and more slowly decaying heat content. We study the signature in the optical of the reflection of stellar light from a companion brown dwarf. In fact, much of this paper is concerned with the signatures and diagnostics of the physical parameters of irradiated substellar-mass companions\\({}^{6}\\). However, it is not feasible in one paper to explore all the possible combinations of planetary mass, age, composition, orbital semi-major axis, eccentricity, and orbital phase with all stellar types.
Hence, to maintain a reasonable focus, we restrict our discussions to G2V primaries, zero-eccentricity orbits, and solar metallicity. We narrow our scope further by plotting only phase-averaged spectra (as in SBH) at zero orbital inclination. Papers on EGP orbital phase functions, albedos, and light curves are to follow this one (e.g., Sudarsky, Burrows, and Hubeny 2004). These papers address the dependence on phase and Keplerian parameters. Finally, in §7 we present predicted phase-averaged planet/star flux ratios for several known EGPs at wide separations, and §8 reprises the essential conclusions of the paper. Footnote 5: Note, however, that the spectra of close-in EGPs have been addressed in SBH, as well as in Seager and Sasselov (1998,2000), Seager, Whitney, and Sasselov (2000), and Goukenleuque et al. (2000). ## 2 Numerical issues To calculate radiative/convective equilibrium atmospheres and spectra we use a specific variant of the computer program TLUSTY (Hubeny 1988; Hubeny & Lanz 1995). This variant involves the hybrid Complete Linearization/Accelerated Lambda Iteration (CL/ALI) method, although in the present runs we use a full ALI mode, which leads to an essential saving of computer time without slowing down the iteration process significantly. In addition, we employ TLUSTY's Discontinuous Finite Element (DFE) version (Castor, Dykema, & Klein 1992). The DFE technique, being first order, is optimal for handling irradiated atmospheres (SBH). All models are converged to one part in \\(10^{3}\\). Stellar spectra from Kurucz (1994) are used for the incoming fluxes at the outer boundaries. The inner boundary condition is the interior flux, and this (indexed by T\\({}_{\\rm eff}\\)) is taken from the evolutionary models of Burrows et al. (1997) for the given mass and age, unless otherwise indicated. This approximate procedure works well for wider-separation EGPs, but not as well for the closer-in EGPs (\\(<\\) 0.15 AU). We use a standard mixing-length prescription to handle convection and a mixing length of one pressure scale height. Since the atmosphere code is planar, we use the redistribution technique described in SBH, i.e. we weight the incident flux by 1/2 to account for the average inclination of the planetary surface to the line of sight to the primary. As described in SBH, the planetary spectra we present here are phase/time-averaged over the orbit. Care is taken with this procedure to ensure that energy is conserved and that energy _in_ (from the star and the planetary interior) equals energy _out_. To account for the anisotropy of single scattering off of cloud particles, we calculate with Mie theory the average of the cosine of the scattering angle, and multiply the Mie-theory-derived total scattering cross section by one minus this average. With this procedure, we are substituting the \"transport cross section\" for the total scattering cross section. This approach has been shown to mimic the effect of asymmetric scattering quite well (Sudarsky, Burrows, and Pinto 2000). For molecular and atomic compositions, we use an updated version of the chemical code of Burrows and Sharp (1999), which includes a prescription to account for the rainout of condensed species in a gravitational field and new thermochemical data. The derived molecular abundances are very similar to those obtained by Lodders (1999) and Lodders and Fegley (2002). Our solar metallicity is defined as the elemental abundance pattern found in Anders and Grevesse (1989).
For molecular and atomic opacities, we have developed an extensive database, described in part in Burrows et al. (2001) and SBH. The treatment of H\\({}_{2}\\)O and NH\\({}_{3}\\) clouds is done in a manner consistent with their respective condensation curves, and the cloud base is put at the intersection of the corresponding condensation curve at solar metallicity with the object's temperature/pressure (\\(T/P\\)) profile. In each iteration of the global ALI scheme, we find the position of the cloud base as the intersection of the corresponding condensation curve with the current \\(T/P\\) profile. The scale height of a cloud is assumed to be equal to one pressure scale height. We use Mie theory for the absorptive and scattering opacities of particles whose model size is determined by the theory of Cooper et al. (2003). In the subsequent iteration of the ALI scheme, we employ this cloud opacity and scattering self-consistently in the radiative transfer equation and the energy balance equation. In the next ALI iteration we again recalculate the position of the cloud base, and the whole process is repeated until the cloud position is fully stabilized. We note that in the initial stages of the global iteration process the cloud position may vary significantly; in some cases clouds appear (disappear) after several iterations of the cloudless (cloudy) atmosphere. This procedure ensures that the cloud position is self-consistent with the overall model atmosphere. This also means that the predicted cloud properties, in particular optical depths and base pressures, vary in a physical and consistent way with the orbital distance, mass, and age. ## 3 Orbital distance dependence of EGP spectra from 0.2 to 15 AU Figure 1 shows the \\(T/P\\) profiles for the distance sequence from 0.2 to 15 AU for a 1-\\(M_{\\rm J}\\) EGP irradiated by a G2V star. For specificity, we have assumed a radius of 1 \\(R_{\\rm J}\\) and an internal flux T\\({}_{\\rm eff}\\) of 100 K for the entire family. According to the models of Burrows et al. (1997), this corresponds roughly to an age of 5 Gyr, but after \\(\\sim\\)0.1 Gyr, the radius and gravity of the EGP vary little. The orbits are taken to be circular, so there is no assumed orbital phase dependence\\({}^{7}\\). Footnote 7: Note that with a significant eccentricity, this assumption is not valid. The intercepts with the dashed lines identified by either {NH\\({}_{3}\\)} or {H\\({}_{2}\\)O} denote the positions where the corresponding clouds form. Due to the cold trap effect and depletion due to rainout (Burrows and Sharp 1999), the higher-pressure intercept is taken to be at the base of the cloud. The spectral/atmospheric models include the effects of these clouds in a consistent way. Table 1 gives the modal particle sizes in microns that we derive using the theory of Cooper et al. (2003) when a cloud of either water or ammonia (or both) appears. For the ammonia clouds that form in this orbital distance sequence (at \\(\\gtrsim\\) 6 AU), the modal particle sizes we find hover near 50-60 \\(\\mu\\)m. The corresponding particle sizes in the water clouds are near 110 \\(\\mu\\)m. These particles are larger than for more massive SMOs. Furthermore, we assume that the particle size does not vary with altitude and that the particle size distribution (given a modal radius) is that of Deirmendjian (1964,1969). Clearly, a major ambiguity in EGP modeling is cloud physics (SBH). We have settled on the Cooper et al. (2003) theory to provide a consistent framework.
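As a schematic of the cloud-base placement step described above, the sketch below intersects a toy monotonic \\(T/P\\) profile with a toy condensation curve and keeps the deeper (higher-pressure) root. Both curves are invented stand-ins, assumed purely for illustration; in the full scheme this root is re-located after every ALI iteration until it stabilizes.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.interpolate import interp1d

# Toy T/P profile and a toy water condensation curve (assumed shapes only, not
# the actual chemistry/atmosphere tables used in the paper).
P = np.logspace(-3, 2, 200)                    # pressure [bar]
T_profile = 120.0*(P/1e-3)**0.12               # monotone toy profile [K]
T_cond = lambda p: 273.0 + 25.0*np.log10(np.maximum(p, 1e-6))   # toy condensation curve

T_of_P = interp1d(np.log(P), T_profile)
diff = lambda lnp: T_of_P(lnp) - T_cond(np.exp(lnp))

# Bracket sign changes of (profile - condensation curve) and keep the deepest root,
# which plays the role of the cloud base.
lnp_grid = np.log(P)
sign = np.sign([diff(l) for l in lnp_grid])
crossings = [brentq(diff, lnp_grid[i], lnp_grid[i+1])
             for i in range(len(P)-1) if sign[i]*sign[i+1] < 0]
print("cloud base at P =", np.exp(max(crossings)), "bar")
```

In the real calculation the profile itself changes once the cloud opacity is inserted, which is why the intersection must be iterated to convergence rather than solved once.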
When the chemistry indicates that both cloud types are present, we include them both in the atmospheric/spectral calculation. As Fig. 1 shows, the ammonia cloud is always above the water cloud. Water clouds form around a G2V star exterior to a distance near 1.5 AU, whereas ammonia clouds form around such a star exterior to a distance near 4.5 AU. Note that Jupiter itself is at a distance from the Sun of \\(\\sim\\)5.2 AU. For the closer-in objects, the \\(T/P\\) profile manifests an inflection. This inflection is a consequence of the dominance of external radiation over the internal heat flux. For longer ages and low masses, the orbital distance at which one must place the SMO/EGP to erase this inflection is large. For this model set, that distance is \\(\\sim\\)2 AU. For larger masses, dimmer primaries, and shorter ages, that distance decreases. In addition, the strong irradiation that produces the inflection in the \\(T/P\\) profile also forces the radiative/convective boundary to recede to higher pressures. For a 1-\\(M_{\\rm J}\\) EGP at an orbital distance of 0.05 AU (not shown) around a G2V star, this pressure can be greater than 1000 bars! Such is the case for HD 209458b and OGLE-TR-56b (Burrows, Sudarsky, and Hubbard 2003; Fortney et al. 2003). For comparison, and in anticipation of the discussion in §4 concerning the age dependence of irradiated EGP spectra, Fig. 2 portrays the evolution of the \\(T/P\\) profile of a 1-\\(M_{\\rm J}\\) EGP at a distance of 4 AU around a G2V star. Table 2 gives the corresponding modal particle sizes in microns (Cooper et al. 2003) for the ice particles of the water clouds that appear in this evolutionary sequence, as well as the inner flux T\\({}_{\\rm eff}\\), gravity, and planet radius as this 1-\\(M_{\\rm J}\\) EGP evolves according to the theory of Burrows et al. (1997). The modal particle size is very roughly constant with age. The radius of the 1-\\(M_{\\rm J}\\) EGP decreases by \\(\\sim\\)15% from 0.1 to 5 Gyr. Not surprisingly, at a distance of 4 AU no inflection in the \\(T/P\\) profile is produced. Note that at this orbital distance, and as early as \\(\\sim\\)50 Myr (not shown), water clouds form in Jovian-mass objects. Note also that at 4 AU, even after 5 Gyr, ammonia clouds have not yet formed in the atmosphere of an irradiated 1-\\(M_{\\rm J}\\) EGP. This is not true for a similar object in isolation (Burrows, Sudarsky, and Lunine 2003). Figure 3 depicts the planet-to-star flux ratios from 0.5 \\(\\mu\\)m to 30 \\(\\mu\\)m for the orbital distance study associated with the \\(T/P\\) profiles shown in Fig. 1. In the optical, the flux ratios vary between 10\\({}^{-8}\\) and 10\\({}^{-10}\\). In the near infrared, this ratio varies widely from \\(\\sim\\)10\\({}^{-4}\\) to 10\\({}^{-16}\\). However, in the mid-infrared beyond 10 \\(\\mu\\)m, the flux ratio varies more narrowly from 10\\({}^{-4}\\) to 10\\({}^{-7}\\). Hence, it makes a difference in what wavelength region one conducts a search for direct planetary light. Gaseous water absorption features (for all orbits) and methane absorption features (for the outer orbits) sculpt the spectra. The reflected component due to Rayleigh scattering and clouds (when present) is most manifest in the optical, and the emission component (similar to the spectrum of an isolated low-gravity brown dwarf) takes over at longer wavelengths.
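The gross shape of these contrast curves can be anticipated with a crude blackbody-plus-gray-reflection estimate. The sketch below is such a toy: the planetary temperature, geometric albedo, and the simple phase-averaging factor are all assumptions, and it ignores every spectral feature discussed above, so it is a sanity check rather than a substitute for the self-consistent radiative transfer.

```python
import numpy as np

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
def planck(lam, T):                      # spectral radiance B_lambda
    return (2*h*c**2/lam**5)/np.expm1(h*c/(lam*kB*T))

# Assumed toy parameters: G2V primary, Jupiter-like planet at 4 AU.
Rsun, Rjup, AU = 6.96e8, 7.15e7, 1.496e11
Tstar, Rstar = 5770.0, 1.0*Rsun
Tp, Rp, a, Ag = 160.0, 1.0*Rjup, 4.0*AU, 0.4   # assumed T_p and geometric albedo

lam = np.logspace(-6.3, -4.5, 200)       # 0.5 to ~30 microns
ratio = Ag*(Rp/(2*a))**2 + planck(lam, Tp)/planck(lam, Tstar)*(Rp/Rstar)**2
print(ratio.min(), ratio.max())          # reflected floor ~1e-9; thermal rise into the mid-IR
```

Even this toy reproduces the two headline features of Fig. 3: an optical contrast near 10\\({}^{-9}\\) set by reflection, and a mid-infrared contrast orders of magnitude more favorable, set by the planet's own thermal emission.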
For the EGPs interior to \(\sim\)1.0 AU, the fluxes longward of \(\sim\)0.8 \(\mu\)m are primarily due to thermal emission, not reflection. These atmospheres do not contain condensates and are heated efficiently by stellar irradiation. As a result, the \(Z\) (\(\sim\)1.0 \(\mu\)m), \(J\) (\(\sim\)1.2 \(\mu\)m), \(H\) (\(\sim\)1.6 \(\mu\)m), and \(K\) (\(\sim\)2.2 \(\mu\)m) band fluxes are larger by up to several orders of magnitude than those of the more distant EGPs. Generally, clouds increase a planet's flux in the optical, while decreasing it in the \(J\), \(K\), \(L^{\prime}\) (\(\sim\)3.5 \(\mu\)m), and \(M\) (\(\sim\)5.0 \(\mu\)m) bands. The transition between the reflection and emission components moves to longer wavelengths with increasing distance, and is around 0.8-1.0 \(\mu\)m at 0.2 AU and \(\sim\)3.0 \(\mu\)m at 15 AU. However, since the irradiation and the atmospheric structure and spectra are being calculated self-consistently, the emission and reflection components are in fact inextricably intertwined, and it is not conceptually correct to separate them. At large distances exterior to \(\sim\)3.5 AU, the flux longward of \(\sim\)15 \(\mu\)m manifests undulations due to pressure-induced absorption by H\({}_{2}\). Importantly, there is always a significant bump around the \(M\) band at 4-5 \(\mu\)m. The peak of this bump shifts from \(\sim\)4 \(\mu\)m to \(\sim\)5 \(\mu\)m with increasing orbital distance. Though muted by the presence of clouds, it is always a prominent feature of irradiated EGPs, as it is in T dwarfs and the Jovian planets of our solar system. Curiously, but not unexpectedly, as Fig. 3 indicates, the planet/star flux ratio is most favorable in the mid-infrared. This fact should be of some interest to those planning TPF or successor missions to the Spitzer Space Telescope. Though the major trend is a monotonic decrease in a planet's flux with increasing orbital distance, the 0.2-AU model atmosphere is hot enough that the sodium and potassium resonance absorption lines appear and suppress the flux around Na-D (0.589 \(\mu\)m) and the related K I doublet at 0.77 \(\mu\)m. The result is a lower integrated visible flux that is comparable to that of the otherwise dimmer 0.5-AU model. Figures 4 and 5 focus on the 0.5 \(\mu\)m to 2.0 \(\mu\)m region and allow one to distinguish one model from another at shorter wavelengths more easily than is possible in the panoramic Fig. 3. These figures allow us to see that for greater orbital distances, the atmospheric temperatures are too low for the alkali metals to appear, but the methane features near 0.62 \(\mu\)m, 0.74 \(\mu\)m, 0.81 \(\mu\)m, and 0.89 \(\mu\)m come into their own. Broad water bands around 0.94 \(\mu\)m, 1.15 \(\mu\)m, 1.5 \(\mu\)m, and 1.85 \(\mu\)m that help to define the \(Z\), \(J\), and \(H\) bands are always in evidence, particularly for the 0.2 and 1.0 AU models that don't contain water clouds. For greater distances, the presence of water clouds slightly mutes the variation with wavelength in the planetary spectra. Hence, smoothed water features and methane bands predominate beyond \(\sim\)1.5 AU.
## 4 Age dependence from 100 Myr to 5 Gyr of the spectrum of a 1-\(M_{\rm J}\) EGP at a given distance

Figure 6 presents the planet-to-star flux ratios from 0.5 \(\mu\)m to 6.0 \(\mu\)m for a 1-\(M_{\rm J}\) EGP orbiting a G2V star at 4 AU as a function of age. Figure 2 depicts the corresponding \(T/P\) profiles, along with the NH\({}_{3}\) and H\({}_{2}\)O condensation lines at solar metallicity. The theory of Burrows et al. (1997) is used to obtain an approximate mapping between mass, age, internal T\({}_{\rm eff}\), and gravity1. The depicted ages are 0.1, 0.3, 1, 3, and 5 Gyr. These planet parameters are chosen merely to represent the systematics with age; different EGP masses and orbital distances will yield quantitatively different spectra. Table 2 shows that for this suite of models the internal flux T\({}_{\rm eff}\) varies from 290 K to 103 K, the surface gravity varies from 1695 cm s\({}^{-2}\) to 2325 cm s\({}^{-2}\), and the planet radius varies from 1.17 \(R_{\rm J}\) to 1.0 \(R_{\rm J}\). Footnote 1: We note that the star evolves as well, but for clarity we have neglected this effect. As is clear from Fig. 6, younger EGPs with higher inner boundary T\({}_{\rm eff}\)s (Table 2) have much higher fluxes in the \(Z\), \(J\), \(H\), \(K\), and \(M\) bands. However, the older objects, having cooled more, have lower internal luminosities. This results in lower fluxes in those same bands by as much as two orders of magnitude. As Figure 2 shows, the older EGPs have progressively deeper water clouds. What is not obvious from Figure 2 is that these clouds are also thicker. This results in a very slightly increasing reflected optical flux with increasing age that accompanies the reverse trend in the near infrared. Hence, the fluxes in the optical shortward of \(\sim\)1.0 \(\mu\)m are only weak functions of age, while the fluxes in the \(Z\), \(J\), \(H\), \(K\), and \(M\) bands at early ages are strong functions of age. At later ages, the formation of water clouds moderates the age dependence of the \(Z\), \(J\), \(H\), and \(K\) band fluxes. In fact, there can be slight increases in flux in the \(Z\), \(J\), and \(H\) bands with increasing age. However, the \(M\) band flux continues to be diagnostic of age, monotonically decreasing by almost two orders of magnitude from 0.1 to 5 Gyr. Hence, the best diagnostics of age are in the near infrared, not the optical.

## 5 Irradiated EGP spectra as a function of mass from 0.5 \(M_{\rm J}\) to 8 \(M_{\rm J}\) at a given age and orbital distance

Figure 7 portrays planet-to-star flux ratios from 0.4 \(\mu\)m to 6.0 \(\mu\)m for a 5-Gyr EGP orbiting a G2V star at 4 AU, as a function of EGP mass. The masses represented are 0.5, 1, 2, 4, 6, and 8 \(M_{\rm J}\). An inner flux boundary condition T\({}_{\rm eff}\) from the evolutionary calculations of Burrows et al. (1997) has been employed and is given in Table 3. T\({}_{\rm eff}\) varies from 82 K to 251 K, the surface gravity varies from 1290 cm s\({}^{-2}\) to 17800 cm s\({}^{-2}\), and the radii vary from 0.95 \(R_{\rm J}\) to 1.04 \(R_{\rm J}\). Note that these radii peak in the middle of the sequence near 4 \(M_{\rm J}\). Due to the 5-Gyr age assumed, all these models have water clouds, and the derived modal particle sizes decrease monotonically with increasing mass from 146 \(\mu\)m at 0.5 \(M_{\rm J}\) to 39 \(\mu\)m at 8 \(M_{\rm J}\).
In general, the larger the EGP mass, the higher in the atmosphere the clouds form, but the cloud position is not fully monotonic at the low-mass end of the sequence. For higher mass, at a given age an EGP's inner T\({}_{\rm eff}\) and internal luminosity are higher. This results in higher fluxes in the \(Z\), \(J\), \(H\), \(K\), and \(M\) bands for higher masses and is similar to the trend seen in §4 with decreasing age. However, larger-mass EGPs also have higher surface gravities, which result in water clouds with lower column depths, and, hence, lower optical depths, despite the contrary trend of modal particle size (Table 3). The upshot is that higher-mass EGPs have slightly smaller planet/star flux ratios in the optical. This optical component is due to reflection off of cloudy atmospheres with roughly similar compositions. Hence, there is an anti-correlation between flux levels in the visible and near-infrared that might be diagnostic of planet mass for Jupiter-aged EGPs.

## 6 Irradiated brown dwarfs

Depicted in Fig. 8 are theoretical spectra of a 30-\(M_{\rm J}\) brown dwarf at ages of 1 and 5 Gyr in orbit around a G2V star from 0.4 \(\mu\)m to 1.5 \(\mu\)m. The results are for orbital distances from 5 AU to 40 AU and a distance to the Earth of 10 parsecs and include irradiation effects. As before, the theory of Burrows et al. (1997) is used to determine T\({}_{\rm eff}\) and gravity for this mass and these ages. In Fig. 8, the prominence of the Na-D (0.589 \(\mu\)m) and K I (0.77 \(\mu\)m) features at shorter wavelengths is clear and is canonical for brown dwarfs (Burrows, Marley, and Sharp 2000). Unlike for the lower mass EGPs discussed in §3, §4, and §5, the internal luminosity of such a relatively massive SMO dominates its energy budget. As a consequence, the brown dwarf's spectrum longward of 0.9 \(\mu\)m is unaffected by irradiation. However, the reflected component in the optical, particularly for the older brown dwarf with lower internal heat content and luminosity, is a function of distance. As Fig. 8 demonstrates, the optical flux from a cool brown dwarf can be elevated shortward of 0.7 \(\mu\)m by as much as an order of magnitude in the \(V\) band. In particular, the shape of the Na-D feature can be significantly altered. Since there are no clouds in the atmospheres of these brown dwarf models, Rayleigh scattering off of H\({}_{2}\), He, and H\({}_{2}\)O accounts for this reflection. Our prediction is that the optical spectra of brown dwarfs that are close companions to K, G, or F stars will be modified by irradiation. Note that the T dwarf Gliese 229B is at a projected distance of \(\sim\)40 AU, but that its primary is an M4V star. Such a star is too dim to so radically alter a brown dwarf's optical flux. For an L dwarf companion, the presence of silicate clouds in its atmosphere will reflect a primary's light in the blue and UV. Someday, such a reflected component might be detectable.

## 7 Predicted phase-averaged spectra for known EGPs at wide angular separations

Table 4 lists many of the known EGPs that, due to a propitious combination of semi-major axis and distance from the Earth, are at wide angular separations from their parent stars2. Ordered by decreasing separation (defined as the ratio of semi-major axis to distance), Table 4 also lists the stellar type of the primary, semi-major axis, Hipparcos distance, orbital period, \(m_{p}\sin(i)\), and orbital eccentricity.
This family of known EGPs comprises some of the prime candidates for the direct detection of planetary light using interferometric, adaptive-optics, or coronagraphic techniques (§1). Note that the separations quoted in Table 4 don't take into account the projection of the orbit or variations due to non-zero eccentricities (which can be large). In particular, variations due to the significant excursions in planet-star distance that attend large eccentricities can result in large changes in irradiation regimes. In turn, this can result in large variations in the planet spectrum. Interestingly, it is possible for an EGP atmosphere to cycle between states with and without clouds, with the concomitant large changes in flux ratios and spectral signatures with orbital phase (Sudarsky, Burrows, and Hubeny 2004). Figure 3 gives some idea of the range of spectral variation possible for highly-eccentric EGPs. Figures 3, 6, and 7 provide a broad-brush view of generic EGP spectra for a range of orbital distances, ages, and masses. With Figs. 9, 10, and 11, we provide phase-averaged predictions/calculations for a specific subset of the known EGPs listed in Table 4. This subset includes HD 39091b, \(\gamma\) Cephei b, HD 70642b, \(\upsilon\) And d, Gliese 777A b, HD 216437b, HD 147513b, 55 Cancri d, 47 UMa b, 47 UMa c, 14 Her b, and \(\epsilon\) Eri b. Table 5 itemizes the EGPs, along with their derived T\({}_{\rm eff}\)s and surface gravities. In addition, Table 5 lists the theoretical modal particle sizes of the water droplets that form in their atmospheres. For definiteness, the planetary masses are set equal to the measured \(m_{p}\sin(i)\) and, as before, the evolutionary theory of Burrows et al. (1997) is used to estimate the corresponding surface gravities and inner boundary T\({}_{\rm eff}\)s. As described in SBH and §2, a stellar spectrum from Kurucz (1994) for the primary stellar type given for each EGP in Table 5 is used for the outer irradiation boundary condition in the self-consistent atmosphere/spectrum calculation. As a comparison of Figs. 3, 4, and 5 with Figs. 9, 10, and 11 demonstrates, the major dependence of the planet-to-star flux ratio is on orbital distance. Note that for all of the EGPs listed in Table 5, the assumption that the planet-star distance is fixed at the measured semi-major axis does not result in the formation of ammonia clouds. However, such clouds should form near apastron for the known EGPs with large semi-major axes and high eccentricities, such as \(\epsilon\) Eri b and 55 Cnc d. The appearance and disappearance of such clouds with orbital phase would be exciting signatures to detect.

## 8 Summary and Conclusions

In this paper, we have calculated theoretical phase-averaged planet/star flux ratios in the optical, near-infrared, and mid-infrared for wide-separation irradiated EGPs as a function of orbital distance, mass, and age. We have also predicted the corresponding quantities for 12 specific known EGPs, given their corresponding primary spectra, average orbital distances, approximate masses, and approximate ages. Hence, we have explored various physical diagnostics that can inform the direct EGP detection programs now being planned or proposed. In the optical, the flux ratios for distances from 0.2 AU to 15 AU and masses from 0.5 \(M_{\rm J}\) to 8 \(M_{\rm J}\) can vary from slightly above 10\({}^{-8}\) to \(\sim\)10\({}^{-10}\).
In the near infrared around 2-4 \(\mu\)m, the flux ratio ranges more widely, spanning values from 10\({}^{-4}\) to as low as 10\({}^{-16}\). At \(\sim\)5 \(\mu\)m, the planet/star flux ratio ranges over approximately four orders of magnitude and can be as high as 10\({}^{-4}\). The \(M\) band should be a useful region to explore, and \(M\)-band fluxes are sensitive to the presence of clouds. For all models, the mid-infrared flux ratio from 10 \(\mu\)m to 30 \(\mu\)m is always encouragingly high. In fact, for closer separations (0.05 AU-0.1 AU), not the subject of this paper, the flux ratio at \(\sim\)20 \(\mu\)m can be \(\sim\)10\({}^{-3}\). Depending upon orbital distance, age, and mass, spectral features due to methane, water, and the alkali metals are prominent. Furthermore, there is a slight anti-correlation in the effects of clouds in the optical and infrared, with the optical fluxes increasing and the infrared fluxes decreasing with increasing cloud depth. For young and massive irradiated EGPs, there are prominent peaks in the \(Z\), \(J\), \(H\), \(K\), and \(M\) bands. Though the optical flux is not a very sensitive function of age, there is a useful age dependence of the fluxes in these bands. Furthermore, there is an anti-correlation with increasing mass at a given age and orbital distance between the change in flux in the optical and in the near-IR bands, with the optical fluxes decreasing and the \(Z\), \(J\), \(H\), \(K\), and \(M\) band fluxes increasing with increasing mass. The spectra of more massive SMOs (brown dwarfs) longward of \(\sim\)0.9 \(\mu\)m are not significantly affected by stellar irradiation. Their internal heat content and interior fluxes are too large. However, brown dwarf fluxes from 0.4 \(\mu\)m to 0.65 \(\mu\)m can be enhanced by Rayleigh reflection by as much as a factor of 10. This is particularly true of old or low-mass brown dwarfs and is a predictable function of distance. Moreover, irradiation can alter the profile shape of the Na-D feature significantly. We are not able to calculate EGP spectra for all possible combinations of mass, age, composition, orbital distance, eccentricity, orbital phase, Keplerian element, and primary spectral type. This fact is what motivates the more modest synoptic view we have provided in this paper. However, in the process of developing the tools for this study, we have established the capability to calculate EGP spectra for any combination of these parameters. In particular, Sudarsky, Burrows, and Hubeny (2004) address the orbital phase and eccentricity dependences of irradiated EGP spectra. The remote sensing of the atmospheres of EGPs will be challenging, but the detection and characterization of the direct light from a planet outside our solar system will be an important milestone in both astronomy and planetary science. One instrument proposed to meet this challenge is the space-based coronagraphic imager _Eclipse_ (Trauger et al. 2000, 2001). The _Eclipse_ instrument team is predicting a contrast capability of \(\sim\)10\({}^{-9}\) for an inner working angle of 0.3\({}^{\prime\prime}\) or 0.46\({}^{\prime\prime}\) in the \(V\)/\(R\) or \(Z\) bands, respectively. As Figs. 3 through 7 indicate, with such a capability irradiated EGPs could be detected and analyzed. Many could even be discovered. However, whether such sensitivity is achievable remains to be demonstrated.
Be that as it may, further advancement in our understanding of extrasolar planets is contingent upon technical advances that would enable the direct measurement of the dynamically dominant and brighter components of extrasolar planetary systems, the EGPs.

The authors wish to acknowledge Bill Hubbard, Jonathan Lunine, Jim Liebert, John Trauger, Jonathan Fortney, Aigen Li, Christopher Sharp, Drew Milsom, Maxim Volobuyev, and Curtis Cooper for fruitful conversations or technical aid during the course of this work, as well as NASA for its financial support via grants NAG5-10760 and NAG5-10629. Furthermore, we acknowledge support through the Cooperative Agreement #NNA04CC07A between the University of Arizona/NOAO LAPLACE node and NASA's Astrobiology Institute. Finally, the first author would also like to thank the Kavli Institute for Theoretical Physics where some of this work was performed.

## References

* [1] Akeson, R. L. & Swain, M. R. 2000, in _From Giant Planets to Cool Stars_, ed. C. A. Griffith & M. S. Marley, ASP Conference Series, 212, 300
* [2] Akeson, R. L., Swain, M. R., & Colavita, M. M. 2000, in _Interferometry in Optical Astronomy_, ed. P. J. Lena, Proc. SPIE 4006, 321
* [3] Anders, E. & Grevesse, N. 1989, Geochim. Cosmochim. Acta, 53, 197
* [4] Antonello, E. & Ruiz, S. M. 2002, _The COROT Mission_, [http://www.astro-mrs.fr/projects/cord/cortomism.ps](http://www.astro-mrs.fr/projects/cord/cortomism.ps)
* [5] Baraffe, I., Chabrier, G., Allard, F., & Hauschildt, P. H. 2003, Astron. Astrophys., 402, 701
* [6] Bennett, G. F. et al. 2002, ApJ, 581, L115
* [7] Brown, T. M., Charbonneau, D., Gilliland, R. L., Noyes, R. W., & Burrows, A. 2001, ApJ, 552, 699
* [8] Burrows, A., Saumon, D., Guillot, T., Hubbard, W. B., & Lunine, J. I. 1995, Nature, 373, 1919
* [9] Burrows, A. & Lunine, J. I. 1995, Nature, 378, 333
* [10] Burrows, A., Marley, M., Hubbard, W. B., Lunine, J. I., Guillot, T., Saumon, D., Freedman, R., & Sharp, C. 1997, ApJ, 491, 856
* [11] Burrows, A. & Sharp, C. M. 1999, ApJ, 512, 843
* [12] Burrows, A., Marley, M. S., & Sharp, C. M. 2000, ApJ, 531, 438
* [13] Burrows, A., Hubbard, W. B., Lunine, J. I., & Liebert, J. 2001, Rev. Mod. Phys., 73, 719
* [14] Burrows, A., Sudarsky, D., & Hubbard, W. B. 2003, ApJ, 594, 545
* [15] Burrows, A., Sudarsky, D., & Lunine, J. I. 2003, ApJ, 596, 587
* [16] Butler, R. P., Marcy, G. W., Williams, E., Hauser, H., & Shirts, P. 1997, ApJ, 474, L115
* [17] Butler, R. P., Marcy, G. W., Fischer, D. A., et al. 1999, ApJ, 526, 916
* [18] Castor, J. I., Dykema, P. G., & Klein, R. I. 1992, ApJ, 387, 561
* [19] Charbonneau, D., Brown, T. M., Latham, D. W., & Mayor, M. 2000, ApJ, 529, L45
* [20] Charbonneau, D., Brown, T. M., Noyes, R. W., Gilliland, R. L., & Burrows, A. 2001, ApJ, 552, 891
* [21] Charbonneau, D., Brown, T. M., Noyes, R. W., & Gilliland, R. L. 2002, ApJ, 568, 377
* [22] Christensen-Dalsgaard, J. 2000, [http://bigcat.obs.aau.dk/hans/mons](http://bigcat.obs.aau.dk/hans/mons)
* [23] Cooper, C. S., Sudarsky, D., Milsom, J. A., Lunine, J. I., & Burrows, A. 2003, ApJ, 586, 1320
* [24] Deirmendjian, D. 1964, Applied Optics, 3, 187
* [25] Deirmendjian, D. 1969, _Electromagnetic Scattering on Spherical Polydispersions_ (New York: Elsevier)
* [26] Fortney, J. J., Sudarsky, D., Hubeny, I., Cooper, C. S., Hubbard, W. B., Burrows, A., & Lunine, J. I. 2003, ApJ, 589, 615
* [27] Goukenleuque, C., Bézard, B., Joguet, B., Lellouch, E., & Freedman, R. 2000, Icarus, 143, 308
* [28] Henry, G., Marcy, G. W., Butler, R. P., & Vogt, S. S.
2000, ApJ, 529, L41
* [29] Hinz, P. M. 2001, PhD thesis, The University of Arizona
* [30] Hubeny, I. 1988, Computer Physics Comm., 52, 103
* [31] Hubeny, I. & Lanz, T. 1995, ApJ, 439, 875
* [32] Koch, D., Borucki, W., Webster, L., Dunham, E., Jenkins, J., Marrion, J., & Reitsema, H. 1998, SPIE Conference 3356: _Space Telescopes and Instruments_ V, 599
* [33] Konacki, M., Torres, G., Jha, S., & Sasselov, D. 2003, Nature, 421, 507
* [34] Kurucz, R. 1994, _Kurucz CD-ROM No. 19_ (Cambridge: Smithsonian Astrophysical Observatory)
* [35] Levine, B. M. et al. 2003, SPIE, 4852, 221
* [36] Lodders, K. 1999, ApJ, 519, 793
* [37] Lodders, K. & Fegley, B. 2002, Icarus, 155, 393
* [38] Marcy, G. W. & Butler, R. P. 1996, ApJ, 464, L147
* [39] Marcy, G. W., Butler, R. P., Vogt, S. S., Fischer, D., & Lissauer, J. J. 1998, ApJ, 505, L147
* [40] Marcy, G. W., Butler, R. P., Vogt, S. S., Fischer, D., & Liu, M. C. 1999, ApJ, 520, 239
* [41] Marcy, G. W., Cochran, W. D., & Mayor, M. 2000, in _Protostars and Planets IV_, ed. V. Mannings, A. P. Boss, & S. S. Russell (Tucson: The University of Arizona Press), p. 1285-1311
* [42] Marley, M. S., Gelino, C., Stephens, D., Lunine, J. I., & Freedman, R. 1999, ApJ, 513, 879
* [43] Matthews, J. M., Kuschnig, R., Walker, G. A. H., et al. 2001, in _The Impact of Large-Scale Surveys on Pulsating Star Research_, ed. L. Szabados & D. Kurtz, p. 74
* [44] Mayor, M. & Queloz, D. 1995, Nature, 378, 355
* [45] Paresce, F. 2001, _Scientific Objectives of the VLTI Interferometer_
* [46] Queloz, D., Mayor, M., Weber, L., Blécha, A., Burnet, M., Confino, B., Naef, D., Pepe, F., Santos, N., & Udry, S. 2000, Astron. Astrophys., 354, 99
* [47] Santos, N. C., Mayor, M., Naef, D., Pepe, F., Queloz, D., Udry, S., Burnet, M., & Revaz, Y. 2000, Astron. Astrophys., 356, 599
* [48] Seager, S. & Sasselov, D. D. 1998, ApJ, 502, 157
* [49] Seager, S., Whitney, B. A., & Sasselov, D. D. 2000, ApJ, 540, 504
* [50] Seager, S. & Sasselov, D. D. 2000, ApJ, 537, 916
* [51] Sudarsky, D., Burrows, A., & Pinto, P. 2000, ApJ, 538, 885
* [52] Sudarsky, D., Burrows, A., & Hubeny, I. 2003, ApJ, 588, 1121
* [53] Sudarsky, D., Burrows, A., & Hubeny, I. 2004, in preparation
* [54] Torres, G., Konacki, M., Sasselov, D., & Jha, S. 2003, astro-ph/0310114
* [55] Trauger, J., Backman, D., Brown, R. A., et al. 2000, AAS Meeting 197, 49.07
* [56] Trauger, J., Hull, A. B., & Redding, D. A. 2001, AAS Meeting 199, 86.04
* [57] Unwin, S. C. & Shao, M. 2000, in _Interferometry in Optical Astronomy_, ed. P. J. Lena & A. Quirrenbach, 754
* [58] van Belle, G. & Vasisht, G. 1998, _The Keck Interferometer Science Requirements Document, Revision 2.2_, Jet Propulsion Laboratory
* [59] Vidal-Madjar, A., Lecavelier des Etangs, A., Désert, J.-M., Ballester, G. E., Ferlet, R., Hébrard, G., & Mayor, M. 2003, Nature, 422, 143
* [60] Werner, M. W. & Fanson, J. L. 1995, Proc. SPIE, 2475, p. 418-427

\begin{table} \begin{tabular}{c c c c c} \hline \hline \(M/M_{\rm J}\) & T\({}_{\rm eff}\) (K) & \(\log_{10}g\) (cm s\({}^{-2}\)) & \(R/R_{\rm J}\) & \(r_{0}\) (\(\mu\)m) \\ \hline 0.5 & 82 & 3.11 & 0.95 & 146 \\ 1 & 103 & 3.37 & 1.00 & 109 \\ 2 & 134 & 3.64 & 1.03 & 80 \\ 4 & 177 & 3.93 & 1.04 & 57 \\ 6 & 216 & 4.12 & 1.03 & 46 \\ 8 & 251 & 4.25 & 1.02 & 39 \\ \hline \end{tabular} \end{table} Table 3: Inner flux T\({}_{\rm eff}\), surface gravity, radius, and modal particle size \(r_{0}\) as a function of EGP mass for the 5-Gyr, 4-AU sequence of §5.

Table 4: Interesting EGPs Listed by Angular Separation. References. -- 1) From Sudarsky, Burrows, & Hubeny (2003), where a modal particle size of 5 \(\mu\)m was assumed.
Figure 1: Profiles of atmospheric temperature (in Kelvin) versus the logarithm base ten of the pressure (in bars) for a family of irradiated 1-\(M_{\rm J}\) EGPs around a G2V star as a function of orbital distance. Note that the pressure is decreasing along the ordinate, which thereby resembles altitude. The orbits are assumed to be circular, the planets are assumed to have a radius of 1 \(R_{\rm J}\), the effective temperature of the inner boundary flux is set equal to 100 K, and the orbital separations vary from 0.2 AU to 15 AU. The intercepts with the dashed lines identified with either \(\rm\{NH_{3}\}\) or \(\rm\{H_{2}O\}\) denote the positions where the corresponding clouds form. See text for a discussion.

Figure 2: Similar to Fig. 1, this figure depicts atmospheric profiles of temperature (in Kelvin) versus the logarithm base ten of the pressure (in bars) for a family of irradiated 1-\(M_{\rm J}\) EGPs around a G2V star, but as a function of age at a given orbital distance of 4.0 AU. The ages vary from 0.1 to 5.0 Gyr. Note that the pressure is decreasing along the ordinate, which thereby resembles altitude. The cloud condensation curves for both NH\({}_{3}\) (moot for this sequence) and H\({}_{2}\)O are given as the dashed lines, and the spectra/atmospheric models include the effects of the water clouds in a consistent way. See text in §3 and §4 for relevant details and discussion.

Figure 3: Planet-to-star flux ratios versus wavelength (in microns) from 0.5 \(\mu\)m to 30 \(\mu\)m for a 1-\(M_{\rm J}\) EGP with an age of 5 Gyr orbiting a G2V main sequence star similar to the Sun. This figure portrays ratio spectra as a function of orbital distance from 0.2 AU to 15 AU. Zero eccentricity is assumed and the planet spectra have been phase-averaged as described in Sudarsky, Burrows, and Hubeny (2003). The associated \(T/P\) profiles are given in Fig. 1, and Table 1 lists the modal radii for the particles in the water and ammonia clouds. Note that the planet/star flux ratio is most favorable in the mid-infrared. See text for further discussion.

Figure 4: The same as Fig. 3, but highlighting the shorter wavelengths and for a subset of distances (0.2, 1, 4, 10 AU). This figure provides a clearer picture of the features shortward of 2.0 \(\mu\)m for each of the represented models. For the 0.2 AU model, the temperatures of the atmosphere are high enough for the Na-D doublet around 0.589 \(\mu\)m and the K I doublet near 0.77 \(\mu\)m to be visible. These features are even more prominent for closer-in EGPs (Sudarsky, Burrows, and Hubeny, 2003). At greater orbital distances, the atmospheric temperatures are too low for the alkali metals to appear, but the methane features near 0.62 \(\mu\)m, 0.74 \(\mu\)m, 0.81 \(\mu\)m, and 0.89 \(\mu\)m come into their own. Water bands around 0.94 \(\mu\)m, 1.15 \(\mu\)m, 1.5 \(\mu\)m, and 1.85 \(\mu\)m that help to define the \(Z\), \(J\), and \(H\) bands are always of importance. For greater distances, the presence of water clouds can smooth the variations in the planetary spectra that would otherwise be large due to the strong absorption features of gaseous water vapor. See text for discussion.

Figure 5: The same as Fig. 3, but, as in Fig. 4, highlighting the shorter wavelengths. A different subset of distances (0.5, 2, 6, 15 AU) is shown. See the text and the figure caption for Fig. 4 for details. Figures 4 and 5 allow one to distinguish one model from another more easily than is possible in the panoramic Fig. 3.
Figure 6: The planet-to-star flux ratio from 0.5 \(\mu\)m to 6.0 \(\mu\)m for a 1-\(M_{\rm J}\) EGP orbiting a G2V star at 4 AU as a function of age. The ages are 0.1, 0.3, 1, 3, and 5 Gyr. An inner flux boundary condition T\({}_{\rm eff}\) from the evolutionary calculations of Burrows et al. (1997) has been employed. The effect of clouds is handled in the radiative transfer calculation in a completely consistent fashion. See Table 2, Figure 2, and the text for details and discussion.

Figure 7: Similar to Fig. 6, but the planet-to-star flux ratio from 0.4 \(\mu\)m to 6.0 \(\mu\)m for a 5-Gyr EGP orbiting a G2V star at 4 AU, as a function of EGP mass. The masses represented are 0.5, 1, 2, 4, 6, and 8 \(M_{\rm J}\). See Table 3 and the text for a discussion.

Figure 8: Depicted are theoretical spectra of a 30-\(M_{\rm J}\) brown dwarf at ages of 1 and 5 Gyr in orbit around a G2V star that is irradiating it. The logarithm base ten of the flux in millijanskys at 10 parsecs versus wavelength in microns from 0.4 \(\mu\)m to 1.5 \(\mu\)m is given. The theory of Burrows et al. (1997) was used to determine T\({}_{\rm eff}\) and gravity for this mass and these ages. The results are shown for different orbital distances from 5 to 40 AU. See text for a discussion.

Figure 9: Similar to Figs. 6 and 7, but for four specific known EGPs listed in Table 5. These are HD 39091b, \(\gamma\) Cephei b, HD 70642b, and \(\upsilon\) And d. The wavelengths range from 0.5 \(\mu\)m to 6.0 \(\mu\)m. The planet fluxes are phase-averaged and the effect of the known eccentricities is ignored. The orbital distances are assumed to be equal to the measured semi-major axes and the planets' masses are set equal to the measured values of \(m_{p}\sin(i)\).

Figure 10: Same as Fig. 9, but for Gliese 777A b, HD 216437b, HD 147513b, and 55 Cancri d. Refer to Fig. 9, Table 5, and the text for further details and discussion.

Figure 11: Same as Figs. 9 and 10, but for 47 UMa b, 47 UMa c, 14 Her b, and \(\epsilon\) Eridani b. Refer to Fig. 9, Table 5, and the text for further details and discussion.
We calculate as a function of orbital distance, mass, and age the theoretical spectra and orbit-averaged planet/star flux ratios for representative wide-separation extrasolar giant planets (EGPs) in the optical, near-infrared, and mid-infrared. Stellar irradiation of the planet's atmosphere and the effects of water and ammonia clouds are incorporated and handled in a consistent fashion. We include predictions for 12 specific known EGPs. In the process, we derive physical diagnostics that can inform the direct EGP detection and remote sensing programs now being planned or proposed. Furthermore, we calculate the effects of irradiation on the spectra of a representative companion brown dwarf as a function of orbital distance. planetary systems--binaries: general--planets and satellites: general--stars: low-mass, brown dwarfs--radiative transfer--molecular processes--infrared: stars
# Transmission of severe acute respiratory syndrome in dynamical small-world networks Naoki Masuda Faculty of Engineering, Yokohama National University, 79-5, Tokiwadai, Hodogaya, Yokohama, 240-8501 Japan Norio Konno Faculty of Engineering, Yokohama National University, 79-5, Tokiwadai, Hodogaya, Yokohama, 240-8501 Japan Kazuyuki Aihara Department of Complexity Science and Engineering, Graduate School of Frontier Sciences, University of Tokyo, 7-3-1 Hongo Bunkyo-ku Tokyo 113-8656 Japan ERATO Aihara Complexity Modelling Project, Japan Science and Technology Agency, Tokyo, Japan November 3, 2021

## I Introduction

The first case of the recent outbreak of severe acute respiratory syndrome (SARS) is estimated to have started in the Guangdong province of the People's Republic of China in November of 2002. After that, SARS spread to many countries, causing a large number of infections. In spite of worldwide research efforts, the biological mechanism of the SARS infection is not yet fully clarified, which hampers the development of antiviral drugs or other conclusive medical treatments. Under these conditions, an effective measure was to track everybody suspected of involvement in the spread and quarantine them, just as a century ago. However, more effective strategies in terms of safety and cost could be established with knowledge of the dynamical mechanisms of the outbreak, including the effects of so-called superspreaders (SS's) and spreads in hospitals. Along this line, epidemiological models that explain the actual and potential transmission patterns can be helpful for suppressing the spreads. For example, dynamical compartmental models for fully mixed populations [1] and for geographical subpopulations in Hong Kong [2] have been proposed and fitted to the real data; they are successful in explaining the data and determining the basic reproductive number [3]. However, the models contain many compartments and many parameters whose values are determined manually, which may obscure the relative contributions of the factors. Here we rather propose a simplified spatial model to indicate how the interplay between network structure and individual factors affects the epidemics. A prominent feature in the SARS epidemics is the dominant influence of SS's [1; 2; 4]. According to the US Centers for Disease Control and Prevention (CDC), a patient is defined to be a SS if he or she has infected more than 10 people. The SARS epidemics are special in that a majority of cases originated from just a small number of SS's. On the other hand, nonsuperspreading patients, which by far outnumber SS's, explain only a small portion of the infection events. In Singapore, just 5 SS's infected 80% of about 200 patients, whereas about 80% of the patients infected nobody [4; 5; 6]. Also in Hong Kong, one patient caused more than 100 successive cases [2; 6]. Similar key persons have been identified in other parts of the world as well. Epidemics of Ebola, measles, and tuberculosis also often involve SS's [4]. It is believed that SS's are caused both by biological factors such as genetic tendencies, health conditions, and strength of the virus, and by social factors such as the manner of social contacts and the global structure of social interaction. This agrees with the general understanding that epidemics depend on personal factors and the structure of social networks [7; 8].
Although previous dynamical models consider SS's to be exceptional [2] or do not model them explicitly [1], we incorporate them as a key factor for the spreading. Another feature of SARS is rapid spreading in hospitals, which played a pivotal role in, at least, local outbreaks, sometimes accounting for more than half the total regional cases. The embarrassing fact that hospitals are actually amplifying diseases [2; 4] should be provided with convincing mechanisms so that we can reduce the risk of spreads in hospitals and relieve the public of anxieties. To this end again, we will examine the combined effects of SS's and the network structure. Here we construct a dynamical model for SARS spreads, which is simpler than the previous models [1; 2] but takes into account SS's and the spatial structure represented by the small-world properties [9]. We then propose possible means for preventing SARS spreads in the absence of vaccination. The simulated SARS epidemics are also compared with the epidemics of sexually transmitted diseases (STD's) and computer viruses, whose mechanism owes much to the scale-free properties of the underlying networks [8; 10; 11; 12].

## II Model and general theory

Our model is composed of \(n\) persons located on the vertices of a graph. A pair of individuals connected by an undirected edge directly interact and possibly transmit SARS. We simply assume three types of individuals: namely, the susceptible, the infected but non-SS's, and the SS's. Here a SS, probably carrying strong and/or abundant viruses, has a strong tendency to infect the susceptible, even without frequent social contacts. The dynamics is the contact process with three states [12; 13; 14]. A susceptible can be infected by an adjacent patient (a SS or an infected non-SS) at certain rates. A patient returns to the susceptible state at rate 1, mimicking recovery from SARS or death, followed by the local emergence of a new healthy person. The infected non-SS's and SS's are modeled with different rates of infection [3; 8; 14]. An infected non-SS turns an adjacent susceptible into an infected non-SS or a SS at rate \(\lambda_{I}(1-p)\) or \(\lambda_{I}p\), respectively, where \(p\) parametrizes the fraction of SS's among the patients. Similarly, a SS turns an adjacent susceptible into an infected non-SS or a SS at rate \(\lambda_{SS}(1-p)\) or \(\lambda_{SS}p\), respectively [14]. The infected non-SS's and SS's do not have direct interactions even if they are next to each other. However, they interact indirectly owing to the cross-talk rates \(\lambda_{I}p\) and \(\lambda_{SS}(1-p)\). These infection events, as well as death events at rate 1, happen independently for all the sites. The parameter values depend on the definition of a SS, the network structure, and the time scales. With the supposition of total mixing of the individuals and the definition of a SS by CDC, the data of the outbreak in Singapore [4] provide a rough estimate of \(p=0.03\). As a rough estimate, we set \(\lambda_{SS}/\lambda_{I}=20\) based on the descriptions of the small number of superspreaders identified in Singapore [4] and Hong Kong [2; 6]. To our knowledge, more extensive data about the number of cases caused by each patient or about the detailed chains of transmissions are not available in other regions.
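To make the transition rules concrete, the following is a minimal sketch of one small discrete-time update of a single site under these rates (Python). The default parameter values \(\lambda_{I}=0.026\), \(\lambda_{SS}=0.52\), and \(p=0.03\) are those used in the simulations of Sec. III; the function and constant names are ours, not part of the original paper.

```python
import random

S, I, SS = 0, 1, 2   # susceptible, infected non-SS, superspreader

def update_site(state, neighbor_states, dt, lam_I=0.026, lam_SS=0.52, p=0.03):
    """One discrete-time step for a single site.  Patients recover (or die and
    are replaced by a healthy person) at rate 1.  A susceptible is infected at
    rate lam_I per infected non-SS neighbor plus lam_SS per SS neighbor; a new
    infection becomes a SS with probability p, independent of the infector."""
    if state != S:
        return S if random.random() < dt else state     # recovery/death, rate 1
    rate = sum(lam_I if s == I else lam_SS if s == SS else 0.0
               for s in neighbor_states)
    if random.random() < rate * dt:                     # needs rate * dt << 1
        return SS if random.random() < p else I
    return state

# Example: a susceptible surrounded by one SS and three infected non-SS's.
print(update_site(S, [SS, I, I, I], dt=0.05))
```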
A relevant condition that seemingly holds in the current outbreak is \(\lambda_{I}<1<\lambda_{SS}\), where, for the moment, \(\lambda_{I}\) and \(\lambda_{SS}\) are understood to be multiplied by the number of neighbors. In this situation, the mean-field theory predicts the existence of a threshold for \(p\) above which the disease spreads widely [14]. The recent outbreak may have led to a suprathreshold regime even with small \(p\) because \(\lambda_{SS}\) is presumably huge. The model studies using real data suggest that the threshold has been crossed from above by the control efforts [1; 2]. Next, we introduce the local network structure. At a given time, the whole population is typically divided into groups within which relatively frequent social contacts are expected. A group represents, for example, a hospital, school, family, market, train, or office, and it is characterized by clustering properties [9; 15] and dense coupling. We prepare \(g\) groups, each containing \(n_{g}=n/g\) individuals. The \(i\)th individual (\(1\leq i\leq n\)) is connected to randomly chosen \(k_{i}\) (\(0\leq k_{i}\leq n_{g}-1\)) individuals within the group. The rate of transmission is proportional to the vertex degree \(k_{i}\) in the early stage of epidemics [3; 12]. Apart from the effects of \(k_{i}\), \(\lambda_{I}\), and \(\lambda_{SS}\), some social groups are more prone to transmit SARS than others. This group dependence originates in, for example, ventilation, sanitary levels, and the duration of grouping [1; 2; 5]. The effect is represented by a multiplicative factor \(T_{j}\) for the \(j\)th group (\(1\leq j\leq g\)). Then the effective intragroup infection strength is calculated as \(\left\langle k_{i}\right\rangle_{j}T_{j}\), where \(\left\langle\cdots\right\rangle_{j}\) is the average over \(i\) in the \(j\)th group. Presumably, social groups such as hospitals, congested trains, airplanes, and poorly ventilated residences have large \(\left\langle k_{i}\right\rangle_{j}T_{j}\). For example, hospitals may have large \(\left\langle k_{i}\right\rangle_{j}T_{j}\) because of a high population density yielding large \(\left\langle k_{i}\right\rangle_{j}\), and because the susceptible hospitalized for other diseases may be generally weak against infectious diseases, including SARS. The influence of trains, due to congestion and the closedness of the air over long periods, is a potential source of outbreaks in regions where people habitually commute by congested public transportation, as in Japan. In contrast, \(\left\langle k_{i}\right\rangle_{j}T_{j}\) may be low for groups formed in open spaces. However, we note that SARS can also break out in low-risk groups if \(\lambda_{SS}\) is sufficiently large. For simplicity, we assume that \(g_{0}\) out of \(g\) groups have \(T_{j}=T_{h}\), which is larger than the value \(T_{j}=T_{l}\) taken by the other \(g-g_{0}\) groups. Although many models ignore the spatial structure of the population and rely on mean-field descriptions [1; 3], spatial aspects should be incorporated for understanding the real dynamics of epidemics [2; 7; 8; 16]. The mainstream approaches from this standpoint are percolation and contact-process methods on regular lattices [13; 14; 17]. However, \(d\)-dimensional lattices have characteristic path length \(L\) -- that is, the mean distance between a pair of vertices -- proportional to \(n^{1/d}\).
In social networks, \\(L\\) is approximately proportional to \\(\\log n\\) as in random graphs [9]. To cope with this observation, we introduce random recombination of \\(n\\) individuals into \\(g\\) new groups. In reality, one belongs to many groups that dynamically break and reform more or less randomly by way of social activities [7; 18]. For example, one may commute to one's workplace and return home everyday, possibly by changing trains, which serve as temporary social groups as well. After time \\(t_{0}\\), we randomly shuffle all the vertices and reorganize them into \\(g\\) groups and wire the vertices within each group in the same manner as before. Then the epidemic dynamics is run for another \\(t_{0}\\) before next shuffling occurs. For simplicity, just two independent groupings are assumed to alternate, as schematically shown in Fig. 1. However, the results are easily extended to the case of longer chains of group reformation. Owing to the shuffling, individuals initially belonging to different groups can interact in the long run. We denote \\(x_{\\alpha,I}\\) and \\(x_{\\alpha,SS}\\) the number of the infected non-SS's and that of the SS's summed over the groups with \\(T_{j}=T_{\\alpha}\\) (\\(\\alpha=h\\), \\(l\\)). In the early stages of epidemics, the dynamics between two switching events is given by the meanfield description as follows: \\[\\frac{d}{dt}\\left(\\begin{array}{c}x_{h,SS}\\\\ x_{h,I}\\\\ x_{l,IS}\\\\ x_{l,I}\\end{array}\\right)=\\left(\\begin{array}{c}\\lambda_{SSP}\\left\\langle k_ {i}\\right\\rangle_{b}T_{h}-1&\\lambda_{IP}\\left\\langle k_{i}\\right\\rangle_{b}T_ {h}&0&0\\\\ \\lambda_{SS}(1-p)\\left\\langle k_{i}\\right\\rangle_{b}T_{h}&\\lambda_{I}(1-p) \\left\\langle k_{i}\\right\\rangle_{b}T_{h}-1&0&0\\\\ 0&0&\\lambda_{SSP}\\left\\langle k_{i}\\right\\rangle_{T}-1&\\lambda_{IP}\\left\\langle k _{i}\\right\\rangle_{b}T_{l}\\\\ 0&0&\\lambda_{SSP}\\left\\langle k_{i}\\right\\rangle_{b}T_{l}&\\lambda_{I}(1-p) \\left\\langle k_{i}\\right\\rangle_{b}T_{l}-1\\end{array}\\right)\\left(\\begin{array} []{c}x_{h,SS}\\\\ x_{h,I}\\\\ x_{l,I}\\end{array}\\right), \\tag{1}\\] where \\(\\left\\langle\\cdots\\right\\rangle_{\\alpha}\\) denotes averaging over the groups with \\(T_{j}=T_{\\alpha}\\). The random shuffling is expressed by multiplication of the following matrix from the left: \\[\\left(\\begin{array}{cccc}\\frac{g_{0}}{g}+\\sigma&0&\\frac{g_{0}}{g}+\\sigma&0 \\\\ 0&\\frac{g_{0}}{g}+\\sigma&0&\\frac{g_{0}}{g}+\\sigma\\\\ \\frac{g-g_{0}}{g}-\\sigma&0&\\frac{g-g_{0}}{g}-\\sigma&0\\\\ 0&\\frac{g-g_{0}}{g}-\\sigma&0&\\frac{g-g_{0}}{g}-\\sigma\\end{array}\\right), \\tag{2}\\] where \\(\\sigma\\) is the possible correlation factor specifying the tendency for patients to join groups with \\(\\left\\langle k_{i}\\right\\rangle_{j}T_{j}=\\left\\langle k_{i}\\right\\rangle_{h}T _{h}\\). Purely random mixing yields \\(\\sigma=0\\). 
The map for the one-round dynamics, comprising the contact process for time \(t_{0}\) followed by switching, has eigenvalues \(0\), \(0\), \(\mathrm{e}^{-t_{0}}\cong 1-t_{0}\), and \[(\frac{g_{0}}{g}+\sigma)\mathrm{e}^{[-1+T_{h}\left\langle k_{i}\right\rangle_{h}(\lambda_{I}(1-p)+\lambda_{SS}p)]t_{0}}+(\frac{g-g_{0}}{g}-\sigma)\mathrm{e}^{[-1+T_{l}\left\langle k_{i}\right\rangle_{l}(\lambda_{I}(1-p)+\lambda_{SS}p)]t_{0}}\] \[\cong 1+\left\{\left[\left(\frac{g_{0}}{g}+\sigma\right)T_{h}\left\langle k_{i}\right\rangle_{h}+\left(\frac{g-g_{0}}{g}-\sigma\right)T_{l}\left\langle k_{i}\right\rangle_{l}\right]\left[\lambda_{I}(1-p)+\lambda_{SS}p\right]-1\right\}t_{0}\] for \(t_{0}\) small with respect to the system time \(t\) introduced in Eq. (1). An important indicator of outbreaks is the basic reproductive number \(R_{0}\), defined as the mean number of secondary infections produced by a single patient in a susceptible population [1; 2; 3; 7; 8; 19]. If \(R_{0}\) exceeds unity, the disease spreads on average in mixed populations such as the local groups in Fig. 1. Since \(R_{0}\) equals the largest eigenvalue, what matters is whether \[\left[(\frac{g_{0}}{g}+\sigma)T_{h}\left\langle k_{i}\right\rangle_{h}+(\frac{g-g_{0}}{g}-\sigma)T_{l}\left\langle k_{i}\right\rangle_{l}\right]\left[\lambda_{I}(1-p)+\lambda_{SS}p\right] \tag{3}\] is greater than \(1\). As a result, multiple kinds of heterogeneities [3] -- namely, the factors associated with individual patients and those specific to the groups -- interact and determine the tendency to spread. Generally speaking, a positive \(\sigma\) raises \(R_{0}\). Even if both factors are subthreshold in the absence of \(\sigma\), that is, \[\left(\frac{g_{0}}{g}T_{h}\frac{\left\langle k_{i}\right\rangle_{h}}{\left\langle k_{i}\right\rangle}+\frac{g-g_{0}}{g}T_{l}\frac{\left\langle k_{i}\right\rangle_{l}}{\left\langle k_{i}\right\rangle}\right)<1 \tag{4}\] and \(\left[\lambda_{I}(1-p)+\lambda_{SS}p\right]\left\langle k_{i}\right\rangle<1\), a positive \(\sigma\) can make the whole dynamics suprathreshold. In actual SARS spreads in hospitals, \(\sigma>0\) seems to have held: compared with healthy people, the SARS patients and the suspected are obviously more likely to go to hospitals, where \(T_{j}\) and \(\left\langle k_{i}\right\rangle_{j}\) are supposedly high. Currently, we do not have control over the infection rates of individuals, particularly \(\lambda_{SS}\) [2]. However, the threat of spreads may be decreased if their behavior is altered so that they avoid risky places. It is recommended that they be seen by a doctor at home or at some isolated site. The strategies applied in many countries, such as introducing more separated hospital rooms, making doctors and nurses work in a single ward [20], and ordering the public to stay home, also decrease \(k_{i}\) and \(\sigma\) [2].

## III Simulation results

We next examine the effects of network structure by numerical simulations. To focus on topological factors, we simply set \(T_{h}=T_{l}=1\) and \(k_{i}=k=n_{g}-1\) (\(1\leq i\leq n\)). The group size \(n_{g}\), which is typically somewhat smaller than 100 [18], is chosen to be \(81=9^{2}\) for technical reasons, although the value really relevant to the SARS epidemics is not known [1].
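Before turning to the results, the following compact Python sketch illustrates the whole simulation: two alternating random groupings with complete wiring inside each group, the three-state contact process run in discrete time, and a tally of \(a_{i}\), the number of individuals each patient directly infects (used in Fig. 3). The parameter values follow the caption of Fig. 2, but the code is our own reconstruction, not the authors' program; in particular, attributing each new case to a source in proportion to the sources' infection rates is our modeling choice, and \(\mathtt{rate}\times dt\) must stay well below 1 for the discrete-time step to approximate the continuous-time process.

```python
import random

S, I, SS = 0, 1, 2   # susceptible, infected non-SS, superspreader

def simulate(n=8100, n_g=81, g=100, lam_I=0.026, lam_SS=0.52, p=0.03,
             t0=0.5, t_end=2.0, dt=0.05, seed=1):
    rng = random.Random(seed)
    # Two independent random groupings that alternate every t0 (Fig. 1);
    # inside a group everyone interacts (k = n_g - 1).
    groupings = []
    for _ in range(2):
        perm = rng.sample(range(n), n)
        groupings.append([perm[j * n_g:(j + 1) * n_g] for j in range(g)])
    state = [S] * n
    state[rng.randrange(n)] = SS            # seed the epidemic with one SS
    a = [0] * n                             # a[i]: cases directly caused by i
    ever_patient = [st != S for st in state]
    t = 0.0
    while t < t_end:
        groups = groupings[int(t / t0) % 2]
        new_state = state[:]
        for members in groups:
            patients = [i for i in members if state[i] != S]
            if not patients:
                continue
            rates = [lam_SS if state[i] == SS else lam_I for i in patients]
            for i in members:
                if state[i] != S:
                    if rng.random() < dt:   # recovery/death at rate 1
                        new_state[i] = S
                elif rng.random() < sum(rates) * dt:        # infection event
                    new_state[i] = SS if rng.random() < p else I
                    ever_patient[i] = True
                    src = rng.choices(patients, weights=rates)[0]
                    a[src] += 1
        state = new_state
        t += dt
    return [a[i] for i in range(n) if ever_patient[i]]

counts = simulate()
print("patients:", len(counts),
      "fraction who infected nobody:", sum(c == 0 for c in counts) / len(counts))
```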
With \\(g=100\\), \\(n=gn_{g}=90^{2}\\), and \\(t_{0}=0.5\\), the chains of infection after the total run time \\(\\overline{t}=1.0\\), from the viewpoint of two different groupings as in Fig. 1, are shown in Figs. 2(a) and 2(b). They more or less reproduce the transmission pattern of SARS in Singapore [4], including the rapid spreads mediated by small \\(L\\) and the massive influence of SS's (solid lines). The transmission naturally spreads over time, as shown in Fig. 2(c) corresponding to \\(\\overline{t}=2.0\\). By comparing Fig. 2(c) with Fig. 2(d), which shows the results for \\(\\overline{t}=2.0\\) and \\(t_{0}=1.0\\), we find that local transmission develops if the time spent with a fixed group configuration is relatively longer. More quantitatively, Fig. 3(a) shows, for \\(\\overline{t}=2.0\\) and \\(t_{0}=0.5\\), the distributions of \\(a_{i}\\), which is the number of people to whom the \\(i\\)th patient has directly infected. The patients with large \\(a_{i}\\) are mostly SS's. Small \\(a_{i}\\) is chiefly covered by other patients, and the distribution decays exponentially in \\(a_{i}\\) within this range. The homogeneous vertex degree and the Poisson property of the processes caused the exponential tail, which is preserved in small-world-type networks like ours and random graphs [9] where the vertex degrees obey narrow distributions. ## IV Discussion ### Comparison with regular lattices A time course of chains of infection in a two-dimensional square lattice are shown in Figs. 2(e), 2(f), and 2(g), with \\(n\\), \\(g\\), and \\(k_{i}\\), and the duration of the run the same as before. We assume the periodic boundary conditions, and \\(k_{i}=80\\) neighbors of a vertex (\\(x\\),\\(y\\)) (\\(1\\leq x,y\\leq 90\\)) are defined to be the vertices included in the square with center (\\(x\\),\\(y\\)) and side length 9. The infection pattern appears similar to Figs. 2(a)-2(d) if we ignore the underlying space. However, large \\(L\\), or the lack of global interactions, permits the disease to spread only linearly in time [13]. This contrasts with a small-world type of networks and fully mixed networks like random graphs in which diseases spread exponentially fast in the beginning [3; 21]. Accordingly, the transmission is by far slower than shown in Figs. 2(a)-2(d). Although propagations at linear rates would be good approximation before long-range transportations had become readily available, they do not match the recent spreads mediated by long-distance travelers that lessen \\(L\\)[2; 6; 9; 19]. Taken in another way, restrictions on long movements can be a useful spread control [2]. By the same token, mathematical approaches such as percolations and contact processes on regular lattices, which often yield valuable rigorous results [13; 14; 17], are subject to this caveat. ### Comparison with Scale-free Networks Another candidate for the network architecture is scale-free networks whose distributions of \\(k_{i}\\) obey the power laws [10]. Compared with the class of small-world networks [9], scale-free networks, particularly with the original construction algorithm, lack the clustering property, whereas they realize the power law often present in nature [15]. The chains of infection in a scale-free network with the mean vertex degree equal to the previous simulations are shown in Figs. 2(h) and 2(i) for \\(\\overline{t}=1.0\\) and \\(\\overline{t}=2.0\\), respectively. Compared with the case of our transmission model [see Figs. 2(a)-2(d)], the influence of SS's is more magnified. 
Figure 3(b), plotting the distributions of \(a_{i}\) for \(\overline{t}=2.0\), shows that the distribution of \(a_{i}\) decays with a power law rather than exponentially for small \(a_{i}\). When more extensive data become available, we will be able to fit Fig. 3(a) or 3(b) to the real data, as shown in Fig. 3(c), and gain more insight into the real epidemics based on the distributions of \(a_{i}\). Figure 3 also suggests that more patients in total result from the epidemics in scale-free networks than in our model network, even though the mean transmission rate and the mean vertex degree are the same. In Fig. 4, we plot \((k_{i},a_{i})\) for each subpopulation: the susceptible (\(a_{i}=0\)), the infected non-SS's, and the SS's. For the infected non-SS's and SS's, \(a_{i}\) is roughly proportional to \(k_{i}\). This explains the power-law tail in Fig. 3(b) and enables the existence of extremely contagious SS's that could be called ultrasuperspreaders. The scale-free property implies a highly heterogeneous distribution of \(k_{i}\). Compared with the same size of regular, small-world, or random networks, whose \(k_{i}\)'s are relatively homogeneous, scale-free networks have larger \(R_{0}\propto\left\langle k_{i}^{2}\right\rangle/\left\langle k_{i}\right\rangle\) [3; 8; 12; 16]. In percolation models, \(R_{0}=\sum_{i=1}^{n}k_{i}(k_{i}-1)\lambda_{i}\), where \(\lambda_{i}\) denotes the rate of possible transmission from the \(i\)th individual [8]. Consequently, in the original scale-free networks, whose density function of \(k_{i}\) is proportional to \(k_{i}^{-3}\), the critical value present for regular, small-world, or random networks of the same mean edge density is extinguished [8]. The same is true for dynamical models such as contact processes [12]. Accordingly, scale-free networks spread diseases even with infinitesimally small infection rates. Furthermore, if a positive critical value exists in the type of scale-free networks whose distribution of \(k_{i}\) follows \(k_{i}^{\gamma}\) (\(\gamma<-3\)), a tendency for SS's to occupy vertices with large \(k_{i}\) can remove the critical values. For example, the critical infection rate shrinks to \(0\) if \(\lambda_{i}\propto k_{i}^{\gamma^{\prime}}\) with \(\gamma^{\prime}>-\gamma-3\). Does this mechanism underlie the current and possible spreading of SARS? We think not, first because SS's do not necessarily seem to prefer to inhabit the hubs of networks. Even without such correlation, the heterogeneous infection strengths of patients are probably not determined by the highly heterogeneous \(k_{i}\). A major route for SARS transmission is daily personal contacts. In this respect, the distributions of \(k_{i}\) of acquaintance networks and friendship networks do not follow power laws, but have exponential tails because of the aging of individuals and their limited capacity [15; 16]. Particularly, the number of contacts per day is limited by the time and energy of a person, which flattens the distribution of \(k_{i}\); SS's of SARS seem to lead ordinary social lives. SS's possibly result from the combination of large \(\lambda_{i}\) and staying in groups with large \(\left\langle k_{i}\right\rangle_{j}T_{j}\), as has been discussed in this paper. Scale-free networks are rather relevant to spreads of computer viruses and STD's [11; 12; 16; 19].
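The heterogeneity factor \(\left\langle k_{i}^{2}\right\rangle/\left\langle k_{i}\right\rangle\), to which \(R_{0}\) is proportional, is easy to compare across network types. The short sketch below uses networkx's Barabasi-Albert generator with sizes matching the simulations; the helper name is ours.

```python
import networkx as nx

def moment_ratio(degrees):
    """<k^2>/<k>, the degree-heterogeneity factor to which R0 is proportional."""
    mean_k = sum(degrees) / len(degrees)
    mean_k2 = sum(d * d for d in degrees) / len(degrees)
    return mean_k2 / mean_k

n = 90 * 90
ba = nx.barabasi_albert_graph(n, 40, seed=0)   # mean degree close to 80
print("scale-free :", moment_ratio([d for _, d in ba.degree()]))
print("homogeneous:", moment_ratio([80] * n))  # exactly 80 for uniform degree
```

The scale-free value comes out substantially larger than 80, which is the hub-dominance effect discussed in the next paragraph.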
Spreads are mostly mediated by individuals on hubs in such epidemics, and ultrasuperspreaders may result as a combination of large \(\lambda_{i}\) and large \(k_{i}\) [3; 19]. Preventive efforts that target active patients with large \(k_{i}\) are effective in these diseases [8]. However, efforts to suppress SARS should be invested in identifying the patients with large \(\lambda_{i}\) and places with large \(\left\langle k_{i}\right\rangle_{j}T_{j}\), rather than in looking for socially active persons, who exist only with probability exponentially small in \(k_{i}\).

### Effects of clustering

A bonus of using a small-world type of network is that it is clustered, as measured by the clustering coefficient \(C\) [9]. In real situations, the probability that two patients directly infected by the same patient know each other is significantly high. Also from this viewpoint, small-world networks are more relevant than networks with small \(C\), such as scale-free networks or random graphs. We have used the network shown in Fig. 1 instead of the model by Watts and Strogatz [9] to facilitate analysis and a comprehensive understanding of the dynamics. With the edges appearing in different timings superimposed, \(C\cong\left\langle k_{i}\right\rangle/(n_{g}c)\), where \(c\) is the number of random groupings (\(c=2\) in our simulations), whereas \(L\propto\log n\). If \(k_{i}\) is of the order of \(n_{g}\) and \(c\) is not so large, our network has small-world properties characterized by large \(C\) and small \(L\). The notion of clustering might induce one to imagine situations in which people congregate and SARS spreads. However, infection occurs only on the boundaries between a susceptible and a patient, and propagation slows if a pair of infected individuals face each other, as typically happens in highly clustered networks. An increase in \(C\) rather elevates the epidemic threshold in site percolations [21; 22], bond percolations [8; 22], and contact processes [9; 13; 7]. It also decreases the final size of the infected population, or the spreads in late stages [9; 7]. In spite of these general effects of \(C\), however, we claim that \(C\) is not a decisive factor in the outbreak of SARS. The possibility of outbreaks and the dynamics in the initial stages are determined by other factors such as \(\lambda_{i}\), \(k_{i}\), \(T_{j}\), and \(\sigma\). If the \(i\)th individual, who happens to be a patient, has \(\overline{k}\) neighboring patients, the effective \(k_{i}\) decreases to \(k_{i}-\overline{k}\). However, \(\overline{k}\) is tiny relative to \(k_{i}\) in early stages even if \(C\) is large. On the other hand, clustering in the sense of large \(C\) indirectly promotes the spreads by increasing \(k\). The arguments above on the effects of \(C\) are based on varying \(C\) with \(k\) fixed. However, the population density of a group concurrently modulates \(k\) and \(C\) [3]. In a group of \(n_{g}\) people with spatial size \(S_{g}\), \(\langle k_{i}\rangle=(n_{g}-1)S_{p}/S_{g}\), where \(S_{p}\) is the size of the personal space within which each person randomly interacts with others. Obviously, \(\langle k_{i}\rangle\) is proportional to the population density \(n_{g}/S_{g}\). In addition, \(C=S_{p}/S_{g}\propto\langle k_{i}\rangle\) even for a fully mixed population. Therefore, the aspect of clustering relevant to the SARS spreads is high population density.
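The estimate \(C\cong\left\langle k_{i}\right\rangle/(n_{g}c)\) can be checked directly on the superimposed groupings. A sketch follows; here \(\left\langle k_{i}\right\rangle\) is the within-group degree \(n_{g}-1\), so the prediction is \(80/162\approx 0.49\), and the measured value comes out close to it (slightly higher because the two groupings overlap at finite \(n\)). Clustering is estimated on a random vertex sample to keep the check fast.

```python
import random
import networkx as nx

n, n_g, c = 8100, 81, 2
G = nx.empty_graph(n)
for _ in range(c):           # superimpose c independent random groupings
    perm = random.sample(range(n), n)
    for j in range(n // n_g):
        members = perm[j * n_g:(j + 1) * n_g]
        G.add_edges_from((members[a], members[b])
                         for a in range(n_g) for b in range(a + 1, n_g))

sample = random.sample(range(n), 200)
print("measured C :", nx.average_clustering(G, nodes=sample))
print("predicted C:", (n_g - 1) / (n_g * c))   # <k_i>/(n_g c), k_i = n_g - 1
```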
A network with large \\(C\\) has been used in this paper to reflect social reality. ## V Conclusions In this paper, we have proposed a dynamic network model for SARS epidemics and shown that the combined effects of superspreaders and their possible tendency to frequent potentially contagious places can amplify the spread. In addition, we have contrasted the different dynamical consequences of different types of underlying network structure. ###### Acknowledgements. We thank M. Urashima for helpful discussions. This study is partially supported by the Japan Society for the Promotion of Science and also by the Advanced and Innovational Research Program in Life Sciences from the Ministry of Education, Culture, Sports, Science, and Technology, the Japanese Government. ## References * (1) M. Lipsitch _et al._, Science **300**, 1966 (2003). * (2) S. Riley _et al._, Science **300**, 1961 (2003). * (3) R. M. May and R. M. Anderson, Nature (London) **326**, 137 (1987); Philos. Trans. R. Soc. London Ser. B **321**, 565 (1988). * (4) Y. S. Leo _et al._, MMWR **52**, 405 (2003). * (5) G. Vogel, Nature (London) **300**, 558 (2003). * (6) A. S. M. Abdullah, B. Tomlinson, C. S. Cockram, and G. N. Thomas, Emerg. Infect. Dis. **9**, 1042 (2003). * (7) M. J. Keeling, Proc. R. Soc. London Ser. B **266**, 859 (1999). * (8) M. E. J. Newman, Phys. Rev. E **66**, 016128 (2002). * (9) D. J. Watts and S. H. Strogatz, Nature (London) **393**, 440 (1998); D. J. Watts, _Small Worlds_ (Princeton University Press, Princeton, 1999). * (10) A.-L. Barabasi and R. Albert, Science **286**, 509 (1999). * (11) F. Liljeros _et al._, Nature (London) **411**, 907 (2001). * (12) R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. **86**, 3200 (2001). * (13) R. Durrett, _Lecture Notes on Particle Systems and Percolation_ (Wadsworth, Belmont, CA, 1988). * (14) R. B. Schinazi, Math. Biosci. **173**, 25 (2001). * (15) L. A. N. Amaral, A. Scala, M. Barthelemy, and H. E. Stanley, Proc. Natl. Acad. Sci. USA **97**, 11149 (2000). * (16) A. L. Lloyd and R. M. May, Science **292**, 1316 (2001). * (17) T. M. Liggett, _Interacting Particle Systems_ (Springer, New York, 1985); R. B. Schinazi, _Classical and Spatial Stochastic Processes_ (Birkhauser, Boston, 1999). * (18) D. J. Watts, P. S. Dodds, and M. E. J. Newman, Science **296**, 1302 (2002). * (19) R. M. May and A. L. Lloyd, Phys. Rev. E **64**, 066112 (2001). * (20) L. A. Meyers, M. E. J. Newman, M. Martin, and S. Schrag, Emerging Infectious Diseases **9**, 204 (2003). * (21) M. E. J. Newman and D. J. Watts, Phys. Rev. E **60**, 7332 (1999). * (22) C. Moore and M. E. J. Newman, Phys. Rev. E **61**, 5678 (2000). Figure captions Figure 1: Schematic diagram of the dynamic network for \\(n_{g}=4\\) and \\(g=4\\). The vertices initially form random graphs within each group. After time \\(t_{0}\\), they are randomly shuffled to form new groups. The graph switches between the two configurations with period \\(t_{0}\\). Figure 2: Chains of infection in the dynamical small-world network (a), (b), (c), (d), the two-dimensional regular lattice (e), (f), (g), and the scale-free network (h), (i). Transmissions from the infected non-SS's and those from SS's are shown by dashed and solid lines, respectively. We set \\(n=90^{2}\\), \\(n_{g}=81\\), \\(g=100\\), \\(\\lambda_{I}=0.026\\), \\(\\lambda_{SS}=0.52\\), \\(k=80\\), and the time step \\(\\Delta t=0.05\\).
We set \\(t_{0}=0.5\\) and \\(\\overline{t}=1.0\\) in (a), (b), \\(t_{0}=0.5\\) and \\(\\overline{t}=2.0\\) in (c), \\(t_{0}=1.0\\) and \\(\\overline{t}=2.0\\) in (d), \\(\\overline{t}=1.0\\) in (e), (h), \\(\\overline{t}=2.0\\) in (f), (i), and \\(\\overline{t}=3.0\\) in (g). (a) and (b) correspond to the two groupings shown in Fig. 1. In (e), (f), (g), a square lattice with \\(90\\times 90\\) vertices is used, and \\(k=80\\). In (h), (i), the scale-free network with \\(k=80\\) and \\(n=90^{2}\\) is generated by starting with a complete graph of 40 vertices and adding \\(n-40\\) vertices. Each vertex is endowed with 40 new edges whose destinations are determined according to preferential attachment [10]. Figure 3: Distributions of \\(a_{i}\\), namely the number of individuals whom a patient has directly infected, in (a) the dynamical small-world network, (b) the scale-free network, and (c) Singapore [4]. The distributions are shown for the SS's (crosses) and all the patients (circles). We set \\(\\overline{t}=2.0\\) in (a), (b) and \\(t_{0}=0.5\\) in (a). Figure 4: Relation between the vertex degree \\(k_{i}\\) and the number of infections, \\(a_{i}\\), in the scale-free network for the susceptible (squares), the infected non-SS's (crosses), and the SS's.
The outbreak of severe acute respiratory syndrome (SARS) is still threatening the world because of a possible resurgence. In the current situation, where effective medical treatments such as antiviral drugs have not yet been discovered, the dynamical features of the epidemics should be clarified to establish strategies for tracing, quarantine, isolation, and regulating the social behavior of the public at appropriate cost. Here we propose a network model for SARS epidemics and discuss why superspreaders emerged and why SARS spread especially in hospitals, which were key factors of the recent outbreak. We suggest that superspreaders are biologically contagious patients, and that they may amplify the spread by going to potentially contagious places such as hospitals. To avoid mass transmission in hospitals, it may be a good measure to treat suspected cases without hospitalizing them. Finally, we indicate that SARS probably propagates in small-world networks associated with human contacts and that the biological nature of individuals and social group properties are more important factors than the heterogeneous rates of social contacts among individuals. This is in marked contrast with epidemics of sexually transmitted diseases or computer viruses, to which scale-free network models often apply. pacs: 87.23.G, 87.23.C
# The Limiting Temperature of hot nuclei from microscopic Equation of State M. BALDO\\({}^{*}\\), L. S. FERREIRA\\({}^{**}\\) and O.E. NICOTRA\\({}^{*\\dagger}\\) \\({}^{**}\\) Centro de Fisica das Int. Fund. and Dep. de Fisica, Inst. Sup. Tecn. Av. Rovisco Pais, 1096 Lisboa, Portugal \\({}^{*}\\) Istituto Nazionale di Fisica Nucleare, Sezione di Catania and Dipartimento di Fisica, Universita di Catania, Via S. Sofia 64, I-95123 Catania, Italy \\({}^{\\dagger}\\) Dipartimento di Fisica, Universita di Messina, Salita Sperone 31, 98166, S. Agata, Messina ## I Introduction The knowledge of the Equation of State (EoS) of nuclear matter at finite temperature is one of the fundamental issues in nuclear physics. Phenomenological information on the EoS can be obtained from experimental data on heavy ion collisions at intermediate energies and from astrophysical observations of supernovae explosions and neutron stars. The nuclear matter EoS is believed to go through a liquid-gas phase transition, as many theoretical calculations indicate [1, 2, 3, 4]. However, this phase transition, if it exists, does not have a direct correspondence in finite nuclei, due to the presence of Coulomb and finite size effects. In particular, the Coulomb interaction is of long range and strong enough to modify the nature of the phase transition. Instead, it has been recognized by some authors [5, 6] that the nuclear EoS is related to the maximal temperature a nucleus can sustain before reaching mechanical instability. This "limiting temperature" \\(T_{lim}\\) is essentially the maximal temperature at which a nucleus can be observed. It has to be stressed that the reaction dynamics can prevent the formation of a true compound nucleus. The onset of incomplete fusion reactions can completely mask the possible presence of fusion or nearly-fusion processes. At higher energies, the heavy ion reaction can be fast enough that no (nearly) thermodynamical equilibrium can be reached, as demanded in a genuine standard fusion-evaporation reaction. However, combined theoretical and experimental analyses [7] indicate that a nearly equilibrium condition is reached in properly selected multifragmentation heavy ion reactions at intermediate energy. The main experimental observation is the presence of a plateau in the so-called "caloric curve", i.e. in the plot of temperature vs. total excitation energy [8, 9, 10, 11]. This behaviour was qualitatively predicted by the Copenhagen statistical model [12] of nuclear multifragmentation. The relation between multifragmentation processes and the nuclear EoS was extensively studied by several authors within the statistical approach to heavy ion reactions at intermediate energy [13, 14, 15, 16, 17, 18, 19]. In different experiments, various methods are used to extract from the data the values of the temperature of the source which produces the observed fragments, but a careful analysis of the data [7] seems to indicate a satisfactory consistency of the results. In refs. [7, 20] an extensive set of experimental data was analyzed and it was shown that the temperature at which the plateau starts decreases with increasing mass of the residual nucleus which is supposed to undergo fragmentation. Both the values and the decreasing trend of this temperature turn out to be consistent with its interpretation as the limiting temperature \\(T_{lim}\\).
According to this interpretation, at increasing excitation energy the point where the temperature plot deviates from Fermi gas behaviour and the starting point of the plateau mark the critical point for mechanical instability and the onset of the multifragmentation regime. The corresponding value of the critical temperature can be calculated within the droplet model, and indeed many estimates based on Skyrme forces are in fairly good agreement with the values extracted from phenomenology [7, 6]. Moreover, the relation between the nuclear matter critical temperature \\(T_{c}\\) and \\(T_{lim}\\) appears to be quite stable and independent of the particular EoS and method used, which allows one [20] to estimate \\(T_{c}\\) from the set of values of \\(T_{lim}\\). In general, one can expect that \\(T_{lim}\\) is substantially smaller than the critical one, \\(T_{c}\\). In fact, both the Coulomb repulsion and the lowering of the surface tension with increasing temperature tend to destabilize the nucleus with respect to infinite nuclear matter. Since the surface tension goes to zero at the critical temperature, \\(T_{lim}\\) is reached well before \\(T_{c}\\). These predictions were checked in the seminal paper of ref. [5], as well as in further studies based on macroscopic Skyrme forces [6], for which a simple relationship was established between \\(T_{lim}\\) and \\(T_{c}\\). In ref. [21] it was shown, however, that if microscopic EoS are used, the relationship between \\(T_{lim}\\) and \\(T_{c}\\) is not as simple and systematic as in the case of Skyrme force EoS, and only a qualitative connection exists. In this paper we consider the finite temperature EoS in the framework of microscopic non-relativistic and relativistic many-body theory of nuclear matter and the corresponding critical temperature. Then the limiting temperature for finite nuclei is calculated on the basis of the corresponding EoS. The comparison with phenomenology shows the sensitivity of \\(T_{lim}\\) to the microscopic EoS. These results open the possibility of a direct check of the microscopic theory of the nuclear matter EoS. Indeed, all the considered microscopic EoS reproduce the empirical saturation point, but their behaviour at finite temperature can be quite different. ## II The Microscopic EOS Microscopic calculations of the nuclear EoS at finite temperature are quite few. The variational calculation by Friedman and Pandharipande [1] was one of the first semi-microscopic investigations of the finite temperature EoS. The results predict a liquid-gas phase transition, with a critical temperature \\(T_{c}=18-20\\) MeV. Later, Brueckner calculations at finite temperature [2] confirmed these findings with very similar values of \\(T_{c}\\). The Van der Waals behaviour, which leads to the liquid-gas phase transition, was also found in the finite temperature relativistic Dirac-Brueckner (DB) calculations of refs. [4, 3]. A liquid-gas phase transition was clearly observed, but at a much lower value, \\(T_{c}\\approx 10\\) MeV. It seems unlikely that such a low critical temperature can be attributed to relativistic effects, since the critical density is a fraction of the saturation one, where relativistic effects are expected to play no role. It is more likely that this lower value of \\(T_{c}\\) is due to the smaller value of the effective mass, and we will present evidence of that later.
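To make the Van der Waals phenomenology concrete, the short sketch below (our illustration, not one of the microscopic EoS discussed here) locates the critical point of a toy van der Waals equation of state as the horizontal inflexion of the isotherms, i.e. the point where \\(\\partial p/\\partial v=\\partial^{2}p/\\partial v^{2}=0\\). The constants are chosen so that the critical point sits at \\((v,T,p)=(1,1,1)\\) in reduced units.

```python
import numpy as np
from scipy.optimize import fsolve

A, B, R = 3.0, 1.0 / 3.0, 8.0 / 3.0      # toy van der Waals constants

def pressure(v, T):
    return R * T / (v - B) - A / v**2    # van der Waals isotherm p(v, T)

def critical_conditions(x):
    v, T = x
    dp_dv = -R * T / (v - B) ** 2 + 2 * A / v**3        # dp/dv = 0
    d2p_dv2 = 2 * R * T / (v - B) ** 3 - 6 * A / v**4   # d2p/dv2 = 0
    return [dp_dv, d2p_dv2]

vc, Tc = fsolve(critical_conditions, x0=[0.8, 0.9])
print(f"v_c={vc:.3f}, T_c={Tc:.3f}, p_c={pressure(vc, Tc):.3f}")  # -> 1, 1, 1
```

The same criterion, the lowest temperature for which the isotherm is monotonic, is what is applied below to the fitted microscopic free energies.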
More recently, chiral perturbation theory at finite temperature was used [22] to calculate the nuclear matter EoS, up to the three-loop level of approximation. The theory is a low density expansion, and it appears appropriate to study the critical point, where the density is a fraction of the saturation density. Again a Van der Waals behaviour was found, with a critical temperature \\(T_{c}\\approx 25\\) MeV. This set of nuclear matter EoS can be considered representative of the possible predictions from microscopic many-body theory. In the remainder of this section we briefly recall the non-relativistic Bloch and De Dominicis formalism used in our calculations, which is an extension to finite temperature of the Bethe-Brueckner-Goldstone (BBG) expansion. The formalism used in Dirac-Brueckner calculations at finite temperature is formally very similar, as we will discuss later. For chiral perturbation theory the formalism is of course quite different, and we refer the reader to the original paper [22]. The finite temperature Bloch and De Dominicis linked diagram expansion is based on the Grand-canonical representation and has the property of leading, in the zero temperature limit, to the BBG expansion of the ground state energy. The grand canonical potential per particle \\(\\omega\\) is written as the sum of the unperturbed potential \\(\\omega_{0}^{\\prime}\\) and a correlation term \\(\\Delta\\omega\\), \\[\\omega=\\omega_{0}^{\\prime}+\\Delta\\omega \\tag{1}\\] corresponding to the one-body grand canonical potential and a power series expansion in the interaction \\(H_{1}\\) involving connected diagrams only, respectively. The unperturbed potential is defined by \\[\\omega_{0}^{\\prime}=\\omega_{0}-\\sum_{k}U_{k}n(k) \\tag{2}\\] with \\(n(k)\\) the finite temperature Fermi distribution, \\(\\omega_{0}\\) the grand canonical potential of the independent particle hamiltonian \\(H_{0}^{\\prime}\\), and the summation over the single particle potential \\(U_{k}\\) represents the first potential insertion diagram [2]. Therefore, \\(\\omega_{0}^{\\prime}\\) includes all one-body contributions and its explicit form reads \\[\\omega_{0}^{\\prime}=-\\frac{2}{\\pi^{2}}\\int_{0}^{+\\infty}k^{2}dk\\left[\\frac{1}{\\beta}\\log\\left(1+e^{-\\beta(e_{k}-\\mu)}\\right)+U(k)n(k)\\right] \\tag{3}\\] \\(\\mu\\) being the chemical potential, and \\[\\Delta\\omega=\\frac{2}{(2\\pi)^{3}}\\sum_{lSJT}\\hat{J}^{2}\\hat{T}^{2}\\int dq\\int P^{2}dP\\,e^{-\\beta(\\overline{E}_{Pq}-2\\mu)}\\,d(q,P)\\arctan\\left[\\frac{\\pi\\,\\langle q|{\\cal K}^{SJT}(\\overline{E}_{Pq})|q\\rangle\\,q^{2}\\overline{Q}(q,P)}{d(q,P)}\\right], \\tag{4}\\] where the density of states \\(d\\) is given by \\[d(q,P)=\\left|\\frac{\\partial\\overline{E}_{qP}}{\\partial q}\\right|=\\left|\\frac{2\\hbar^{2}q}{m}+\\frac{\\partial}{\\partial q}\\overline{U}_{qP}\\right|. \\tag{5}\\] The two-particle energy \\(\\overline{E}_{qP}\\), the Pauli operator \\(\\overline{Q}_{qP}\\) and the potential \\(\\overline{U}_{qP}\\) felt by the two particles are all angle averaged quantities [2]. This angular averaging is expected to be accurate; it makes the contributions of the different channels additive, since then only the diagonal part of the finite temperature scattering matrix \\(K\\) contributes. The quantum numbers \\(lSJT\\) specify the two-body channel and \\(\\hat{A}=\\sqrt{2A+1}\\). The single particle potential and the two-body scattering matrix \\(K\\) satisfy the self-consistent equations \\[U({\\bf k}_{1})=\\sum_{\\sigma\\tau}\\sum_{{\\bf k}_{2}}\\langle k_{1}k_{2}|K(\\omega)|k_{1}k_{2}\\rangle_{A}n(k_{2}).
\\tag{6}\\] and \\[\\langle k_{1}k_{2}|K(\\omega)|k_{3}k_{4}\\rangle=\\langle k_{1}k_{2}|v|k_{3}k_{4}\\rangle+\\sum_{k^{\\prime}_{3}k^{\\prime}_{4}}\\langle k_{1}k_{2}|v|k^{\\prime}_{3}k^{\\prime}_{4}\\rangle\\,\\frac{n_{>}(k^{\\prime}_{3})\\,n_{>}(k^{\\prime}_{4})}{\\omega-e}\\,\\langle k^{\\prime}_{3}k^{\\prime}_{4}|K(\\omega)|k_{3}k_{4}\\rangle. \\tag{7}\\] In Eq. (4) \\[\\langle k_{1}k_{2}|{\\cal K}(\\omega)|k_{3}k_{4}\\rangle=\\left(n_{>}(k_{1})n_{>}(k_{2})n_{>}(k_{3})n_{>}(k_{4})\\right)^{\\frac{1}{2}}\\langle k_{1}k_{2}|K(\\omega)|k_{3}k_{4}\\rangle \\tag{8}\\] In all the previous equations \\(\\omega=E_{k_{1}}+E_{k_{2}}\\), \\(e=E_{k^{\\prime}_{3}}+E_{k^{\\prime}_{4}}\\), with \\(E_{k}=\\hbar^{2}k^{2}/2m\\,+\\,U_{k}\\). Eq. (7) coincides with the Brueckner equation for the Brueckner \\(G\\) matrix at zero temperature if the single particle occupation numbers \\(n(k)\\) are taken at \\(T=0\\); at finite temperature \\(n(k)\\) is a Fermi distribution. In Eqs. (7,8) \\(n_{>}(k)=1-n(k)\\). It has to be noticed that only the principal part is considered in the integration, thus making \\(K\\) a real matrix. Eqs. (6) and (7) have to be solved self-consistently for the single particle potential. For a given density and temperature we solve the self-consistent equations along with Eq. (9) for the chemical potential \\(\\tilde{\\mu}\\), \\[\\rho=\\sum_{k}n(k)=\\sum_{k}\\frac{1}{e^{\\beta(E_{k}-\\tilde{\\mu})}+1} \\tag{9}\\] Then we obtain the grand canonical potential \\(\\omega\\) from Eq. (4). Finally we extract the free energy per particle \\(f\\) from the relation \\[f=\\omega/\\rho+\\tilde{\\mu}. \\tag{10}\\] The pressure \\(p\\) is calculated by performing a numerical derivative of \\(f\\), i.e. \\(p=\\rho^{2}\\partial f/\\partial\\rho\\). Notice that the chemical potential \\(\\tilde{\\mu}\\) extracted from Eq. (9) does not coincide with the exact thermodynamical chemical potential \\(\\mu\\) given by \\[\\mu=\\frac{\\partial F}{\\partial N}=f+\\rho\\left(\\frac{\\partial f}{\\partial\\rho}\\right) \\tag{11}\\] which is the one actually adopted, in order to satisfy the Hugenholtz-Van Hove theorem [2]. It turns out [2] that the dominant diagrams in the expansion are the ones that correspond to the zero temperature BBG diagrams, with the temperature introduced in the occupation numbers only, represented by Fermi distributions; this justifies the commonly used procedure of naively introducing the temperature effect. The same prescription has been used in Dirac-Brueckner calculations, so the formalism is in principle very similar. ## III The limiting temperature of finite nuclei Following ref. [5], the limiting temperature can be evaluated within the liquid drop model, which should be accurate enough for medium-heavy nuclei. The nucleus is described in terms of a droplet surrounded by a vapour, in thermal and mechanical equilibrium. In the model one adds to the droplet pressure and chemical potential the contributions due to the Coulomb force and surface tension, which are evaluated assuming a spherical droplet.
These additional terms read \\[\\delta P = P_{C}+P_{S}=\\left(\\frac{Z^{2}e^{2}}{5A}\\rho-2\\alpha(T)\\right)/R\\] \\[\\delta\\mu = \\frac{6Z^{2}e^{2}}{5AR} \\tag{12}\\] where \\(R=(\\frac{3A}{4\\pi\\rho})^{1/3}\\) is the droplet radius, \\(\\rho\\) is the droplet density, and the surface tension is parameterized as \\(\\alpha(T)=\\alpha_{0}(1+\\frac{3}{2}T/T_{c})(1-T/T_{c})^{3/2}\\), with \\(T_{c}=20\\) MeV the nuclear matter critical temperature and \\(\\alpha_{0}=1.14\\) MeV fm\\({}^{-2}\\) the surface tension at zero temperature, obtained from the semi-empirical mass formula. The Coulomb interaction introduces an additional positive pressure \\(P_{C}\\) and a repulsive contribution to the bulk chemical potential \\(\\mu\\), while the surface tension provides an additional negative pressure term which tends to stabilize the system. At increasing temperature the surface tension decreases and the system becomes unstable against Coulomb dissociation. The simplest way to observe the modifications introduced by these terms is to consider the plot of the chemical potential as a function of pressure, both for nuclear matter and for the droplet model. The intersection between the liquid and the vapour branches defines the coexistence point in nuclear matter. The additional terms will only shift the liquid branch, since the vapour is assumed to be uniform and uncharged, leading to a new coexistence point. This procedure was followed for the set of nuclear matter EoS discussed in the previous section. At the lowest densities in the vapour region, needed in the calculations, the microscopic EoS was extended following ref. [2]. ## IV Results and Discussion To illustrate the procedure followed in the microscopic calculations of the EoS and \\(T_{lim}\\) in the framework of many-body theory, the nuclear matter free energy is reported in Fig. 1a as a function of density for various temperatures in the case of the Bonn B potential [23]. The points indicate the actual microscopic calculations, the full lines the corresponding polynomial fits. The figure illustrates the precision and stability of the numerical procedure. The three-body force, discussed in [2], was included with parameters adjusted to reproduce the correct saturation point. From the free energy, by numerical derivative, one gets the pressure depicted in Fig. 1b. The critical temperature for the liquid-gas phase transition is the lowest temperature for which the isotherm is monotonic, and the critical point is the corresponding inflexion point on the isotherm. From Fig. 1b the critical temperature appears to be around \\(T_{c}\\approx 18\\) MeV, slightly below the value obtained in ref. [2] for the Argonne v\\({}_{14}\\) potential [24] (\\(T_{c}\\approx 20\\) MeV). This shows that there is some sensitivity of \\(T_{c}\\) to the NN interaction. It has to be stressed that the two EoS have very close saturation points. As is well known, the Dirac-Brueckner approach gives in general a better saturation point than conventional Brueckner calculations (without three-body force). It has been shown that this is mainly due to the modification of the nucleon Dirac spinor inside nuclear matter, which can be described by the contribution of the so-called Z-diagram [25], corresponding to the virtual creation of a nucleon-antinucleon pair. The Z-diagram can be viewed as a particular three-body force, which is repulsive at all densities. The density dependence of this contribution was studied in ref.
[25] and was found to be of the type \\(\\Delta e=C\\rho^{8/3}\\), with the coefficient \\(C\\) depending on the NN interaction. In ref. [23] it was found that such a term can account very precisely for the difference between the Dirac-Brueckner calculation and the corresponding non-relativistic Brueckner one. Finite temperature Dirac-Brueckner calculations are quite few in the literature [3, 4]. Furthermore, for our analysis we need the free energy as a function of density in small steps of temperature. Fortunately it is possible to estimate accurately the temperature dependence of the free energy at a given density by a simplified procedure, avoiding the complexity of the full finite temperature Dirac-Brueckner calculations. Once the zero temperature EoS is known, we assume that the free energy at \\(T\\neq 0\\) can be obtained by including the variations of both the entropy and the internal energy of a free Fermi gas with the value of the effective mass (at \\(k=k_{F}\\)) equal to the one calculated at the same density and at \\(T=0\\). In this way one neglects the variation with temperature of the effective mass and of the interaction energy. Both these variations turn out to be small at the Brueckner level [2], and indeed the same procedure applied to non-relativistic Brueckner calculations gives excellent agreement with the full calculations [2]. We applied this procedure to the EoS of ref. [3], by fitting the Dirac-Brueckner EoS at \\(T=0\\) and calculating the free energy at finite temperature from the corresponding effective mass. At variance with the previous calculations of ref. [2], we preferred here to fit directly the EoS at zero temperature instead of applying the relativistic correction due to the Z-diagram mentioned above. This should avoid any possible bias from the NN interaction. In any case, the final results are quite similar to the previous calculations. We found a critical temperature \\(T_{c}\\approx 12\\) MeV, in comparison with the value of 10 MeV reported in ref. [3]. This reasonable agreement is a further check of the simplified procedure adopted. Since the limiting temperature \\(T_{lim}\\) is expected to be a small fraction of the critical temperature \\(T_{c}\\), the error introduced by the simplified procedure can be considered small enough for an accurate treatment of the Dirac-Brueckner case. In DB calculations the single particle energy \\(E_{k}\\) is written as [23] \\[E_{k}=\\sqrt{M^{*2}\\,+\\,k^{2}}+U_{V}\\quad,\\quad M^{*}\\,=\\,M+U_{S} \\tag{13}\\] where \\(U_{S}\\) and \\(U_{V}\\) are the scalar and vector single particle potentials, respectively. In the non-relativistic limit the square root is expanded in powers of \\(k/M^{*}\\). If one neglects the momentum dependence of the scalar and vector potentials, \\(M^{*}\\) can be identified with the non-relativistic effective mass to be used in the finite temperature calculations for the Fermi gas model. In the region of the liquid-gas phase transition the non-relativistic expansion is fully justified. This is equivalent to a parabolic approximation for the single particle energy. This procedure results in values of the effective mass which are substantially smaller than in the conventional non-relativistic Brueckner calculations [23], where no parabolic approximation for the single particle potential is used [26]. For the EoS calculated within chiral perturbation theory, all the expressions are semi-analytical and the whole procedure is much simpler.
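The simplified finite-temperature procedure just described can be sketched as follows. The block below (our illustration, not the authors' code) inverts Eq. (9) for a free spectrum with a fixed effective mass and returns the free Fermi gas free energy per particle, whose thermal variation is then added to a fitted \\(T=0\\) EoS. The degeneracy factor, units and momentum cutoff are our choices; a realistic application would take \\(m^{*}(\\rho)\\) from the microscopic calculation.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.special import expit

HBARC, NU = 197.327, 4                       # MeV fm; spin-isospin degeneracy

def fermi_gas_free_energy(rho, T, mstar):
    """Free energy per particle (MeV) of a free gas with effective mass mstar."""
    def n(k, mu):                            # Fermi distribution, cf. Eq. (9)
        return expit(-((HBARC * k) ** 2 / (2.0 * mstar) - mu) / T)
    def density(mu):                         # rho = nu/(2 pi^2) int k^2 n(k) dk
        return NU / (2 * np.pi**2) * quad(lambda k: k**2 * n(k, mu), 0, 15)[0]
    mu = brentq(lambda m: density(m) - rho, -1000.0, 1000.0)   # invert Eq. (9)
    e = NU / (2 * np.pi**2 * rho) * quad(
        lambda k: k**2 * (HBARC * k) ** 2 / (2.0 * mstar) * n(k, mu), 0, 15)[0]
    def s_int(k):                            # entropy density integrand
        nk = np.clip(n(k, mu), 1e-300, 1.0 - 1e-12)
        return k**2 * (nk * np.log(nk) + (1 - nk) * np.log(1 - nk))
    s = -NU / (2 * np.pi**2 * rho) * quad(s_int, 0, 15)[0]     # entropy/particle
    return e - T * s

# Thermal correction added to a fitted T=0 energy per particle e0(rho):
# f(rho, T) ~ e0(rho) + fermi_gas_free_energy(rho, T, mstar)
#                     - fermi_gas_free_energy(rho, 0.5, mstar)   # 0.5 MeV ~ T=0
print(fermi_gas_free_energy(rho=0.08, T=10.0, mstar=600.0))
```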
Plots of the chemical potential as a function of pressure for nuclear matter are reported in Fig. 2. The intersection between the liquid and the vapour branches defines the coexistence point in nuclear matter. Increasing the temperature, the curve shrinks and should collapse to a point at \\(T_{c}\\), which can thus be determined. The values extracted with this procedure are in good agreement with the values obtained from the plot of pressure vs. density, Fig. 1b. This illustrates the consistency and precision of the numerical procedure. For the droplet model, including the corrections of Eq. (12), the new liquid branch, indicated by the dashed lines in Fig. 2, shows a shift with respect to nuclear matter. At low enough temperature an intersection between the liquid and vapour branches still occurs, which corresponds to the coexistence point between the liquid droplet and the nuclear matter vapour and ensures that the droplet is stable. Increasing the temperature, the curve shrinks and, well below \\(T_{c}\\), it is possible to find a temperature for which the intersection between the liquid droplet and the vapour branches just disappears, as indeed reported in Fig. 2. This determines \\(T_{lim}\\). The droplet-vapour coexistence point, and consequently \\(T_{lim}\\), depends on the mass and charge of the system. Fig. 3 summarizes the results of the calculations, in comparison with the data obtained from the phenomenological analysis [7, 20]. For completeness and for the sake of comparison, the results for the Av\\({}_{14}\\) potential of ref. [2] are also reported. The calculated values of the limiting temperature \\(T_{lim}\\), for the considered set of microscopic nuclear matter EoS, show an overall trend which clearly reflects the corresponding trend of the critical temperature \\(T_{c}\\) of each EoS. Smaller values of \\(T_{c}\\) result in smaller values of \\(T_{lim}\\). The ratio between \\(T_{lim}\\) and \\(T_{c}\\) for Skyrme forces was extensively studied in ref. [20]. It was found that this ratio is close to \\(1/3\\) with a small dispersion. The microscopic EoS analyzed in Fig. 3 give values which follow this value closely, except in the Dirac-Brueckner case, which gives a value closer to \\(1/4\\). This could be attributed to the approximate procedure we used for this EoS, but in any case a value of \\(1/3\\) would not alter the trend reported in Fig. 3. More importantly, the comparison of the values of \\(T_{lim}\\) from microscopic EoS with the phenomenological values emphasizes the sensitivity of \\(T_{lim}\\) to the EoS. This comparison appears as a crucial test for any microscopic EoS. The EoS from ref. [22], as noticed by the authors, produces a too large value of the nucleon effective mass, and this is probably the reason for the too high value of \\(T_{c}\\). In fact, a large effective mass reduces the increase with temperature of the kinetic energy and therefore of the free energy. On the contrary, the DB results seem to indicate that the corresponding EoS has a too small \\(T_{c}\\). Notice that this would be very difficult to verify with other phenomenological analyses. The reason for such a small value of \\(T_{c}\\), and therefore for a too small value of \\(T_{lim}\\), can be attributed again to the value of the effective mass, which is smaller than in the non-relativistic case. However, other characteristics of the EoS could play a role, like the values of the chemical potential or of the compressibility at low density (i.e. in the gas phase).
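Returning to the droplet construction, the corrections of Eq. (12) are straightforward to evaluate. The sketch below (ours) uses \\(e^{2}=1.44\\) MeV fm and the \\(\\alpha(T)\\) parameterization quoted above; the nucleus and liquid density are illustrative inputs. \\(T_{lim}\\) is then the temperature at which the liquid branch, shifted by these terms, no longer intersects the vapour branch.

```python
import numpy as np

E2, ALPHA0, TC = 1.44, 1.14, 20.0   # e^2 [MeV fm], alpha_0 [MeV fm^-2], T_c [MeV]

def alpha(T):
    """Temperature-dependent surface tension used in the droplet model."""
    return ALPHA0 * (1.0 + 1.5 * T / TC) * (1.0 - T / TC) ** 1.5

def droplet_corrections(A, Z, rho, T):
    """Coulomb + surface shifts of pressure and chemical potential, Eq. (12)."""
    R = (3.0 * A / (4.0 * np.pi * rho)) ** (1.0 / 3.0)        # droplet radius [fm]
    dP = (Z**2 * E2 * rho / (5.0 * A) - 2.0 * alpha(T)) / R   # [MeV fm^-3]
    dmu = 6.0 * Z**2 * E2 / (5.0 * A * R)                     # [MeV]
    return dP, dmu

dP, dmu = droplet_corrections(A=208, Z=82, rho=0.15, T=6.0)   # e.g. 208Pb
print(f"delta P = {dP:.3f} MeV fm^-3, delta mu = {dmu:.2f} MeV")
```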
The non-relativistic BHF results appear to agree quite closely with the phenomenological values. Some dependence on the NN interaction is present, but this uncertainty is within the phenomenological uncertainty. Therefore, phenomenology appears to favour this set of EoS. These results also support the interpretation of \\(T_{lim}\\) as the temperature for mechanical instability and the onset of the multifragmentation regime. ###### Acknowledgements. We thank Dr. N. Kaiser very much for kindly providing us with an extended numerical table of the Equation of State developed in ref. [22]. One of us (O.E.N.) expresses many thanks for the kind hospitality during his stay at the Centro de Fisica das Int. Fund. in Lisbon, where part of this work was developed. ## References * [1] B. Friedman and V. R. Pandharipande, _Nucl. Phys._ **A 361**, 502 (1981). * [2] M. Baldo and L.S. Ferreira, Phys. Rev. **C 59**, 682 (1999). * [3] B. ter Haar and R. Malfliet, _Phys. Rev. Lett._**56**, 1237 (1986); _Phys. Rep._**149**, 207 (1987). * [4] H. Huber, F. Weber and M.K. Weigel, Phys. Rev. **C57**, 3484 (1999). * [5] S. Levit and P. Bonche, Nucl. Phys. **A437**, 426 (1985). * [6] H. Q. Song and R. K. Su, Phys. Rev. **C44**, 2505 (1991). * [7] J.B. Natowitz, R. Wada, K. Hagel, T. Keutgen, M. Murray, A. Makeev, L. Qin, P. Smith and C. Hamilton, Phys. Rev. **C65**, 034618 (2002). * [8] J. Pochodzalla _et al._, Phys. Rev. Lett. **75**, 1040 (1995). * [9] R. Wada _et al._, Phys. Rev. **C55**, 227 (1997). * [10] J. Cibor _et al._, Phys. Lett. **B473**, 29 (2000). * [11] D. Cussol _et al._, Nucl. Phys. **A561**, 298 (1993). * [12] J. Bondorf, R. Donangelo, I.N. Mishustin and H. Schulz, Nucl. Phys. **A444**, 460 (1985). * [13] J. Bondorf, R. Donangelo, I.N. Mishustin, C. Pethick, H. Schulz, and K. Sneppen, Phys. Lett. **B150**, 57 (1985); Nucl. Phys. **A443**, 321 (1985). * [14] J. Bondorf, R. Donangelo, H. Schulz, and K. Sneppen, Phys. Lett. **B162**, 30 (1985). * [15] A.S. Botvina _et al._, Nucl. Phys. **A475**, 663 (1987). * [16] D. Gross _et al._, Prog. Part. Nucl. Phys. **30**, 155 (1993), and references therein. * [17] W. A. Friedman, Phys. Rev. Lett. **60**, 2125 (1988). * [18] L.P. Csernai and J. I. Kapusta, Phys. Rep. **131**, 223 (1986), and references therein. * [19] J.P. Bondorf _et al._, Phys. Rept. **257**, 133 (1995). * [20] J.B. Natowitz, K. Hagel, Y. Ma, M. Murray, L. Qin, R. Wada and J. Wang, Phys. Rev. Lett. **89**, 212701 (2002). * [21] M. Baldo, Y. H. Cai, G. Giansiracusa, U. Lombardo, H. Q. Song, _Phys. Lett._**B340**, 13 (1994). * [22] S. Fritsch, N. Kaiser and W. Weise, Phys. Lett. **B545**, 73 (2002). * [23] R. Machleidt, _Adv. Nucl. Phys._**19**, 189 (1989). * [24] R.B. Wiringa, R.A. Smith and T.L. Ainsworth, Phys. Rev. **29C**, 1207 (1984). * [25] G. E. Brown, W. Weise, G. Baym and J. Speth, _Comm. Nucl. Part. Phys._**17**, 39 (1987). * [26] M. Baldo and A. Fiasconaro, Phys. Lett. **B491**, 240 (2000). **Figure captions** Fig. 1a - Free energy per particle as a function of Fermi momentum at different temperatures for the Bonn potential. From top to bottom the different curves correspond to temperatures \\(T=2,8,12,16,20,24,28\\) MeV. The points represent the results of the Brueckner-Hartree-Fock calculations at finite temperature, the curves are the corresponding polynomial fits. Fig. 1b - Isotherms of pressure vs. Fermi momentum corresponding to the free energy plots of Fig. 1a. The sequence of temperatures is the same as in Fig. 1a (from bottom to top). Fig. 2 - Chemical potential vs.
pressure for the Bonn potential from the Brueckner-Hartree-Fock calculations of Figs. 1a, 1b (full line) at a given temperature. The dotted line indicates the corresponding plot for the nucleus \\({}^{208}Pb\\). At this temperature the nucleus starts to become unstable; see the text for details. Fig. 3 - Limiting temperatures as a function of mass number for the different Equations of State, in comparison with the phenomenological values (open squares with error bars). \\begin{tabular}{|c|} \\hline \\(\\square\\) & Natowitz et al. \\\\ \\(\\blacktriangledown\\) & DB (Malfliet et al.) \\\\ \\(\\blacklozenge\\) & BHF+Bonn+TBF \\\\ \\(\\blacksquare\\) & Chiral pert. (Kaiser et al.) \\\\ \\(\\blacklozenge\\) & BHF+Av14+TBF \\\\ \\hline \\end{tabular}
The limiting temperature \\(T_{lim}\\) of a series of nuclei is calculated employing a set of microscopic nuclear Equations of State (EoS). It is shown that the value of \\(T_{lim}\\) is sensitive to the nuclear matter Equation of State used. Comparison with the values extracted in recent phenomenological analyses appears to favour a definite selection of EoS's. On the basis of this phenomenological analysis, it therefore seems possible to check the microscopic calculations of the nuclear EoS at finite temperature, which is hardly accessible through other experimental information. **PACS numbers:** 21.65.+f, 21.30.-x, 25.70.-z, 26.50.+x, 26.60.+c
# Singular vector ensemble forecasting systems and the prediction of flow dependent uncertainty Stephen Jewson RMS, London, United Kingdom Maarten Ambaum and Christine Ziehmann _Correspondence address:_ RMS, 10 Eastcheap, London, EC3M 1AJ, UK. Email: [email protected] ## 1 Introduction We are interested in the question of how to make forecasts of the distribution of future temperatures over time-scales of one or two weeks. The best predictors for this distribution come from numerical weather forecasting models and, in particular, from ensemble integrations of such models. The ensembles are generated by running the forecast model many times from different initial conditions and, in some cases, by using stochastic parameterisations. The precise methods used to generate the ensemble of different initial conditions differ from one forecast system to another. For instance, ECMWF uses a method based on perturbing the initial state using the singular vectors of the linearised propagator over a finite time period (Molteni et al., 1996), while NCEP uses the breeding vector method (Toth and Kalnay, 1993). The predictors one can derive from these models are a mixture of information and error, and it is non-trivial to convert these predictors into optimal probabilistic forecasts. The most straightforward method used to derive a prediction of the future distribution of temperatures is to build a linear regression model between the ensemble mean (as input) and the temperatures being predicted (as output). Such a model gives a mean-square error minimising prediction of the temperature, as well as a prediction of the uncertainty around that temperature (a minimal sketch of this calibration is given at the end of this introduction). We will refer to the regression model as a _first generation_ calibration model. It has been in use since the 1970s (see, for example, Leith (1974)). Second generation calibration models use more information from the ensemble than just the ensemble mean in an attempt to predict flow dependent variations in the uncertainty of the temperature prediction. Second generation models include the rank histogram (Talagrand et al., 1997), the best members method (Roulston and Smith, 2003) and the spread-scaling method (Jewson, 2003b). These models are similar in that they calibrate the mean level of the uncertainty and the variability of the uncertainty in the same way. This, it turns out, is not ideal, and as a result the second generation models do not, generally, perform as well as linear regression. To understand why not, one can consider the case in which the variability of the ensemble spread contains no information at all. In such a situation an effective calibration method would ignore this variability and produce a forecast with a constant level of uncertainty derived entirely from past forecast error statistics. However, all of the second generation calibration models fail this test and would actually _inflate_ the variability in the uncertainty still further, because of the need to inflate the mean level of spread. The third generation of calibration models addresses this issue, and calibrates the mean and the variability of the uncertainty in separate ways. Jewson et al. (2003) describe such a model and show an example where the mean level of uncertainty needs to be increased while the variability of the uncertainty needs to be decreased. Why should this be necessary? One explanation is that site-specific temperatures are affected by small-scale processes that increase the mean level of uncertainty, but do not change the variability in the uncertainty.
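To fix ideas, here is a minimal sketch of the first generation regression calibration (our illustration; the data are synthetic and all names are arbitrary). It fits the observed temperatures on the ensemble mean and returns the MSE-minimising forecast together with a single, flow-independent uncertainty estimated from the residuals.

```python
import numpy as np

def regression_calibration(ens_mean, obs):
    """First-generation calibration: obs ~ a + b * ens_mean, with a constant
    predictive spread estimated from the fit residuals."""
    X = np.column_stack([np.ones_like(ens_mean), ens_mean])
    coef, *_ = np.linalg.lstsq(X, obs, rcond=None)
    sigma = np.std(obs - X @ coef, ddof=2)        # two fitted parameters
    return coef, sigma

# Toy past data: the calibrated forecast for a new ensemble mean m is then
# the normal distribution N(a + b * m, sigma^2).
rng = np.random.default_rng(0)
m = rng.normal(15.0, 5.0, size=365)               # past ensemble means
t = 1.0 + 0.9 * m + rng.normal(0.0, 2.0, 365)     # matching observations
(a, b), sigma = regression_calibration(m, t)
print(a, b, sigma)                                # ~1.0, ~0.9, ~2.0
```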
However, in this paper we will investigate another possibility: that it is the methods used to generate the ensembles themselves, and in particular the truncated singular vector approach used at ECMWF, that result in ensemble forecasts that require the mean and the variability of the uncertainty to be calibrated separately. We approach this question using a simple linear stochastic model of the prediction of forecast uncertainty. This model allows us to calculate both the exact uncertainty and the uncertainty predicted by singular vector methods, and to compare the two. In section 2 we give a brief description of how the initial conditions are generated in the ECMWF ensemble forecasting system. In section 3 we describe the simple model we will use to study the properties of singular vector forecasting systems in general. In section 4 we present our results and in section 5 we summarise. ## 2 The ECMWF ensemble prediction system The method used at ECMWF to create an ensemble forecast can be described as follows. ### Step 1: linearisation of the propagator We write the entire atmospheric model as: \\[\\frac{dx}{dt}=F(x) \\tag{1}\\] where \\(x(t)\\) is the atmospheric state and \\(F\\) is a non-linear function representing the dynamics of the model. We will assume henceforth that the model is perfect, and hence that \\(F(x)\\) is also an accurate representation of the non-linear dynamics of the atmosphere. We will only consider forecast errors that arise due to errors in the initial conditions. If we now consider a forecast made from the current state \\(x\\), and write an initial condition error as a small perturbation \\(e\\) around \\(x\\), we have: \\[\\frac{d(x+e)}{dt}=F(x+e)=F(x)+\\frac{dF}{dx}e+\\ldots \\tag{2}\\] Subtracting equation 1 from equation 2, and ignoring higher order terms, gives: \\[\\frac{de}{dt}=\\frac{dF}{dx}e=A(x(t))e \\tag{3}\\] This is a linear equation for the development of initial condition errors in the forecast. It can be solved to give: \\[e(t)=\\exp\\left(\\int Adt\\right)e_{0} \\tag{4}\\] Writing \\(B(t)=\\int Adt\\) gives: \\[e(t)=\\exp(B(t))e_{0} \\tag{5}\\] Expanding this gives: \\[e=(1+B+\\ldots)e_{0} \\tag{6}\\] and ignoring higher order terms: \\[e(t)=(1+B(t))e_{0} \\tag{7}\\] This is now a linear equation which gives the forecast error at time \\(t\\) in terms of the initial condition error at time \\(0\\). ### Step 2: creation of initial conditions The first 25 singular vectors of the matrix \\(1+B\\) are calculated. ### Step 3: creation of the ensemble forecast Positive and negative versions of each singular vector are propagated forwards using equation 1 to give 50 forecasts. The spread of these forecasts gives an indication of the uncertainty in the forecast. ## 3 The model Our model for the process by which forecast errors in the ECMWF model develop is just equation 7: \\[e=(1+B)e_{0} \\tag{8}\\] where \\(e_{0}\\) is a vector of initial condition errors, \\(e\\) is a vector of final forecast errors, and \\(1+B\\) is a matrix representing the process by which the forecast errors grow. For simplicity, we will study a 2 dimensional system. We will assume that the matrix \\(B\\) varies in time, to represent variations in the state of the atmosphere. Our model for \\(B\\) is that each of the four matrix elements is independent of the other elements, independent of itself in time, and drawn from a standard normal distribution.
Our model for the real initial condition errors \\(e_{0}\\) is that these errors are drawn from a bivariate normal distribution with correlation of zero. Within this model the variations in the statistics of the distribution of forecast errors are driven by variations in the extent to which the error propagator \\(1+B\\) causes the initial condition distribution to grow. ### 3.1 Generating the real forecast uncertainty For each point in time (i.e. for each randomly generated \\(1+B\\)) we define the real forecast uncertainty by performing an ensemble of integrations over 1000 values for \\(e_{0}\\) sampled from the distribution of possible values (the bivariate normal). We take the resulting ensemble of 1000 values of \\(e\\) and calculate the distribution of the first element, to represent observing a single variable. The standard deviation of this distribution gives a measure of the real uncertainty in the forecast (by definition). ### 3.2 Predicting the forecast uncertainty using a full singular vector system We now imagine that we want to predict the uncertainty defined in section 3.1 using the singular vector method. We assume that we know the exact propagator for the forecast errors, i.e. we also use \\(1+B\\) to predict the uncertainty. In real forecast systems, the process by which forecast errors grow is not entirely understood; in our system, however, we understand it completely. We generate the ensemble prediction by calculating the two right singular vectors of \\(1+B\\), and using them to initialise \\(e_{0}\\) four times, for positive and negative versions of each singular vector (mimicking the ECMWF initialisation system). This generates an ensemble of 4 values for \\(e\\) (which are the left singular vectors scaled by their singular values) and the standard deviation of this ensemble gives the predicted uncertainty. We then compare the temporal variations in the predicted uncertainty with the temporal variations in the actual uncertainty. ### 3.3 Predicting the forecast uncertainty using the first singular vector We now imagine that we want to predict the uncertainty using the _truncated_ singular vector method. This is closer to the system used at ECMWF, which is based on the first 25 singular vectors of a system with many thousands of degrees of freedom. We now generate the ensemble prediction using only the _first_ singular vector of \\(1+B\\). We initialise, as before, using positive and negative versions of this singular vector. This generates an ensemble of 2 values for \\(e\\), and the standard deviation of this ensemble gives our prediction of the uncertainty. ### 3.4 Predicting the forecast uncertainty using the second singular vector Our third and final method for using singular vectors to predict the uncertainty uses only the _second_ singular vector. Otherwise this method is identical to the second method. ## 4 Results Figure 1 and figure 2 show examples of the initial conditions and forecast errors from the random initial condition model, along with the left singular vectors of the matrix \\(1+B\\) scaled by their singular values, for six arbitrarily chosen forecast days. We see that the initial condition error ball becomes an ellipse of final forecast errors, and that the singular vectors are aligned with the principal axes of this ellipse, exactly as we would expect.
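The whole experiment fits in a short numpy sketch (our illustration; the number of days, the ensemble size and the singular vector amplitudes are arbitrary, so only correlations and ratios of standard deviations are meaningful). It implements the three uncertainty predictions just described, together with the two calibrations, spread-scaling and shift-and-scale, analysed in the next subsections.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_ens = 2000, 1000
real, full, sv1, sv2 = (np.empty(n_days) for _ in range(4))

for d in range(n_days):
    M = np.eye(2) + rng.standard_normal((2, 2))     # error propagator 1 + B
    e = rng.standard_normal((n_ens, 2)) @ M.T       # propagate sampled errors
    real[d] = e[:, 0].std()                         # "real" uncertainty, variable 1
    U, s, _ = np.linalg.svd(M)
    lv = U * s                                      # columns: left SVs times singular values
    full[d] = np.vstack([lv.T, -lv.T])[:, 0].std()  # 4-member ensemble, both SVs
    sv1[d], sv2[d] = abs(lv[0, 0]), abs(lv[0, 1])   # truncated: one SV only

# Calibrations fitted here against 'real' for illustration only; in practice
# one would fit against past forecast error statistics instead.
def spread_scaling(pred):                           # one-parameter calibration
    return pred * real.mean() / pred.mean()

def shift_and_scale(pred):                          # two-parameter calibration
    b = real.std() / pred.std()
    return real.mean() - b * pred.mean() + b * pred

for name, p in (("full", full), ("first SV", sv1), ("second SV", sv2)):
    print(f"{name:9s} corr={np.corrcoef(real, p)[0, 1]:5.2f}  "
          f"sd ratio after scaling={spread_scaling(p).std() / real.std():.2f}")
cal = shift_and_scale(sv1)                          # matches mean and sd by construction
print(f"shift+scale (first SV): sd ratio={cal.std() / real.std():.2f}")
```

With this setup the full singular vector prediction correlates almost perfectly with the real uncertainty and needs only a single scaling, while the single-vector predictions keep an inflated variability even after spread-scaling, mirroring the results reported below.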
### 4.1 Temporal variation of the real forecast uncertainty We now consider the temporal variations in the real uncertainty, generated from the ensemble of initial conditions sampled from the bivariate normal distribution. Figure 3 shows a time series of 50 days of forecast uncertainty generated by the model. We see that the forecast uncertainty varies in time. This is because of the random variations in the matrix \\(B\\), which mimic the flow dependent changes in the processes that control the growth of forecast errors. The top left panels of figure 8, figure 9 and figure 10 show the distribution of the real uncertainty. This distribution is repeated as a dotted line in the other panels of these figures for purposes of comparison. ### 4.2 Temporal variation of the forecast uncertainty from the full singular vector model We now attempt to predict the temporal variations in the uncertainty using the full singular vector ensemble. A scatter plot of the real uncertainty (horizontal axis) and the predicted uncertainty (vertical axis) is shown in the lower left panel of figure 5. We see a strong relation between the two, although the predicted uncertainty is larger than the real uncertainty. The lower right panel of figure 8 shows the distribution of predicted uncertainty (solid line), which is clearly too wide relative to the real uncertainty (dotted line). We calculate the empirical correlation between this predicted uncertainty and the real uncertainty: it is very close to 1.0. This shows the value of using singular vectors: we can avoid having to sample the whole initial condition error ball by using vectors which efficiently span the forecast error space. We now consider calibration of the forecast, since the mean and the standard deviation of the uncertainty prediction are both wrong (as shown by the distribution of the uncertainty in figure 8). We try a very simple calibration consisting of a scaling of the uncertainty forecast (the "spread-scaling" method of Jewson (2003b)). Since we are dealing with a linear system this is equivalent to scaling the initial condition singular vectors. The effect of this calibration on the distribution is shown in the lower right panel of figure 9. We see that this simple calibration method succeeds in setting both the mean and the standard deviation of the uncertainty to be correct, even though there is only a single calibration parameter. A scatter plot of the calibrated forecasts is shown in figure 6, lower left panel. In summary, our full singular vector uncertainty forecast has a correlation with the real uncertainty of one, and, post-calibration, the mean and the standard deviation are correct. It is producing a perfect forecast of the actual uncertainty. ### 4.3 Temporal variation of the forecast uncertainty predicted from the first singular vector We now consider the temporal variations in the uncertainty predicted using just the first singular vector. This system is an analogy to the ECMWF prediction system, which uses a truncated set of singular vectors to create an ensemble. A time series of the predicted uncertainty is shown in figure 4 (dotted line) along with the real uncertainty (solid line). The upper left panel of figure 5 shows a scatter plot of the real and the predicted uncertainty. The upper right panel of figure 8 shows the distribution of the predicted uncertainty.
The correlation between this predicted uncertainty and the actual uncertainty is approximately 0.95: 3 estimates of this correlation from a set of independent experiments are shown in table 1. From the correlation and the scatter plot we see that the use of only a single singular vector is not as accurate as using the full set of singular vectors, as expected. The mean and the variability of the predicted uncertainty are wrong, and, as before, we can attempt to calibrate them. This time, however, when we calibrate with a simple scaling designed to correct the mean uncertainty, this does _not_ also correct the standard deviation of the uncertainty, as is shown in the top right panel of figure 9. A scatter plot of the results of this calibration is shown in the upper left panel of figure 6. The ratio of the standard deviation of the predicted uncertainty to the standard deviation of the actual uncertainty after the scaling calibration is given in the third column of table 1. We see that the calibrated forecast overestimates the variability of the uncertainty. In a real forecast system this would lead to overprediction of extreme events. We then use a more complex calibration system that corrects the predicted uncertainty using a shift and a scaling. Because such a calibration system has two parameters it can (and does) correct both the mean and the variance of the uncertainty (as is shown in the top right panel of figure 10), although it cannot, of course, improve the correlation. A scatter plot of the results of this calibration step is shown in the upper left panel of figure 7. ### 4.4 Temporal variation of the forecast uncertainty predicted from the second singular vector Finally we consider the temporal variations in uncertainty predicted using only the second singular vector. The correlation between the actual and the predicted uncertainty is close to zero: values from our 3 independent experiments are shown in table 2, and in the upper right panel of figure 5. The distribution of predicted uncertainty is shown in figure 8, lower left panel. Calibration using a single scaling again does not succeed in correcting both the mean and the variability of the uncertainty, and again the ratio of the variability of the predicted uncertainty to the real uncertainty is too high, as can be seen from the calibrated distribution in figure 9. In fact, the overestimation of the variability of the predicted uncertainty is considerably higher than when using the first singular vector. Scatter plots of the results from calibrating with a simple scaling, and with a shift and scaling, are shown in the top right panels of figures 6 and 7. The effect of a shift and scaling on the distribution is shown in figure 10. ## 5 Discussion We have investigated how singular vectors can be used to predict the evolution of errors in a simple two dimensional linear stochastic system. The system is designed to mimic the development of errors in forecasts of the real atmosphere, and the use of singular vectors is designed to mimic the use of singular vectors in the ECMWF ensemble prediction system. We generate (or rather _define_) the real uncertainty in our system by using initial conditions sampled from a bivariate normal distribution. We then use this real uncertainty as a basis for comparison for the uncertainty predicted using singular vectors. Our first singular vector system uses both singular vectors to predict the uncertainty.
The predictions are very successful, and show a 100% correlation with the actual uncertainty. However, they still need calibration. A simple calibration consisting of a scaling is sufficient to match both the mean and the variability of the predicted uncertainty to reality. Our second singular vector system uses only the leading singular vector to predict the uncertainty. These predictions are designed to mimic the ECMWF system, which uses a truncated set of singular vectors to predict forecast uncertainty. The predictions we generate from the truncated system are less successful and do not have 100% correlation with the actual uncertainty. They also overestimate the variability of the uncertainty by about 20%, even after a "spread-scaling" calibration step. Because of this, if we want to calibrate the forecast to have the correct mean and variability in the level of uncertainty, we need to use two parameters. Our third singular vector system uses only the second singular vector to predict the uncertainty. In this case the correlation with the real uncertainty is very poor. However, the variability of the uncertainty is again overestimated even after spread-scaling calibration. This shows that the overestimation of the variability of the uncertainty is not caused by the first singular vector per se, but just by the use of only a single singular vector, whichever it is. In cases where the correlation between the predicted and actual uncertainty is less than 100%, it is not necessarily best to force the mean and the variance of the uncertainty to be correct. In fact, in a real forecast system we cannot implement this method of calibration anyway, because we do not know the variability of the real uncertainty. An alternative calibration method is to calibrate the uncertainty prediction so as to maximise the log-likelihood, as used in Jewson et al. (2003). We will consider this possibility, and analyse the effect of such a calibration system on the mean and the variability of the uncertainty in the context of our simple model, in a subsequent article. To the extent that our system captures some of the dynamics of the full ECMWF forecast system, we conclude that the use of truncated singular vectors is one reason why Jewson et al. (2003) have found that the predictions of uncertainty from that system need calibration using third generation calibration models that treat the mean of the uncertainty and the variability of the uncertainty separately. It also suggests that forecasts from the ECMWF system calibrated using second generation calibration models will tend to overestimate the variability of the uncertainty and overpredict extreme events. One idea that arises from this work is that it might be worth trying to calibrate the uncertainty of ensemble forecasts using a standard CDF-based distribution transform, which would convert the spread of the forecast ensemble into a prediction of the uncertainty and enforce a sensible distribution for the latter. The calibration could fix the parameters of the predicted distribution using maximum likelihood. This might be a better calibration model than the spread regression model of Jewson et al. (2003), which only considers predictions of the uncertainty based on linear transformations of the ensemble standard deviation. ## 6 Legal statement The lead author was employed by RMS at the time that this article was written.
However, neither the research behind this article nor the writing of this article were in the course of his employment, (where 'in the course of his employment' is within the meaning of the Copyright, Designs and Patents Act 1988, Section 11), nor were they in the course of his normal duties, or in the course of duties falling outside his normal duties but specifically assigned to him (where 'in the course of his normal duties' and 'in the course of duties falling outside his normal duties' are within the meanings of the Patents Act 1977, Section 39). Furthermore the article does not contain any proprietary information or trade secrets of RMS. As a result, the lead author is the owner of all the intellectual property rights (including, but not limited to, copyright, moral rights, design rights and rights to inventions) associated with and arising from this article. The lead author reserves all these rights. No-one may reproduce, store or transmit, in any form or by any means, any part of this article without the author's prior written permission. The moral rights of the lead author have been asserted. ## References * Jewson (2003a) S Jewson. Maximum likelihood calibration of ensemble spread in a linear stochastic model. _In preparation_, 2003a. Technical report. * Jewson (2003b) S Jewson. Moment based methods for ensemble assessment and calibration. _arXiv:physics/0309042_, 2003b. Technical report. * Jewson et al. (2003) S Jewson, A Brix, and C Ziehmann. A new framework for the assessment and calibration of ensemble temperature forecasts. _Atmospheric Science Letters_, 2003. Submitted. * Leith (1974) C Leith. Theoretical skill of Monte Carlo forecasts. _Monthly Weather Review_, 102:409-418, 1974. * Molteni et al. (1996) F Molteni, R Buizza, T Palmer, and T Petroliagis. The ECMWF ensemble prediction system: Methodology and validation. _Q. J. R. Meteorol. Soc._, 122:73-119, 1996. * Roulston and Smith (2003) M Roulston and L Smith. Combining dynamical and statistical ensembles. _Tellus A_, 55:16-30, 2003. * Talagrand et al. (1997) O Talagrand, R Vautard, and B Strauss. Evaluation of probabilistic prediction systems. In _Proceedings, ECMWF Workshop on Predictability_, pages 1-25, available from ECMWF, Shinfield Park, Reading RG2 9AX, UK, 1997. * Toth and Kalnay (1993) Z. Toth and E. Kalnay. Ensemble forecasting at NMC: The generation of perturbations. _Bull. Am. Meteorol. Soc._, 74:2317-2330, 1993. \\begin{table} \\begin{tabular}{l l l} expt & correlation & sd ratio \\\\ 1 & 0.95 & 1.22 \\\\ 2 & 0.95 & 1.20 \\\\ 3 & 0.96 & 1.20 \\\\ \\end{tabular} \\end{table} Table 1: The second column shows the correlation between the uncertainty predicted using the first singular vector and the real uncertainty. The third column shows the overestimation of the uncertainty after calibration using spread-scaling (where the real uncertainty is 1). The three rows show three independent numerical experiments. \\begin{table} \\begin{tabular}{l l l} expt & correlation & sd ratio \\\\ 1 & -0.03 & 1.89 \\\\ 2 & 0.04 & 1.99 \\\\ 3 & -0.02 & 1.79 \\\\ \\end{tabular} \\end{table} Table 2: As for table 1 but for the uncertainty predicted using the second singular vector. Figure 1: Initial conditions, forecasts and left singular vectors scaled by their singular values, examples 1, 2, 3. Figure 2: Initial conditions, forecasts and left singular vectors scaled by their singular values, examples 4, 5, 6. Figure 3: Time series of the real uncertainty, generated using an ensemble of 1000 members based on initial conditions from a bivariate normal distribution.
Figure 4: Time series of the real uncertainty (solid line) with the uncalibrated prediction of the uncertainty from the first singular vector (dotted line).

Figure 5: Scatter between the real uncertainty (horizontal axes) and the three predictions of the uncertainty (vertical axes). The top left panel shows the uncertainty predicted from the first singular vector, the top right panel the uncertainty predicted from the second singular vector, and the lower left panel the uncertainty predicted from both singular vectors. All predicted uncertainties are uncalibrated. In the top left panel the predicted uncertainty is strongly related to the actual uncertainty: the points on the diagonal line are situations where the real uncertainty is dominated by the first singular vector, and hence where the prediction using the first singular vector is a good one. The points below the diagonal line correspond to situations where the first singular vector is less important (presumably because it is more or less orthogonal to the observation axis) and where the second singular vector becomes important; in these cases the predictions of uncertainty are poor because the second singular vector is not being used. In the top right panel the correlation is much lower. There is only a very weak diagonal line, corresponding to the situations in which the second singular vector dominates the uncertainty; the points below this line correspond to situations where the first singular vector dominates, for which the real uncertainty is often large but poorly predicted. In the lower left panel there is a very high correlation between real and predicted uncertainty, with only a very small spread due to sampling errors. The slope of the diagonal line is not one: the predictions have a larger standard deviation than the real uncertainty.

Figure 6: As figure 5, but for the uncertainties calibrated using spread-scaling. In the lower left panel the slope of the line is now 1: only a single scaling is needed to calibrate the forecast.

Figure 7: As figures 5 and 6, but for the uncertainties calibrated using a shift and a scaling.

Figure 8: Distributions of uncertainty, derived from 1000 samples using kernel smoothing. The top left panel shows the distribution of the real uncertainty; this curve is repeated in the other panels as a dotted line. The top right panel shows the distribution of uncertainty predicted using the first singular vector, the lower left panel the distribution predicted using the second singular vector, and the lower right panel the distribution predicted using both singular vectors. All three predicted uncertainties are uncalibrated. Some of these curves show non-zero density for negative values; this is an artefact of the kernel smoothing. In the top right panel the predicted density (solid line) is much wider than the actual density (dotted line), as is also the case in the lower right panel.

Figure 9: As figure 8, but for predicted uncertainties calibrated using spread-scaling. In the lower right panel the calibration has rendered the predicted distribution more or less correct, while in the top right panel the distribution is still too wide. This, we believe, is analogous to the overestimation of the variability of the uncertainty seen in real ensemble forecasts by Jewson et al. (2003).
Figure 10: As figure 8, but for predicted uncertainties calibrated using a shift and a scaling. The distribution in the top right panel is now more or less correct.
## Summary

The ECMWF ensemble weather forecasts are generated by perturbing the initial conditions of the forecast using a subset of the singular vectors of the linearised propagator. Previous results show that, when creating probabilistic forecasts from this ensemble, better forecasts are obtained if the mean of the spread and the variability of the spread are calibrated separately. We show results from a simple linear model which suggest that this may be a generic property of all singular-vector-based ensemble forecasting systems that use only a subset of the full set of singular vectors.
# Bayesian Estimation for Land Surface Temperature Retrieval: The Nuisance of Emissivities

J. A. Morgan
The Aerospace Corporation
P. O. Box 92957
Los Angeles, CA 90009

## I Introduction

This paper derives the joint prior probability for surface temperature and emissivity for the land surface temperature retrieval problem in remote sensing. It presents the analysis necessary for formulating a Bayesian approach to that problem, together with a Monte Carlo simulation of land surface temperature (LST) and surface emissivity extractions. After a brief description of the problem and the method of attack, the maximum entropy estimator for the mismatch between sensed and forward-model radiance is given. Next, the joint prior probability for surface temperature and emissivity is obtained; this quantity is required in order to construct a usable estimator for surface temperature and emissivity. With the prior probability in hand, it is a simple matter to construct expressions for the expected values of surface temperature and emissivity consistent with sensor aperture radiances and available prior knowledge. Finally, a sample temperature-emissivity separation is presented using MODTRAN calculations both for the forward model and for simulated sensor radiances.

(c)2004 The Aerospace Corporation

## II The Temperature-Emissivity Separation Problem and its Discontents

Increasingly capable remote sensing technology has sparked interest in exploiting thermal emission from surfaces, both for remote sensing of surface temperature and of emissivity. On the one hand, surface temperature studies form a significant portion of the science goals of MODIS, ASTER, and MTI, while AVHRR has been used operationally for sea surface and land surface temperature studies for many years. On the other, the use of emissivity information in thermal portions of the spectrum for geological remote sensing has grown rapidly in recent years, and is as prominent in the science goals of MODIS and similar instruments as is surface temperature. Accordingly, the problem of temperature-emissivity separation merits close examination.

The entirety of the information about a radiating surface comes from the thermal radiation it emits, conventionally parameterized as the product of blackbody radiation at the surface temperature \\(T\\) and the emissivity \\(\\epsilon_{k}\\),

\\[I_{s}=\\epsilon_{k}B_{k}(T) \\tag{1}\\]

at each wavenumber \\(k\\). Suppose one observes a radiating patch of a surface in each of \\(n\\) wavenumber intervals, and that one knows how to correct for the effects of line-of-sight (LOS) attenuation and contamination by radiance from other sources. One then has \\(n\\) equations of the form (1), but \\(n+1\\) unknowns, including the surface temperature. In the absence of knowledge about \\(T\\) or \\(\\epsilon_{k}\\) from extraneous sources, one has an underdetermined problem.

A variety of methods has been proposed for handling the temperature-emissivity separation (TES) problem[1]. In most approaches, simultaneous LST and band emissivity retrieval depends upon specifying an emissivity value in one or more reference bands. The MODIS Land Surface Temperature (LST) algorithm[2] seeks a pair of reference channels in a part of the thermal spectrum in which the emissivity of natural surfaces displays very limited variation, and may therefore be regarded as known with good confidence. Multiband emissivities inferred on this basis are called "relative" emissivities[3].
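The counting problem can be made concrete in a few lines. The sketch below is not from the paper: it uses the standard SI form of the Planck function in wavenumber (rather than the natural-unit form quoted later as (27)), and all band positions and values are illustrative. It shows that, with emissivities unconstrained, any trial temperature reproduces a given set of band radiances exactly; only a physical bound such as \\(\\epsilon_{k}\\leq 1\\), of the kind the Bayesian treatment below builds into its integration limits, rules trial temperatures out.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck(k, T):
    """Blackbody spectral radiance per unit wavenumber k [1/m]."""
    return 2.0 * H * C**2 * k**3 / np.expm1(H * C * k / (KB * T))

# "observed" radiances in n = 3 LWIR bands (800-1000 cm^-1) from a
# surface at 300 K with true emissivity 0.95 in every band
k_bands = np.array([800e2, 900e2, 1000e2])   # wavenumbers in 1/m
I_obs = 0.95 * planck(k_bands, 300.0)

# n equations I = eps * B_k(T), but n + 1 unknowns: for ANY trial T the
# choice eps = I / B_k(T) fits the data exactly...
for T_trial in (290.0, 300.0, 310.0):
    eps = I_obs / planck(k_bands, T_trial)
    print(T_trial, np.round(eps, 3))   # ...but at 290 K eps exceeds 1,
                                       # which is unphysical
```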
Relative-emissivity algorithms of this kind include the reference channel method[4] and emissivity normalization[5]. In the former, a value of emissivity is assumed for one band, and in the latter, an approximate surface temperature is obtained by noting that emissivity cannot exceed unity, in order to close the system of equations. Other relative emissivity retrieval methods include the temperature-independent spectral index method[6],[7] and spectral ratios[8]. The study by Li et al.[3] shows that all of these relative emissivity retrieval algorithms are closely related, and argues that they may be expected to show comparable performance.

A different approach has been proposed for analysis of Multispectral Thermal Imager (MTI) data[9], in which radiances are collected from a surface with looks at nadir and 60 degrees off-nadir, assuming a known angular dependence of emissivity, in order to balance equations and unknowns. The generalized split-window LST algorithm[10] likewise uses dual looks in a regression-law based approach. The basis of the "grey body emissivity" approach[11] is the slow variation of emissivity with wavelength for certain natural targets. The physics-based MODIS LST algorithm[12] exploits observations taken by day and by night, on the assumption that band emissivities do not change over periods of a few weeks. It is clear that the methods described depend upon _a priori_ assumptions about the variation of emissivity, either with wavelength, or with look angle, or over time, from which one would like to be free.

The work described in this paper uses Bayesian inference to retrieve estimates of surface parameters. This approach allows one to treat emissivities as "nuisance" parameters which may be integrated out of a posterior distribution function between parsimoniously chosen, and hence "uninformative," limits. It might appear odd to use, as an approach to the separation of temperature and emissivity, a Bayesian estimator which, in essence, allows one to ignore the actual value of emissivity. Equation (1) shows that thermal radiance is linear in emissivity. However, the Planck function goes as a fairly high power of the temperature in the LWIR, and is close to exponential in temperature in the MWIR. Any roughness in the treatment of sensed radiance, such as allowing the assumed emissivity in the estimator to take on a wide range of values, may therefore be expected to lead to comparatively small errors in the inferred surface temperature. In fact, it turns out that the posterior distribution for surface temperature to be developed gives sharp limits to the allowable surface temperature even in the presence of considerable latitude in the value of possible emissivities. In most cases, only a narrow range of surface temperature is consistent with the sensed radiance in multiple bands, whatever is assumed about emissivity.

Once a reasonably good estimate of surface temperature is in hand, it is a simple matter to insert it into estimators for the individual band emissivities, and for the uncertainty in those values consistent with available knowledge. The _a priori_ limits on emissivity may then be contracted, and a new estimate of surface temperature obtained. The expectation values of surface temperature and emissivity may thus be refined iteratively. It is also possible to search for a (local) maximum of the posterior likelihood for these parameters.
Because the TES problem is underdetermined, this will not give a unique global maximum, but, given the insensitivity of surface temperature estimates to small emissivity errors, the local maximum may be expected to give results close to the physical values for the parameters of interest.

## III Maximum Entropy Estimators for Surface Parameters

Consider the problem of estimating surface temperature and emissivity from radiance detected by a remote sensor. The sensor supplies measurements of radiance \\(I\\) at the aperture. A forward model is required to compute the value of aperture radiance as a function of, among other things, the parameters we wish to extract. Assume initially (for simplicity) that the sensor has fine spectral resolution. The forward model radiance may be described at each wavenumber \\(k\\) by a form of the Duntley equation [13]

\\[I_{F}(k)=\\epsilon_{k}B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})+\\frac{\\rho_{k}}{\\pi}F_{k}^{\\downarrow}(0)exp(-\\frac{\\tau_{k}}{\\mu})+I_{k}^{\\uparrow}(\\tau,\\mu) \\tag{2}\\]

\\(I_{k}^{\\uparrow}(\\tau,\\mu)\\) and \\(F_{k}^{\\downarrow}(0)\\) are the upwelling diffuse radiance at nadir optical depth \\(\\tau\\) (top of the atmosphere, or TOA, for spaceborne sensors; \\(\\mu\\) is the cosine of the angle with respect to zenith) and the downwelling irradiance at the surface, respectively. \\(B_{k}(T)\\) is the Planck function at surface temperature \\(T\\). The emissivity is \\(\\epsilon_{k}\\), and the surface reflectance \\(\\rho_{k}=1-\\epsilon_{k}\\). The form of (2) is what one would get assuming a Lambertian surface obeying Kirchhoff's law. The analysis presented below does not depend upon a Lambertian approximation to surface reflectances; in fact, it makes no assumption regarding their angular behavior [14]. In what follows it will be assumed that the only unknown quantities in the preceding equation are \\(T\\) and \\(\\epsilon_{k}\\). Generalization of the analysis which follows to the case of reflectance not equal to one minus the emissivity poses no difficulties.

An estimator for the probability that, given observed radiances \\(I(k)\\), the surface parameters \\(T\\) and \\(\\epsilon_{k}\\) take on particular values, is constructed in the following way [15],[16],[17] (_vide._ also a related discussion in Landau and Lifshitz[18], pp. 343-5). The posterior probability for \\(T\\) and \\(\\epsilon_{k}\\) is given by Bayes' theorem as

\\[P(T,\\epsilon_{k}\\mid I,K)=P(T,\\epsilon_{k}\\mid K)\\frac{P(I\\mid T,\\epsilon_{k},K)}{P(I\\mid K)} \\tag{3}\\]

where \\(K\\) denotes available knowledge. The quantity \\(P(I\\mid K)\\), the prior probability of the radiance \\(I\\), may be absorbed into an overall normalization and does not concern us further. It may be that the surface \\(T\\) is of interest, whatever the value of emissivity. In this case, one is free to denigrate \\(\\epsilon_{k}\\) as a "nuisance" parameter and integrate it out of (3), as will be done below.

Consider the remaining factors in (3) in turn, starting with the direct probability \\(P(I\\mid T,\\epsilon_{k},K)\\) of observing radiance \\(I\\) given \\(T\\), \\(\\epsilon_{k}\\), and other _a priori_ knowledge \\(K\\). With the aid of the forward model, it is possible to recast this quantity in more tractable form. By hypothesis,

\\[I(k)=I_{F}(k)+e_{k} \\tag{4}\\]

where the error in spectral radiance \\(e_{k}\\) is attributed to noise processes. The prior probability for the noise \\(P(e_{k}\\mid T,\\epsilon_{k},K)\\) is now obtained by a maximum entropy argument.
If the noise power is assumed known, the noise probability is the function which maximizes the information-theoretic entropy subject to constraints imposed by the value of the noise power and overall normalization of probability, \\[S=-\\int_{-\\infty}^{+\\infty}P(e\\mid K)log(P(e\\mid K))de-\\] \\[\\lambda_{1}\\int_{-\\infty}^{+\\infty}e^{2}P(e\\mid K)de-\\lambda_{2} \\int_{-\\infty}^{+\\infty}P(e\\mid K)de. \\tag{5}\\] The function maximizing (5) is a Gaussian, \\[P(e\\mid K)=\\frac{1}{\\sqrt{2\\pi}\\sigma}exp\\left[-\\frac{e^{2}}{2\\sigma^{2}}\\right] \\tag{6}\\] where the Lagrange multipliers for noise power and normalization have been written in terms of the RMS noise radiance \\(\\sigma\\). Upon substituting (4) for the noise term, the probability of detecting a radiance \\(I\\) given \\(T\\), \\(\\epsilon\\), and noise \\(\\sigma\\) becomes \\[P(I\\mid T,\\epsilon,\\sigma)=exp\\left[-\\frac{(I-I_{F})^{2}}{2\\sigma^{2}}\\right] \\frac{dI}{\\sigma} \\tag{7}\\] This is also the likelihood function for \\(T\\) and \\(\\epsilon\\). In order to formulate an estimator for \\(T\\) and \\(\\epsilon\\), it remains to find the prior probability \\[P(T,\\epsilon\\mid K)=f(T,\\epsilon)dTd\\epsilon \\tag{8}\\] (The appropriate prior for noise is known to be the Jeffreys form, but is omitted here because it is assumed that the noise contribution is known.) We now follow Jaynes' prescription for finding an uninformative prior probability[19],[20],[21]. Assume two equivalent observers record the same sensor aperture radiance originating as thermal radiation from a surface, and interpret it in terms of Planckian emission characterized by a surface temperature and emissivities, subject to LOS attenuation. Vladimir detects surface thermal emission \\(I\\) in a solid angle \\(\\Omega\\), and describes the surface with parameters \\(T\\) and \\(\\epsilon\\), and the attenuation with optical depth \\(\\frac{\\tau}{\\mu}\\): \\[I=\\epsilon_{k}B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu}) \\tag{9}\\] He assigns prior probability in light of his knowledge regarding the problem \\[f(T,\\epsilon)dTd\\epsilon \\tag{10}\\] On the other hand, Estragon agrees with Vladimir on the definition of the Planck function, emissivity, and LOS attenuation, but describes the same situation with surface emission \\(I^{\\prime}\\) in \\(\\Omega\\)', and parameters \\(T^{\\prime}\\), \\(\\epsilon^{\\prime}\\), and \\(\\frac{\\tau^{\\prime}}{\\mu^{\\prime}}\\), reporting \\[I^{\\prime}=\\epsilon^{\\prime}_{k^{\\prime}}B_{k^{\\prime}}(T^{\\prime})exp(-\\frac {\\tau^{\\prime}_{k^{\\prime}}}{\\mu^{\\prime}}) \\tag{11}\\] and assigning the prior probability \\[g(T^{\\prime},\\epsilon^{\\prime})dT^{\\prime}d\\epsilon^{\\prime} \\tag{12}\\] In order for the pair to agree as to the form of the estimator, the priors must be related by \\[g(T^{\\prime},\\epsilon^{\\prime})dT^{\\prime}d\\epsilon^{\\prime}=J^{-1}f(T, \\epsilon)dTd\\epsilon \\tag{13}\\] where \\[J=det\\left[\\frac{\\partial(T^{\\prime},\\epsilon^{\\prime})}{\\partial(T,\\epsilon)}\\right] \\tag{14}\\] is the Jacobian determinant for the transformation between descriptions in the parameter space. Assuming both are sober, Vladimir and Estragon must always be able to relate their descriptions of the sensed radiance by a Lorentz transformation. Let us consider active transformations for concreteness. 
Suppose that Vladimir wishes to describe events in a frame of reference moving at velocity \\(\\beta=v/c\\) along the observation axis, denoted \\(x\\), with respect to the frame preferred by Estragon. (It is convenient, although not actually necessary, to suppose that the \\(x\\) axis is also the axis of photon propagation.) Lorentz invariance requires that Vladimir's (unprimed) and Estragon's (primed) description of events be invariant under the Lorentz transformation given by \\[x^{\\prime}=\\gamma(x-\\beta ct) \\tag{15}\\] \\[t^{\\prime}=\\gamma(t-\\beta x/c) \\tag{16}\\] where \\[\\gamma=\\frac{1}{\\sqrt{1-\\beta^{2}}}. \\tag{17}\\] The four-momentum of a photon travelling along the x-axis is \\[\\mathsf{p}=\\left(\\begin{array}{c}\\hbar k/c\\\\ \\hbar k\\\\ 0\\\\ 0\\end{array}\\right). \\tag{18}\\] Applying (15) and (16) to the components of (18), we see that Estragon and Vladimir relate their description of frequency or wavenumber by \\[k^{\\prime}=\\gamma(1-\\beta)k\\] \\[\\equiv\\eta k. \\tag{19}\\] How does the pair relate their descriptions of radiance? Let a bundle of \\(\\delta n\\) photons with mean energy \\(p^{0}=\\hbar ck\\) and uncertainty \\(\\delta p^{0}=\\hbar c\\delta k\\) originate in a small area \\(\\delta A\\) of the radiating surface in a small time interval \\(\\delta t\\), collimated within a small solid angle \\(\\delta\\Omega\\), and propagate unattenuated to an observer. The surviving photons arriving at the observer's location comprise a collisionless photon gas. A single photon in the bundle occupies a phase space volume \\[V_{x}\\times V_{k}=\\delta A(c\\delta t)\\times\\hbar^{3}k^{2}\\delta k\\delta\\Omega \\tag{20}\\] while the bundle occupies a \\(6\\,\\delta n\\)-dimensional phase space volume \\[V_{phase}=\\left[V_{x}V_{k}\\right]^{\\delta n}. \\tag{21}\\]Equation (21) is invariant at any point on a photon trajectory. According to a standard result in statistical physics, Liouville's theorem ([18], pp. 9-10; 178), it has the same value at every point on that trajectory. So long as the photons remain collisionless they can neither leave their original volume of phase space, nor enter another. However Vladimir or Estragon choose to describe the patch of emitting surface \\(\\delta A\\), the time interval \\(\\delta t\\), the solid angle interval \\(\\delta\\Omega\\) or the photon wavenumber \\(k\\), they must agree as to the number of photons \\(\\delta n\\) in the bundle. Hence, both (20) and the ratio \\[N=\\frac{\\delta n}{V_{x}V_{k}}=\\frac{\\delta n}{\\hbar^{3}\\delta A\\delta tk^{2} \\delta k\\delta\\Omega} \\tag{22}\\] are invariant along a photon trajectory. Spectral radiance is defined as \\[I_{k}\\equiv\\frac{d(energy)}{d(time)d(area)d(frequency)d(solidangle)}. \\tag{23}\\] Rewriting the radiance as \\[I_{k}=\\frac{\\hbar k\\delta n}{\\delta A\\delta t\\delta k\\delta\\Omega}=\\hbar^{4}k ^{3}N, \\tag{24}\\] gives \\[\\frac{I_{k}}{k^{3}}=\\text{invariant} \\tag{25}\\] for any component of the total radiance along a given line of sight[22]. Equation (25) has the same value in any frame of reference[23], with two consequences for this problem: 1. The Planck function obeys \\[\\frac{B_{\\eta k}(\\eta T)}{\\eta^{3}}=B_{k}(T) \\tag{26}\\] as may also be seen by direct substitution in \\[B_{k}(T)=\\frac{1}{\\pi^{2}}\\frac{k^{3}}{\\left[exp\\left[\\frac{\\hbar ck}{k_{B}T} \\right]-1\\right]}. \\tag{27}\\] 2. 
Vladimir and Estragon must agree that the attenuated surface emission obeys \\[\\frac{\\epsilon_{k^{\\prime}}^{\\prime}B_{k^{\\prime}}(T^{\\prime})exp(-\\frac{ \\tau_{k^{\\prime}}^{\\prime}}{\\mu^{\\prime}})}{k^{\\prime 3}}=\\frac{\\epsilon_{k}B_{k}(T) exp(-\\frac{\\tau_{k}}{\\mu})}{k^{3}} \\tag{28}\\] Consider first the case of no attenuation, \\(\\tau=0\\). Then \\[\\frac{\\epsilon_{k^{\\prime}}^{\\prime}B_{k^{\\prime}}(T^{\\prime})}{k^{\\prime 3}}= \\frac{\\epsilon_{k}B_{k}(T)}{k^{3}} \\tag{29}\\] or \\[\\epsilon_{k^{\\prime}}^{\\prime}B_{k^{\\prime}}(T^{\\prime})=\\eta^{3}\\epsilon_{k} B_{k}(T). \\tag{30}\\] One also has, from (26), \\[B_{\\eta k}(T^{\\prime})=\\eta^{3}B_{k}(T^{\\prime}/\\eta). \\tag{31}\\] Combining (19), (30), and (31) gives \\[\\epsilon_{k^{\\prime}}^{\\prime}B_{k}(T^{\\prime}/\\eta)=\\epsilon_{k}B_{k}(T). \\tag{32}\\] Now, Vladimir and Estragon also agree that, while they must lie between 0 and 1, emissivities are otherwise completely arbitrary functions of wavenumber, and by hypothesis have no dependence upon temperature. In \\[\\frac{B_{k}(T^{\\prime}/\\eta)}{B_{k}(T)}=\\frac{\\epsilon_{k}}{\\epsilon_{\\eta k} ^{\\prime}} \\tag{33}\\] the right-hand side can have no dependence upon \\(T\\) or \\(T^{\\prime}\\), while the left-hand side cannot be an arbitrary function of \\(\\eta\\) or \\(k\\). The only remaining possibility is \\[\\frac{B_{k}(T^{\\prime}/\\eta)}{B_{k}(T)}=\\frac{\\epsilon_{k}}{\\epsilon_{\\eta k} ^{\\prime}}=\\text{const.} \\tag{34}\\] The set of Lorentz transformations forms a group[24], so this relation holds for the identity with \\(\\beta=0\\), \\(\\gamma=\\eta=1\\). The constant must therefore equal unity. Next allow \\(\\tau\\) to differ from zero in (28). In \\[\\frac{k^{3}\\epsilon_{k^{\\prime}}^{\\prime}B_{k^{\\prime}}(T^{\\prime})}{k^{\\prime 3 }\\epsilon_{k}B_{k}(T)}=\\frac{exp(-\\frac{\\tau_{k}}{\\mu})}{exp(-\\frac{\\tau_{k^{ \\prime}}}{\\mu^{\\prime}})} \\tag{35}\\] the left-hand side has, by hypothesis, no dependence upon LOS transmission, while the right-hand has no dependence upon surface properties so, again, both sides equal a constant, and, from (26) and (34), we find \\[\\frac{k^{3}\\epsilon_{\\eta k}^{\\prime}B_{k^{\\prime}}(T^{\\prime})}{k^{\\prime 3 }\\epsilon_{k}B_{k}(T)}=\\frac{\\epsilon_{k^{\\prime}}^{\\prime}B_{\\eta k}(\\eta T)} {\\epsilon_{k}\\eta^{3}B_{k}(T)}=1 \\tag{36}\\] or \\[\\frac{exp(-\\frac{\\tau_{k}}{\\mu})}{exp(-\\frac{\\tau_{k^{\\prime}}}{\\mu^{\\prime}} )}=1 \\tag{37}\\] LOS attenuation does not affect the validity of (34). Thus, the most general relation which respects a Lorentz transformation carrying wavenumber \\(k\\) to \\(k^{\\prime}=\\eta k\\) is \\[T^{\\prime}=\\eta T \\tag{38}\\] \\[\\epsilon_{k^{\\prime}}^{\\prime}=\\epsilon_{\\eta k}^{\\prime}=\\epsilon_{k}. \\tag{39}\\] The Jacobian is therefore \\[J=\\eta \\tag{40}\\] and \\[f(T,\\epsilon_{k})=\\eta g(\\eta T,\\epsilon_{\\eta k}^{\\prime}). \\tag{41}\\] Invocation of the principle of indifference1 to assert Estragon and Vladimir must use the identical description of events, and thus assign the same prior probabilities, Footnote 1: As given by Jaynes[19], p. 128, in an extension of the original concept introduced by Laplace to encompass indifference between descriptions by distinct but equally cogent observers. \\[f(T,\\epsilon)=g(T,\\epsilon) \\tag{42}\\] leads to the functional equation \\[f(T,\\epsilon_{k})=\\eta f(\\eta T,\\epsilon_{k}). 
\\tag{43}\\]

The solution of (43) is

\\[f(T,\\epsilon)=\\frac{\\text{const.}}{T} \\tag{44}\\]

yielding

\\[f(T,\\epsilon)dTd\\epsilon=\\frac{\\text{const.}}{T}dTd\\epsilon \\tag{45}\\]

for the prior probability.

One now argues that this form of the prior is least informative as to emissivity. No functional dependence upon a parameter should enter the form of the prior probability that is not imposed by the requirements of invariance and indifference. Any such dependence would amount to the admission that we possess additional knowledge about emissivity beyond that assumed. That is, (45) is the unique choice of prior probability that assumes nothing about the value of \\(\\epsilon_{k}\\) beyond what is dictated by the problem statement. A standard argument (found, for example, in [15], pp. 9-15) then shows that prior knowledge about limits on the value of emissivity should appear in the limits of integration used in constructing marginal distributions for \\(T\\).

Surface temperature thus obeys the Jeffreys prior, while emissivity obeys the Bayes prior. Both results may appear somewhat surprising, especially that for emissivity. From the manner in which it appears in the expression for radiance, one's naive expectation might be that emissivity is a scale parameter. However, the relation between the descriptions of emissivity as seen by Vladimir and Estragon more resembles what one would expect of a location parameter: they must agree on the value of emissivity, but are free to assign it to different wavenumbers.

The result just obtained will now be extended to the situation in which radiance is sensed in bands wide enough that it cannot be regarded as a function of wavenumber, but must be treated as an integral over a passband. One then writes, for the contribution of surface emission to the total radiance at the sensor aperture in band \\(i\\),

\\[\\int_{k_{1}}^{k_{2}}\\!\\!\\!\\epsilon_{k}B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})dk\\equiv\\epsilon_{i}\\int_{k_{1}}^{k_{2}}\\!\\!\\!B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})dk \\tag{46}\\]

It is always possible to do this by the mean value theorem for integrals, and it is frequently the case that the right-hand side of (46) expresses all available knowledge concerning the radiant properties of the emitting surface.
Vladimir describes the surface emission by \\[\\epsilon_{i}\\int_{k_{1}}^{k_{2}}\\!\\!B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})dk=\\int_ {k_{1}}^{k_{2}}\\!\\!\\!\\epsilon_{k}B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})dk \\tag{47}\\] with \\[\\epsilon_{k}=\\left\\{\\begin{array}{c}0,k<k_{1}\\\\ \\epsilon_{i},k_{1}\\leq k\\leq k_{2}\\\\ 0,k>k_{2}\\end{array}\\right.\\] while, by (37)-(39), Estragon describes things by \\[\\int_{\\gamma k_{1}}^{\\gamma k_{2}}\\epsilon_{k^{\\prime}}^{\\prime}B_{k^{\\prime} }(\\gamma T)exp(-\\frac{\\tau_{k^{\\prime}}^{\\prime}}{\\mu^{\\prime}})dk^{\\prime}\\] \\[\\equiv\\epsilon_{i}^{\\prime}\\int_{\\gamma k_{1}}^{\\gamma k_{2}}B_{k^{\\prime}}( \\gamma T)exp(-\\frac{\\tau_{k^{\\prime}}^{\\prime}}{\\mu^{\\prime}})dk^{\\prime} \\tag{48}\\] with \\[\\epsilon_{k^{\\prime}}^{\\prime}=\\left\\{\\begin{array}{c}0,k^{\\prime}<\\gamma k _{1}\\\\ \\epsilon_{i},\\gamma k_{1}\\leq k^{\\prime}\\leq\\gamma k_{2}\\\\ 0,k^{\\prime}>\\gamma k_{2}\\end{array}\\right.\\] Comparison of the two expressions for surface emission, (47) and (48), leads to the immediate conclusion that the Jacobian connecting the two descriptions of surface temperature and band emissivity is \\[J=det\\left[\\frac{\\partial(T^{\\prime},\\epsilon^{\\prime})}{\\partial(T,\\epsilon )}\\right]=\\gamma, \\tag{49}\\] and \\[f(T,\\epsilon)dTd\\epsilon=\\frac{\\text{const.}}{T}dTd\\epsilon \\tag{50}\\] once more. The result just obtained allows us to derive estimators for surface temperature and emissivity. The starting point is a calculation of the marginal posterior probability for T given observed radiance in a finite number of bands when the surface emissivity in band \\(i\\) is known to lie between \\(\\epsilon_{min}(i)\\) and \\(\\epsilon_{max}(i)\\). This quantity is computed for each band by integrating (3) between these limits, upon inserting (7) and (45). Evaluating the integral requires completing the square in the exponent of (7). 
To accomplish this, define auxiliary quantities \\(a\\), \\(b\\), and \\(c\\), obtained from (2):

\\[a=\\left[\\int_{k_{1}}^{k_{2}}\\left(B_{k}(T)-\\frac{1}{\\pi}F_{k}^{\\downarrow}(0)\\right)exp(-\\frac{\\tau_{k}}{\\mu})dk\\right]^{2}, \\tag{51}\\]

\\[b=b_{1}b_{2} \\tag{52}\\]

with

\\[b_{1}=2\\left[\\int_{k_{1}}^{k_{2}}\\left(B_{k}(T)-\\frac{1}{\\pi}F_{k}^{\\downarrow}(0)\\right)exp(-\\frac{\\tau_{k}}{\\mu})dk\\right] \\tag{53}\\]

\\[b_{2}=\\left[\\int_{k_{1}}^{k_{2}}\\left(\\frac{1}{\\pi}F_{k}^{\\downarrow}(0)exp(-\\frac{\\tau_{k}}{\\mu})+I_{k}^{\\uparrow}(\\tau,\\mu)\\right)dk-I_{i}\\right], \\tag{54}\\]

and

\\[c=\\left[\\int_{k_{1}}^{k_{2}}\\left(\\frac{1}{\\pi}F_{k}^{\\downarrow}(0)exp(-\\frac{\\tau_{k}}{\\mu})+I_{k}^{\\uparrow}(\\tau,\\mu)\\right)dk-I_{i}\\right]^{2} \\tag{55}\\]

Then (dropping the subscript \\(i\\) for the moment) (3) and (7) give

\\[P(T,\\epsilon\\mid I,\\sigma)\\propto\\frac{1}{\\sqrt{2\\pi}\\sigma}exp\\left[-\\frac{(a\\epsilon^{2}+b\\epsilon+c)}{2\\sigma^{2}}\\right] \\tag{56}\\]

The marginal distribution obtained by integrating over the nuisance parameter \\(\\epsilon\\) is

\\[P(T\\mid I,\\sigma)\\propto\\frac{1}{\\sqrt{2\\pi}\\sigma}\\int_{\\epsilon_{min}}^{\\epsilon_{max}}\\!\\!\\exp\\left[-\\frac{(a\\epsilon^{2}+b\\epsilon+c)}{2\\sigma^{2}}\\right]d\\epsilon \\tag{57}\\]

Completing the square in the exponent allows this to be written as

\\[P(T\\mid I,\\sigma)\\propto\\frac{1}{\\sqrt{a}}exp\\left[-\\frac{\\left[c-b^{2}/4a\\right]}{2\\sigma^{2}}\\right]H(\\epsilon_{max},\\epsilon_{min}) \\tag{58}\\]

where

\\[H(\\epsilon_{max},\\epsilon_{min})=erf\\left[\\frac{\\sqrt{a/2}(\\epsilon_{max}+b/2a)}{\\sigma}\\right]-erf\\left[\\frac{\\sqrt{a/2}(\\epsilon_{min}+b/2a)}{\\sigma}\\right] \\tag{59}\\]

for each band \\(i\\). In (59) the error function is

\\[erf(x)=\\frac{2}{\\sqrt{\\pi}}\\int_{0}^{x}exp\\left(-t^{2}\\right)dt \\tag{60}\\]

The joint posterior probability for observing radiances \\(I_{i},i=1,n\\) is

\\[P(T\\mid I_{i,i=1,n},\\sigma)=\\prod_{i=1}^{n}P(T\\mid I_{i},\\sigma). \\tag{61}\\]

Assuming \\(T\\) is known to lie between a minimum and a maximum, an estimator for \\(T\\) given radiance in band \\(i\\) is

\\[\\langle T\\rangle=\\frac{\\int_{T_{min}}^{T_{max}}TP(T\\mid I_{i},\\sigma)\\frac{dT}{T}}{\\int_{T_{min}}^{T_{max}}P(T\\mid I_{i},\\sigma)\\frac{dT}{T}} \\tag{62}\\]

while a joint estimator for \\(T\\) given radiances in all \\(n\\) bands is

\\[\\langle T\\rangle=\\frac{\\int_{T_{min}}^{T_{max}}TP(T\\mid I_{i,i=1,n},\\sigma)\\frac{dT}{T}}{\\int_{T_{min}}^{T_{max}}P(T\\mid I_{i,i=1,n},\\sigma)\\frac{dT}{T}} \\tag{63}\\]

An estimator for the emissivity in band \\(i\\) is given by

\\[\\langle\\epsilon_{i}\\rangle=\\frac{\\int_{\\epsilon_{min}}^{\\epsilon_{max}}\\epsilon P(\\langle T\\rangle,\\epsilon\\mid I_{i},\\sigma)d\\epsilon}{\\int_{\\epsilon_{min}}^{\\epsilon_{max}}P(\\langle T\\rangle,\\epsilon\\mid I_{i},\\sigma)d\\epsilon} \\tag{64}\\]

This form has the advantage that estimates of the surface \\(T\\) are significantly less sensitive to discrepancies between sensed and modeled radiances than are estimates of emissivity. An estimate of \\(T\\) obtained from (58)-(63) with uninformative limits on emissivity may be close enough in practice for accurate emissivity retrievals by (64). (Equation (64) may be evaluated in closed form with elementary functions; however, the resulting expression is quite cumbersome and is omitted here.)
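As a concrete numerical illustration of (56)-(63), the following sketch evaluates the marginal posterior and the Jeffreys-weighted expectation value on a uniform grid of trial temperatures. It assumes the coefficient arrays `a`, `b`, `c` of (51)-(55) have already been tabulated from the forward model; the function names, the log-space bookkeeping, and the SciPy dependency are implementation choices, not part of the original algorithm.

```python
import numpy as np
from scipy.special import erf

def log_marginal_posterior_T(a, b, c, sigma, eps_min, eps_max):
    """Log of Eq. (58) on a grid of trial temperatures: the band
    emissivity has been integrated out between eps_min and eps_max."""
    s = np.sqrt(a / 2.0) / sigma
    H = erf(s * (eps_max + b / (2 * a))) - erf(s * (eps_min + b / (2 * a)))
    return (-0.5 * np.log(a)
            - (c - b**2 / (4 * a)) / (2 * sigma**2)
            + np.log(np.maximum(H, 1e-300)))       # guard against underflow

def expected_T(T_grid, log_post):
    """Eqs. (62)-(63): <T> under the posterior with the Jeffreys
    weight dT/T, on a uniform temperature grid."""
    w = np.exp(log_post - log_post.max()) / T_grid
    return float(np.sum(T_grid * w) / np.sum(w))

# The joint posterior over n bands, Eq. (61), is a product, i.e. a sum
# of the per-band log posteriors:
#   log_joint = sum(log_marginal_posterior_T(a_i, b_i, c_i, sigma, lo_i, hi_i)
#                   for each band i)
```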
## IV Monte Carlo Simulation of MODIS Land Surface Temperature Retrieval

A land surface temperature retrieval algorithm has been developed using the results just derived. While the intent of the work reported in this paper is to unshackle LST estimation from emissivity knowledge, the algorithm also retrieves emissivity estimates, and may be thought of as a TES algorithm if desired. It is intended to illustrate the application of Bayesian analysis to thermal remote sensing, and is not claimed to be optimal in either execution speed or accuracy. The requirement for only one forward model calculation per retrieval suggests that it will not impose an extreme computational burden in practice (even though the forward model to be described requires two MODTRAN calculations). The algorithm is used to simulate LST retrieval from a notional exoatmospheric sensor that records radiance from a patch of the Earth's surface at a specified signal-to-noise ratio (SNR). It is assumed that the dominant noise contribution arises from the shot noise of the radiance incident upon the sensor aperture.

In outline, the algorithm works as follows. Equation (58) gives the distribution of surface temperature consistent with observed radiances and the initial range of emissivities. This distribution typically differs from zero only within a narrow range about the true surface temperature. Within this range, (62) and (63) are used to compute \\(n+1\\) separate estimates of the surface temperature: one for each of the \\(n\\) bands, and one for all bands jointly. The actual surface temperature is assumed to lie between the extreme values of this set of expected values, which now determine the allowable range. The joint temperature distribution and the various expectation values for surface temperature are next refined using the contracted range of _a-priori_ credible surface temperatures in (58), now calculated with a finer temperature mesh. After a few iterations of this procedure, the different surface temperature expectation values obtained from (62) and (63) reliably converge to a single value lying close to the true surface temperature. A convergence radius \\(\\eta\\) of 0.01 K for the different estimates was used.

Emissivities are then obtained by substitution of the joint surface temperature estimate into the expression for band emissivity expectation values, (64). Equation (56) being a Gaussian distribution, it is possible to refine the _a-priori_ limits on credible emissivities by specifying a threshold of \\(m\\) standard deviations; six is used for the examples presented here. The revised _a-priori_ emissivity limits and surface temperature limits may then be used to restart the entire sequence just outlined, if desired. This additional iterative loop was repeated once in the simulation presented here. Surface temperature and emissivity values show only marginal changes as a result of the second iteration, indicating convergence of the retrieval.

The starting point is the posterior distribution for surface temperature (58). To compute it, the coefficients \\(a(T)\\), \\(b(T)\\), and \\(c\\) are obtained as a function of surface temperature. Isaacs two-stream MODTRAN4 calculations with SALB=1.0 supply those forward-model spectral quantities independent of surface temperature: attenuation along the line of sight, downwelling radiance at the surface, and upwelling radiance at TOA. The surface thermal emission component is computed directly from the Planck function and spectral attenuation.
A further MODTRAN calculation with SALB=0.0 is used to obtain estimates of scattered solar radiance and of the "scattered thermal" radiance component of the total radiance returned by MODTRAN4.

The calculation of \\(a(T)\\) and \\(b(T)\\) used in the retrievals departs from (51)-(53) in one regard. Approximation of the total TOA radiance computed by MODTRAN with the (never exact) Duntley equation becomes increasingly inaccurate as the surface temperature increases. The dominant contribution to the discrepancy is the surface emission portion of the scattered thermal radiance. Subtracting an estimate of this term gives a corrected TOA radiance in much better accord with the predictions of the Duntley equation. The estimate is calculated as a function of the unknown surface emissivity and the (necessarily erroneous) forward model boundary temperature, using the difference between the scattered thermal contributions computed with SALB=1.0 and SALB=0.0. Rather than subtracting the approximate scattered surface thermal radiance from the total MODTRAN radiance, the correction was implemented in a mathematically equivalent way by adding that portion of the scattered thermal radiance component linear in surface emissivity to the surface thermal emission terms in (51)-(53).

It should be noted that while the forward model assumes knowledge of atmospheric parameters, MODTRAN-computed quantities used in the forward model have no dependence upon true surface temperature or emissivity. The boundary temperature parameter used in MODTRAN affects the forward radiance calculations primarily through the scattered thermal contribution. In addition, MODTRAN adjusts the atmospheric temperature profile in the lower zones to interpolate smoothly between surface conditions and a fiducial layer in the atmosphere. For these reasons the forward model boundary temperature should not differ greatly from a physically reasonable value.

The algorithm is executed according to these steps:

1. Perform the forward model radiative transfer calculations with MODTRAN.
2. Calculate the individual band posterior probabilities, and the joint posterior probability over all bands, as a function of surface temperature with (58) and (61), contracting as necessary the range of \\(T\\) to that giving nonvanishing joint posterior probability in (61).
3. Calculate expectation values for surface temperature over the posterior probability for each band individually, and over the joint posterior probability. This calculation gives \\(n+1\\) surface temperature estimates.
4. Perform the convergence test \\(max|\\langle T_{i}\\rangle-\\langle T_{j}\\rangle|<\\eta\\) over all pairs of surface temperature estimates. If the convergence test is satisfied, proceed to step 5. Otherwise, iterate by repeating steps 2-4 (a driver loop for steps 2-4 is sketched below).
5. Compute expectation values for band emissivities using (64).
6. Adjust \\(\\epsilon_{min},\\epsilon_{max}\\) to \\(\\pm m\\) standard deviations about \\(\\langle\\epsilon_{i}\\rangle\\) for each band.
7. Repeat steps 2-6 if desired.

Monte Carlo simulations of LST retrieval in a selected subset of MODIS bands illustrate the performance of the algorithm. The bands chosen appear in Table 1. MODTRAN calculations are used both as simulated TOA radiances in MODIS bands and as the forward model. Each Monte Carlo realization of TOA radiance is calculated using a mid-latitude summer atmosphere, with the MODTRAN parameters listed in Table 2 selected as uniform deviates within the limits shown, including the "true" surface \\(T\\) and band emissivities.
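A driver for the iteration in steps 2-4 might look like the following sketch, reusing `expected_T` from the earlier listing. The callable `band_posteriors`, which is assumed to return the per-band log posteriors of (58) on a temperature grid, and all control parameters are hypothetical stand-ins for the paper's implementation.

```python
import numpy as np

def retrieve_T(T_lo, T_hi, band_posteriors, eta=0.01, n_grid=400, max_iter=50):
    """Iterate steps 2-4: contract the credible temperature range until
    all n + 1 estimates agree to within the convergence radius eta [K]."""
    for _ in range(max_iter):
        T_grid = np.linspace(T_lo, T_hi, n_grid)
        log_p = band_posteriors(T_grid)              # (n_bands, n_grid)
        ests = [expected_T(T_grid, lp) for lp in log_p]      # per band
        ests.append(expected_T(T_grid, log_p.sum(axis=0)))   # joint, Eq. (61)
        if max(ests) - min(ests) < eta:              # step 4 convergence test
            return ests[-1]
        # contract the a-priori range to the span of the estimates
        T_lo, T_hi = min(ests) - eta, max(ests) + eta
    raise RuntimeError("no convergence: an 'anomalous' case (see below)")
```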
It is unlikely that the atmospheric profile or other parameters required to specify a forward radiative transfer model will be reliably known to high accuracy. In order to simulate the effect of imperfect knowledge on the forward model, a second draw of random numbers is used to introduce errors in the fallible forward model, as shown in the last column of Table 2. Thus, for example, the surface visibility, which MODTRAN uses to parameterize aerosol effects, is chosen to lie between 5 and 30 km for each Monte Carlo realization; a random error of up to \\(\\pm 4\\) km is added to this value of visibility for use in the calculation of the fallible forward model for each realization. The MODTRAN model default water vapor profile, given as (grams precipitable water)/(kilograms air), is randomly scaled between the limits shown (subject to the constraint that relative humidity cannot exceed 100%). The range of perturbed forward model parameters is truncated at the limits specified in Table 2.

The forward model cannot, of course, incorporate knowledge of the true surface temperature, but it does require an initial guess for that quantity. This guess is obtained by varying the forward model boundary temperature randomly from "truth" by \\(\\pm 20K\\), without truncation. Inclusion of the scattered thermal radiance contribution in the forward model notably improves retrieval accuracy, despite boundary temperature uncertainties of this magnitude in the forward model.

\\begin{table}
\\begin{tabular}{l l l}
MODIS band & wavelength limits & notional SNR \\\\
\\hline
20 & 3.660-3.840 \\(\\mu m\\) & 350 \\\\
22 & 3.929-3.989 \\(\\mu m\\) & 350 \\\\
23 & 4.020-4.080 \\(\\mu m\\) & 350 \\\\
29 & 8.400-8.700 \\(\\mu m\\) & 1000 \\\\
31 & 10.870-11.280 \\(\\mu m\\) & 1000 \\\\
32 & 11.770-12.270 \\(\\mu m\\) & 1000 \\\\
\\end{tabular}
\\end{table}
Table 1: MODIS bands used in simulations.

\\begin{table}
\\begin{tabular}{l l l l}
MODTRAN parameter & minimum value & maximum value & fallible forward model error \\\\
\\hline
nadir view angle & 125 deg & 180 deg & \\(\\pm 0.125\\) deg \\\\
surface visibility & 5 km & 30 km & \\(\\pm 4\\) km \\\\
column water vapor & 0.33 \\(\\times\\) MLS & 1.00 \\(\\times\\) MLS & \\(\\pm 0.2\\) \\\\
thin cirrus altitude & 8 km & 12 km & \\(\\pm 0.5\\) km \\\\
thin cirrus thickness & 1 m & 20 m & \\(\\pm 25\\) m \\\\
thin cirrus opacity & 0.05/km & 0.2/km & \\(\\pm 0.025\\)/km \\\\
solar azimuth & 0 deg & 90 deg & \\(\\pm 0.125\\) deg \\\\
solar elevation & 20 deg & 60 deg & \\(\\pm 0.125\\) deg \\\\
viewing azimuth & 0 deg & 90 deg & \\(\\pm 0.125\\) deg \\\\
viewing elevation & 35 deg & 90 deg & \\(\\pm 0.125\\) deg \\\\
surface T & 268 K & 328 K & N/A \\\\
\\end{tabular}
\\end{table}
Table 2: Monte Carlo parameter ranges.

Next, the simulated MODIS TOA radiances are contaminated with notional sensor noise, simulated as a zero-mean Gaussian random process with standard deviation equal to the noise equivalent radiance \\(NE\\Delta R\\). The algorithm also requires an estimated variance for the noise radiance (\\(\\sigma\\) in (6)), which should be of order \\(NE\\Delta R\\). \\(NE\\Delta R\\) is parameterized in terms of a signal-to-noise ratio; the SNR was chosen to lie on the low end of values inferred from MODIS \\(NE\\Delta T\\) values[25] with the aid of the following estimate.
The error in TOA radiance from noise sources is estimated as, roughly,

\\[\\delta I=\\frac{\\partial I}{\\partial T}\\delta T+O(\\delta T)^{2}\\cong\\frac{\\partial I_{s}}{\\partial T}NE\\Delta T \\tag{65}\\]

where

\\[I_{s}=\\epsilon\\int_{\\Delta k}B_{k}(T)exp(-\\frac{\\tau_{k}}{\\mu})dk \\tag{66}\\]

is the attenuated surface thermal emission at TOA, leading to

\\[SNR=\\left(\\frac{\\delta I}{I}\\right)^{-1}\\cong\\left(\\frac{\\partial log(I_{s})}{\\partial T}NE\\Delta T\\right)^{-1} \\tag{67}\\]

This quantity was computed for each Monte Carlo realization; the SNR values used for the retrievals, which appear in Table 1, conservatively underestimate (67). Performance of the algorithm appears not too sensitive to the exact noise contamination added to the simulated band radiances, nor to the exact noise variance assumed in the retrieval, as long as neither is grossly erroneous. In fact, it is possible to adjust the assumed value of \\(\\sigma\\) in the estimator to contract or expand the range of viable surface temperatures consistent with sensor radiances without disastrously biasing the retrieval, as described below.

One thousand Monte Carlo realizations each were calculated for day and night, with a mid-latitude summer atmosphere. Generous bounds for the initial _a-priori_ limits on LST and band emissivity were assumed, subject to the physical upper bound on surface emissivity:

\\[\\begin{array}{c}200K\\leq T\\leq 500K\\\\ 0.75\\leq\\epsilon_{i}\\leq 0.99\\end{array} \\tag{68}\\]

Note that both limits for LST lie outside the range sampled by the Monte Carlo draws. Mean errors and error standard deviations for the retrieved surface temperatures and band emissivities, with respect to "true" values, appear in Table 3.

In the majority of cases, acceptable estimates of LST and band emissivities were obtained using all six bands from Table 1, with the SNR chosen to equal the assumed noise variance in the simulations, which appears in Table 1. However, in about 4% of the simulations it proved impossible to find an acceptable solution with all six bands in this manner. The solution instead tended to badly erroneous values (e.g., \\(T\\leq 100K\\), and emissivities pegged at the limits of the prior). Inspection of the posterior probabilities from (62) revealed that in these anomalous cases one or more of the individual band posterior probabilities fails to overlap significantly with the product of the remaining posterior distributions, leading to a joint posterior probability that effectively vanishes.

A number of remedies are available when this difficulty arises. The number of successful retrievals rises sharply when the range of the band emissivity prior is expanded to 0.7-0.999, but at the cost of reducing their overall accuracy somewhat. Experimentation shows that, in all of the anomalous cases, it is possible to get a satisfactory LST retrieval with some three-band subset of the original six; the LST so obtained can then be inserted into the expectation value for band emissivity to yield emissivities for all six bands. Finally, the support of the joint posterior probability can be broadened by increasing the noise radiance \\(\\sigma\\) in its calculation. This last approach was used to obtain Table 3. The effect on retrievals of increasing \\(\\sigma\\) for subsets of the bands differed negligibly from that of increasing \\(\\sigma\\) for all bands by the same factor.
The most intractable of the anomalous cases (one each nighttime and daytime) required increasing \\(\\sigma\\) by a factor of 7.0 in order to obtain an acceptable solution.

Examination of the \\(\\chi^{2}\\) statistic for retrieval errors shows that the estimator is (slightly) biased for normal, and significantly biased for anomalous, retrievals. For the 963 normal nighttime retrievals the mean and standard deviation of the surface temperature error are \\(\\delta T=-0.26\\pm 1.06K\\), with \\(\\chi^{2}=1.06\\) per degree of freedom. For 958 normal daytime retrievals, the corresponding figures are \\(\\delta T=-0.21\\pm 1.18K\\) and \\(\\chi^{2}=1.03\\) per degree of freedom. The values for 37 anomalous nighttime retrievals are \\(\\delta T=-1.71\\pm 1.32K\\) and \\(\\chi^{2}=2.64\\) per degree of freedom, with \\(\\delta T=-1.08\\pm 1.92K\\) and \\(\\chi^{2}=1.29\\) per degree of freedom for 42 anomalous daytime cases. However, the distribution of errors is very accurately Gaussian (as should be expected): if the mean error is subtracted before calculating \\(\\chi^{2}\\), the result is identically 0.999 per degree of freedom for surface temperature (and all six band emissivities) for all retrievals.

Forward models for the anomalous cases systematically have large errors in one or more of the randomly-varied simulation parameters. Thus, the surface visibility and the boundary temperature are both more likely to lie near the limits of their range than in the middle. In particular, the column water vapor scaling factor is over three times as likely to exceed 1.1 for anomalous retrievals as for normal ones (25/37 vs 203/963 nighttime; 27/42 vs 182/958 daytime), with only three daytime and two nighttime anomalous retrievals occurring for a column water vapor scaling less than unity. The frequency of anomalous retrievals appears, to some extent, to be an artifact of inserting large errors in the forward model to approximately simulate imperfect knowledge of atmospheric conditions. In any event, all 2000 Monte Carlo realizations led to a successful retrieval of both LST and six band emissivities.

\\begin{table}
\\begin{tabular}{l l l}
Case & Day & Night \\\\
\\hline
LST error (K) & \\(-0.25\\pm 1.23\\) & \\(-0.31\\pm 1.11\\) \\\\
\\(\\epsilon_{20}\\) error & \\(-0.004\\pm 0.022\\) & \\(-0.003\\pm 0.035\\) \\\\
\\(\\epsilon_{22}\\) error & \\(-0.009\\pm 0.034\\) & \\(+0.001\\pm 0.034\\) \\\\
\\(\\epsilon_{23}\\) error & \\(-0.008\\pm 0.048\\) & \\(-0.007\\pm 0.038\\) \\\\
\\(\\epsilon_{29}\\) error & \\(-0.004\\pm 0.031\\) & \\(-0.003\\pm 0.022\\) \\\\
\\(\\epsilon_{31}\\) error & \\(-0.005\\pm 0.023\\) & \\(-0.005\\pm 0.022\\) \\\\
\\(\\epsilon_{32}\\) error & \\(-0.007\\pm 0.028\\) & \\(-0.006\\pm 0.029\\) \\\\
\\end{tabular}
\\end{table}
Table 3: Monte Carlo simulation results: mean errors and standard deviations.

## V Discussion

Points which should be addressed in further developments of practical algorithms:

1. It appears that this approach to TES works largely because the range of plausible surface temperature values consistent with band radiances and an uninformative range of band emissivity is quite constricted, as a consequence of the strong temperature dependence of the Planck function. It turns out not to be terribly difficult to get a temperature estimate that is close enough to truth that it can be inserted into the least sophisticated imaginable estimator for band emissivity (64), and still lead to acceptably accurate results.
Once the algorithm has gotten to an iteration in which the current range of temperature and band emissivities is restricted to a neighborhood sufficiently close to the true values that the posterior distribution is jointly Gaussian in \\(T\\) as well as in the \\(\\epsilon_{i}\\), it is apparent both that convergence to the true values will occur as assumed in this algorithm, and that these values will maximize the likelihood. At present, however, there is no proof in hand that the procedure outlined above actually converges, or that, given that it does, it converges to the true surface temperature and emissivity combination. It appears to do both to good accuracy in practice. Nonetheless, the algorithm did initially fail to converge to an acceptable solution in about 4% of the realizations. As recounted in the previous section, it proved possible in every case to adapt the search strategy so as to successfully retrieve both LST and emissivities for all bands. The successful recovery strategies all had the effect of maximizing the numerical joint posterior probability, by some combination of 1) eliminating from the estimator band posterior probabilities whose effective nonvanishing support did not intersect that of the joint probability of the remaining bands, or forcing intersection by broadening the support of the outlier posterior probabilities, either by 2) loosening limits on the prior, or by 3) increasing the noise radiance parameter assumed in the retrieval (an automated version of this last strategy is sketched after this list). The solutions obtained are, in any event, not unique, because the TES problem is underdetermined. Considered as surrogates for maximum likelihood solutions, the algorithm solutions approximate only local maxima, and it might be possible to find maxima which give a very poor account of temperature and emissivity. This has not happened in the simulations performed to date.

2. The algorithm as presently formulated appears to be unnecessarily complicated. It seems certain that its operation can be significantly streamlined. For practical applications, it will be necessary to eliminate redundant elements of the calculation.

3. The model for band radiances in this paper treated them independently, apart from the prior knowledge that the surface temperature for all bands must be the same. Bretthorst [15],[16],[17] has addressed problems involving more sophisticated models for observations, in an approach which would appear to offer real advantages in the present context.

4. Perhaps the least satisfactory feature of this algorithmic approach is its dependence upon an accurate forward radiance model. To the extent that MODTRAN can be regarded as supplying radiance estimates which are zero-mean error estimates of the true radiance, the effect of radiance prediction error on this algorithm may simply be regarded as a contribution to the noise variance. But in real life, a forward model can be expected to have systematic errors that need not originate as unbiased stationary Gaussian processes. The question which has been addressed in this work is: given an accurate forward model (in the sense just described), what surface temperature and band emissivities are consistent with observed radiances and knowledge of their error statistics? A harder question, which will be the focus of further developments, is: given a fallible, but reasonably accurate, forward model, what surface temperatures and band emissivities can possibly be consistent with observations and available knowledge, no matter what the forward model error, so long as it falls within known limits?
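The third recovery strategy (broadening the support of the joint posterior by inflating the assumed noise radiance) is simple to automate. The sketch below uses an overlap threshold, an inflation factor, and a callable `build_log_bands` that are all hypothetical choices rather than values from the paper; the most stubborn cases in Section IV needed \\(\\sigma\\) scaled by a factor of about 7.

```python
import numpy as np

def joint_support_ok(log_bands, thresh=-30.0):
    """The joint posterior 'effectively vanishes' when the per-band
    posteriors fail to overlap: after shifting each band's log posterior
    to peak at 0, their sum then stays far below 0 everywhere."""
    shifted = log_bands - log_bands.max(axis=1, keepdims=True)
    return shifted.sum(axis=0).max() > thresh

def inflate_sigma(build_log_bands, sigma0, factor=1.5, max_tries=8):
    """Increase the assumed noise radiance until the joint posterior
    has nonvanishing support on the temperature grid."""
    sigma = sigma0
    for _ in range(max_tries):
        log_bands = build_log_bands(sigma)   # shape (n_bands, n_grid)
        if joint_support_ok(log_bands):
            return sigma, log_bands
        sigma *= factor
    raise RuntimeError("no overlapping support found")
```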
## VI Conclusion

A simple argument, based on inherent physical symmetries that the description of surface thermal emission must obey, leads to the appropriate prior probabilities for surface temperature and emissivity. These lead to the maximum entropy estimator for the mismatch between sensed and modeled radiance in the presence of noise, from which an estimator of surface temperature may be constructed that treats emissivity as a nuisance parameter. MODTRAN-based simulations show that temperature-emissivity separation is successfully performed by iteration between the temperature estimator and a similar estimator for surface emissivity.

## References

* [1] Dash, P., F.-M. Gottsche, F.-S. Olesen, and H. Fischer, "Land surface temperature and emissivity estimation from passive sensor data: theory and practice-current trends," _Int. J. Remote Sensing_, vol. 23, pp. 2563-2594, 2002
* [2] Wan, Z.-M., _MODIS Land-Surface Temperature Algorithm Theoretical Basis Document_, Institute for Computational Earth System Science, University of California, Santa Barbara, 1999
* [3] Li, Z.-L., F. Becker, M. P. Stoll, and Z. Wan, "Evaluation of Six Methods for Extracting Relative Emissivity Spectra from Thermal Infrared Images," _Rem. Sens. Env._, vol. 69, pp. 197-214, 1999
* [4] Kahle, A. B., and R. E. Alley, "Separation of Temperature and Emittance in Remotely Sensed Radiance Measurements," _Rem. Sens. Env._, vol. 42, pp. 107-111, 1992
* [5] Kealy, P. S., and S. J. Hook, "Separating Temperature and Emissivity in Thermal Infrared Multispectral Scanner Data: Implications for Recovering Land Surface Temperatures," _IEEE Trans. Geosci. Remote Sensing_, vol. 31, pp. 1155-1164, 1993
* [6] Petitcolin, F., and E. F. Vermote, "Land Surface Reflectance, Emissivity and Temperature from MODIS Middle and Thermal Infrared data," _Rem. Sens. Env._, vol. 83(1-2), pp. 112-134, 2002
* [7] Li, Z.-L., and F. Becker, "Feasibility of Land Surface Temperature and Emissivity Determination from AVHRR Data," _Rem. Sens. Env._, vol. 43, pp. 67-85, 1993
* [8] Watson, K., "Spectral Ratio Method for Measuring Emissivity," _Rem. Sens. Env._, vol. 42, pp. 113-116, 1992
* [9] Borel, C. C., and J. Szymanski, "Physics-based Water and Land Temperature Retrieval," in _Handbook of Science Algorithms for the Multispectral Thermal Imager_, B. W. Smith, Ed., Los Alamos National Laboratory and Savannah River Technology Center, 1998
* [10] Wan, Z.-M., and J. Dozier, "A generalized split-window algorithm for retrieving land-surface temperature from space," _IEEE Trans. Geosci. Remote Sensing_, vol. 34, pp. 892-905, 1996
* [11] Barducci, A., and I. Pippi, "Temperature and emissivity retrieval from remotely sensed images using the 'Grey body emissivity' method," _IEEE Trans. Geosci. Remote Sensing_, vol. 34, pp. 681-695, 1996
* [12] Wan, Z.-M., and Z.-L. Li, "A physics-based algorithm for land-surface emissivity and temperature from EOS/MODIS data," _IEEE Trans. Geosci. Remote Sensing_, vol. 35, pp. 980-996, 1997
* [13] Duntley, S. Q., "The Reduction of Apparent Contrast by the Atmosphere," _J. Opt. Soc. Am._, vol. 38, p. 179, 1948
* [14] It happens that Lambertian behavior is usually considered a good approximation in the LWIR. A Lambertian assumption is regarded as more questionable in the MWIR.
Looking ahead to the simulated MODIS retrievals presented later, Wan and Li [12] have examined this question for MODIS MWIR surface imaging bands (Bands 20, 22, and 23) and have concluded that in MWIR bands surfaces may be adequately approximated as Lambertian reflectors obeying Kirchhoff's law. For the present application, the relevant point is the applicability of Kirchhoff's law, rather than Lambert's.
* [15] Bretthorst, L., _Bayesian Spectral Analysis and Parameter Estimation_, Dissertation, Washington University, St. Louis, MO, 1987
* [16] Bretthorst, L., "Bayesian Spectrum Analysis and Parameter Estimation," in Berger, J., S. Fienberg, J. Gani, K. Krickenberg, and B. Singer, Eds., _Lecture Notes in Statistics_, **48**, Springer-Verlag, New York, 1988
* [17] Bretthorst, L., "Excerpts from Bayesian Spectrum Analysis and Parameter Estimation," in Erickson, G. J., and C. R. Smith, _Maximum-Entropy and Bayesian Methods in Science and Engineering, Volume 1: Foundations_, Kluwer, Dordrecht, 1988, pp. 75-145
* [18] Landau, L. D., and E. M. Lifshitz, _Statistical Physics_, Addison-Wesley: Reading, MA, 1958
* [19] Jaynes, E., "Prior Probabilities," _IEEE Trans. on Systems Science and Cybernetics_, vol. SSC-4, pp. 227-241, 1968
* [20] Jaynes, E., "The Well-Posed Problem," _Found. Physics_, vol. 3, pp. 477-493, 1973
* [21] Jaynes, E., "Marginalization and Prior Probabilities," in _Bayesian Analysis in Econometrics and Statistics_, A. Zellner, Ed., North-Holland Publishing Co.: Amsterdam, 1980
* [22] Misner, C. W., K. S. Thorne, and J. A. Wheeler, _Gravitation_, Freeman: San Francisco, 1973
* [23] Weinberg, S., _Gravitation and Cosmology_, John Wiley and Sons: New York, 1972
* [24] Einstein, A., "Zur Elektrodynamik bewegter Körper," _Ann. Physik_, vol. 17, pp. 891-921, 1905
* [25] Guenther, B., X. Xiong, V. V. Salomonson, W. L. Barnes, and J. Young, "On-orbit performance of the Earth Observing System Moderate Resolution Imaging Spectroradiometer; first year of data," _Rem. Sens. Env._, vol. 83, pp. 16-30, 2002
An approach to the remote sensing of land surface temperature is developed using the methods of Bayesian inference. The starting point is the maximum entropy estimate for the posterior distribution of radiance in multiple bands. In order to convert this quantity to an estimator for surface temperature and emissivity with Bayes' theorem, it is necessary to obtain the joint prior probability for surface temperature and emissivity, given available prior knowledge. The requirement that any pair of distinct observers be able to relate their descriptions of radiance under arbitrary Lorentz transformations uniquely determines the prior probability. Perhaps surprisingly, surface temperature acts as a scale parameter, while emissivity acts as a location parameter, giving the prior probability \[P(T,\epsilon\mid K)\,dT\,d\epsilon=\frac{const}{T}\,dT\,d\epsilon\] Given this result, it is a simple matter to construct estimators for surface temperature and emissivity. A Monte Carlo simulation of land surface temperature retrieval in selected MODIS bands is presented as an example of the utility of the approach. Remote Sensing, Land Surface Temperature, Sea Surface Temperature.
# Improved extremal optimization for the Ising spin glass A. Alan Middleton Department of Physics, Syracuse University, Syracuse, NY 13244 ###### Exploring the low temperature behavior of disordered materials, such as spin glasses and other random magnets [1], is quite challenging due to the very phenomena, glassy dynamics and multiple metastable states, that are important in such materials. Scaling arguments [2; 3; 4] indicate that many properties of the glassy state, including the scaling of the energy of excitations and correlation functions, can be found by studying the ground state and its response to perturbations. Significant effort has been invested in identifying models whose ground states can be computed in time polynomial in the system size [5]. Where no polynomial-time algorithm is known, exact and heuristic methods which take time exponential in system size are used. This enterprise is intimately connected with concepts developed in computer science, especially the distinction between P and NP-hard optimization problems [6]. The Ising spin glass (ISG) is a prototypical example of a disordered magnet. NP-hard problems such as the 3D ISG are, of course, particularly challenging. Exact methods for the 3DISG with Gaussian bond weights can solve \(12^{3}\)-spin samples with open boundary conditions [7]. Such sizes have not proven to be sufficiently large to decide between alternate pictures for the low-temperature behavior. Heuristic genetic methods mix configurations and can therefore generate large scale "moves": such methods are used for samples with \(14^{3}\) spins for \(\pm J\) couplings [8]. Heuristics with local moves generally have difficulty finding the exact ground state, due to the large barriers separating metastable states. Techniques such as flat histogram methods [9] can partially lower free energy barriers between metastable states. In this Communication, I study a modified version of extremal optimization (EO) [10]. EO is a local search algorithm that preferentially flips spins with low "fitness". The version presented here, "jaded" extremal optimization (JEO), increases the fitness of a spin by an amount proportional to the number of times it has been flipped. The goal of this adjustment is to reduce the repetition in exploring paths in configuration space, so that more possibilities can be quickly explored. Empirically, this simple change dramatically increases the effectiveness of the EO algorithm for finding ground states of two- and three-dimensional spin glass samples. As exact ground states are needed for studies of excitations and scaling, the algorithm is, for the most part, stringently tested by demanding that it find the ground states computed by exact methods. Both EO and JEO take time exponential in the system size to find the exact ground state, but the rate of growth is slower for JEO. Though JEO introduces an extra parameter, large improvements are achieved with only modest tuning. ## I Extremal optimization and extended algorithm A principal motivation for applying EO is to explore the energy landscape near the trial configuration by unconditionally modifying "unfit" variables. Preferentially (but not exclusively) changing variables with low fitness tends to raise the expected fitness while maintaining large fluctuations. The algorithm differs somewhat from traditional Monte Carlo algorithms, which conditionally select variables according to the expected improvement.
In EO, the potential moves are selected according to their rank by fitness, rather than a Boltzmann distribution by weight. A correspondence can be defined between fitness and the Hamiltonian for the Ising spin glass [10]. The Hamiltonian for spins \(s_{i}\), indexed by position \(i\), in a \(d\)-dimensional ISG of linear size \(L\) is \[H=-\sum_{\langle ij\rangle}J_{ij}s_{i}s_{j}, \tag{1}\] where \(J_{ij}\) are random bond strengths each chosen with probability \(P(J_{ij})=e^{-J_{ij}^{2}/2}/\sqrt{2\pi}\) for nearest neighbor spins with \(1\leq i,j\leq N=L^{d}\). When \(d=2\), algorithms with running times polynomial in \(N\) are available [11] to find the ground state. When \(d\geq 3\), finding the ground state energy is NP-hard, so that finding ground states for the worst-case choice of \(J_{ij}\) is expected to take time exponential in \(N\). In the context of EO, one choice for the fitness variable \(\lambda_{i}\) for a spin variable \(s_{i}\) is \[\lambda_{i}=\lambda_{i}^{0}\equiv s_{i}(\sum_{j\in U_{i}}J_{ij}s_{j}), \tag{2}\] where \(U_{i}\) is the set of unsatisfied bonds (\(s_{i}J_{ij}s_{j}<0\)) containing \(s_{i}\). (Allowing for site-dependent constant shifts \(\lambda_{i}^{0}\rightarrow\lambda_{i}^{0}+\kappa_{i}\) as in Ref. [12] did not affect the comparisons here.) The configuration energy is related to the fitness by \(H=-\frac{1}{2}\sum_{i}\lambda_{i}^{0}+\sum_{ij}|J_{ij}|\). Any increase in the fitness decreases the total energy. Given the fitness variables \(\lambda_{i}^{0}\), there are a variety of strategies one could employ to attempt to improve the total fitness. The simplest version of EO takes "greedy" steps: the algorithm repeatedly flips the least fit variable until a static state is achieved. The greedy method converges quite rapidly, but in a spin glass the convergence is to a local minimum that is generally quite far from the optimal solution, both in configuration of the \(\{s_{i}\}\) and often in energy per degree of freedom \(H/N\). Similar greedy approaches for decision problems such as SAT, which seeks truth assignments for Boolean formulas so that all clauses contain a true value, can be quite successful for given ensembles of problems [13]. An improved method, \(\tau\)-EO [10], sorts the spins by \(\lambda_{i}\) and chooses the \(m\)th spin in the list with probability proportional to \(m^{-\tau}\). This favors the choice of spins with low fitness, but allows for the occasional choice of sites with very high fitness. Fluctuations arising from the stochastic choice among spins with low fitness and the ranking of spins by the total weight of broken bonds, rather than energy improvement, allow the search to escape metastable states. It is argued [10] that for large systems, the optimal choice of \(\tau\) approaches \(\tau=1\). The extension considered in this paper (JEO) adjusts the fitness by an amount proportional to the number of times \(k_{i}\) that a site \(i\) has been previously chosen, that is, \[\lambda_{i}=\lambda_{i}^{\Gamma}\equiv\lambda_{i}^{0}+\Gamma k_{i}, \tag{3}\] where \(\Gamma\) is a site-independent "aging" parameter. The variables are sorted by \(\lambda_{i}^{\Gamma}\) and then selected by rank as in \(\tau\)-EO. The \(\tau\)-EO algorithm corresponds to the choice \(\Gamma=0\). Setting \(\Gamma\neq 0\) reduces the probability of selecting spins that have been flipped many times before.
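As a concrete illustration of this selection rule, here is a minimal Python sketch of one JEO move. It is my own illustrative code, not the paper's implementation: it recomputes and fully sorts the aged fitnesses at every step for clarity (the paper instead uses an approximate heap-based selection, described below), and the function names and data layout are assumptions.

```python
import numpy as np

def local_fitness(spins, J, neighbors):
    """lambda_i^0 = s_i * sum over unsatisfied bonds of J_ij * s_j, Eq. (2)."""
    lam = np.zeros(len(spins))
    for i in range(len(spins)):
        for j in neighbors[i]:                      # J is a symmetric bond matrix
            if spins[i] * J[i, j] * spins[j] < 0:   # bond (i, j) is unsatisfied
                lam[i] += spins[i] * J[i, j] * spins[j]
    return lam

def jeo_move(spins, J, neighbors, flips, tau=1.7, gamma=0.1, rng=None):
    """One JEO step: rank spins by aged fitness and flip the rank-m spin
    with probability proportional to m^(-tau)."""
    rng = rng or np.random.default_rng()
    lam = local_fitness(spins, J, neighbors) + gamma * flips   # Eq. (3)
    order = np.argsort(lam)                   # least fit (most negative) first
    n = len(spins)
    p = np.arange(1, n + 1, dtype=float) ** (-tau)
    p /= p.sum()
    i = order[rng.choice(n, p=p)]
    spins[i] = -spins[i]                      # unconditional flip
    flips[i] += 1                             # "aging" penalizes repeats
    return i
```

With `gamma=0` and the `flips` counter left at zero, the same routine performs a plain \(\tau\)-EO move.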
For configurations near (or in) the ground state, it is favorable for _some_ spins to have low fitness, in order that a number of other spins can maximize their fitness. When \(\Gamma=0\), these spins, which are actually in their ground state orientation relative to the other spins, will be flipped in futility. Shifting the \(\lambda_{i}\) during the algorithm also breaks the finite set of offsets between fitnesses of distinct spins that exist at \(\Gamma=0\) (due to the finite number of bond configurations at each site). This adaptive scheme has similarities to a variety of methods for solving problems such as SAT (satisfiability of sets of logical constraints) that disfavor repeated selection of the same move, such as Novelty [14] and variants of WALKSAT and GSAT [15; 16]. In contrast with these other schemes, the selection process in JEO is combined with the power law distribution for selecting ranked moves. Spin glasses with continuous disorder differ from SAT problems as they have less local degeneracy but also possess a global up-down symmetry, so that distinct methods may be appropriate. In order to select spins quickly, I used the approximate selection method described in Ref. [12]. The spins are stored in a heap structure [17] according to their current fitness. This structure is a tree that is relatively cheap to maintain (\(O(\log N)\) total cost to select a spin and update the tree). Each spin has a parent (except for the root) and at most two children. Each child is more fit than its parent and the root of the tree contains the least fit spin. This structure does not guarantee any other interlevel sorting, so that a spin \(i\) that is deeper in the tree than, but not a direct descendant of, a given spin \(i^{\prime}\) may have a lower fitness. The heap structure does maintain a useful approximate sorting, though. To select a spin to flip, a level \(\ell\) is selected with probability proportional to \(2^{-(\tau-1)\ell}\) and then a random spin within level \(\ell\) is chosen. The spin at this site is then inverted. The fitness of the neighboring spins is adjusted and the heap is updated using standard methods [17]. EO does not take advantage of the special structure of the 2D problem: it is not necessary or even expected that it will find the solution in time polynomial in the system size. Polynomial-time solvable problems have been used to study algorithms, for example, for hard mean-field problems [18]. For some classes of problems, heuristics can find solutions in polynomial time [19; 13]. In the 2DISG, large low-energy excitations may make local algorithms especially inefficient. ## II Performance of the algorithm In this section, I compare the performance of the extended EO algorithm, JEO, against \(\tau\)-EO as applied to Ising spin glasses with Gaussian disorder. When feasible, comparisons with ground states found using exact methods provide a precise and direct test for convergence. _Two-dimensional spin glass_. The 2DISG models are on a square lattice with \(L^{2}\) spins and open boundary conditions. To determine the 2D ground state, each sample is mapped [11] to a general weighted matching problem. The matching problem for a graph is to find a set of edges with minimal total weight such that each vertex belongs to exactly one edge.
The weighted graph for a 2DISG sample has edges dual to the lattice bonds, with weight \(|J_{ij}|\) for an edge that crosses a bond with weight \(J_{ij}\), and extra edges of weight zero that ensure that the frustration of each plaquette is maintained: unfrustrated (frustrated) plaquettes give an even (odd) number of the bonds dual to the edges of the plaquette in the matching. To find the minimum weight matching and hence the ground state energy for a 2DISG sample, I used the Blossom IV algorithm developed by Cook and Rohe [20]. The exact ground state energy of each 2DISG sample was input to the \(\tau\)-EO and JEO codes. When the heuristic codes found this energy, the codes terminated. The primary results from these computations were the distributions of the running times, measured in number of spin flips, to find the true ground state. The time to solution is a function of both the seed used to generate the sample and an independent "algorithm seed" used to generate the random initial configuration and to select spin flips. In a given sample, the distribution of times to find a ground state was roughly Poissonian. This suggests that restarting the algorithm with different initial configurations or seeds for selecting flips does not significantly decrease the mean running time. This conclusion was consistent with empirical trials of restarting the algorithm: the algorithm does not get stuck in history-dependent traps. Given a sample \(k\), the median \(t_{m}^{k}\) of the running time was estimated from the solution time for 100 algorithm seeds. The results reported here are for \(\overline{t}_{m}\), the sample mean of \(t_{m}^{k}\). The \(\Gamma=0\) data is in agreement with previous results for \(\tau\)-EO, with \(\overline{t}_{m}\) minimal at \(\tau\approx 1.5\). The results for the mean solution time \(\overline{t}_{m}\) for optimal \(\tau\) and \(\Gamma\) are summarized in Fig. 1. As suggested by the data plotted in Fig. 2, \(\overline{t}_{m}\) is not very sensitive to the exact choice of parameters, as long as \(\tau\) is in the range \(1.5<\tau<2.5\) and the optimal \(\Gamma\) (on the order of \(10^{-3}\) to \(10^{-1}\)) is found to within a factor of about 2, for the sizes studied here. The best running times for \(\tau\)-EO grow much more rapidly than those for JEO. For \(L=16\), JEO is of the order \(10^{4}\) times faster than \(\tau\)-EO. Extrapolation suggests that the advantage of JEO increases significantly with \(L\). For comparison, an exponential dependence \(\overline{t}_{m}=15\cdot 2^{L}\) is shown in Fig. 1. This function does a good job of describing the JEO data for \(L=4\) through \(L=32\). In separate runs, for comparison, the heuristic algorithm was terminated when the energy was within 1% of the exact ground state energy. These approximate solutions were found much more rapidly than exact solutions (\(\approx 10^{5}\) times faster for \(L=32\)). _Three-dimensional spin glass._ A similar comparison was carried out for 3DISG samples with Gaussian disorder. The \(L^{3}\) spins in the 3DISG samples lie on a cubic lattice with periodic boundary conditions. For 3DISG samples of size up to \(6^{3}\), the spin glass server at the University of Köln [21] (which applies branch-and-cut [5]) was used to generate exact solutions. The termination condition of the algorithm was modified, as exact ground states for the larger samples were not readily available.
All samples were simulated in parallel with \(n=10\) algorithm seeds. When the minimal record energies of eight (8) of the runs were identical, the algorithm was terminated. This criterion produced configurations equal to the exact solutions for all \(L=4,6\) samples (45 at each size). This suggests that true ground states were found with a high probability for \(L=8\) and possibly also \(L=10\). The summary results are plotted in Fig. 3. Given the termination criterion, JEO was of the order of \(10^{2}\) times faster than \(\tau\)-EO in converging to a potential solution for \(L=8\) samples. Very roughly, \(L=6\) samples were solved in \(\approx 10\) s on average both on the Köln spin glass server (a 400 MHz Sun Ultra) and using JEO (on a 1 GHz Intel P5). Further studies would be needed to provide better estimates of the confidence in the ground states and how to improve such confidence.

Figure 1: Plot of \(\overline{t}_{m}\), the sample mean of the median time to find the ground state, measured in spin flips, using \(\tau\)-EO (squares) and JEO (circles), for the 2DISG with optimal \(\tau\) and, for JEO, \(\Gamma\). The triangles indicate the same measure of time to find the ground state energy to within 1% accuracy. The line shows, for comparison, a running time exponential in \(L\), \(\overline{t}_{m}=15\cdot 2^{L}\), consistent with the results for JEO. The uncertainties are comparable to the symbol size.

Figure 2: Plot of \(\overline{t}_{m}\) for 2DISG samples of size \(L=8\), for \(\Gamma\) ranging from \(\Gamma=0\) (i.e., \(\tau\)-EO) through \(\Gamma=0.5\), as a function of the power law for rank selection, \(\tau\). For clarity, the error bars, which are of order 10% of the values for all points, are not shown. The solid lines are added only to group the points. Choosing \(\Gamma\approx 0.1\) and \(\tau\approx 2.0\) minimizes the run time.

## III Discussion JEO extends the extremal optimization algorithm of Boettcher and Percus by adaptively reducing the frequency of flipping previously selected spins. As a local move can lead to avalanche-like behavior, due to induced changes in the fitness of neighbors, this modification also reduces the frequency of flipping larger domains. This extension of EO does add a parameter, the aging parameter \(\Gamma\). However, a near-optimal value for \(\Gamma\) for each problem type at a given size can be found quickly, and less tuning of the parameter \(\tau\) is required than for \(\tau\)-EO. One possible avenue of exploration is to check whether avalanche regions correspond to important domains or excitations in the sample. Possible modifications of JEO include using a selection distribution with sharp cutoffs [22], rather than power-law distributions. Other schemes for reducing the fitness of frequently repeated moves could be considered, such as modifying the fitness using non-linear functions of the number of flips at a spin. Regardless of the exact details of the role of domains and possible improvements, empirical testing shows that the aging of the spins during state-space exploration greatly reduces the time for EO to find the ground state of the ISG in two and three dimensions. Though the 2D model was used to make a precise comparison with exact results, the exponential equilibration times for the 2D ISG using extremal optimization are consistent with those that would be seen for an NP-hard optimization problem with a similar local solution strategy.
It may be useful to use an algorithm like JEO to locally improve the configurations formed by whole-sample crossover in genetic algorithms [23]. As exact solutions for small samples can be found with confidence in a relatively small number of steps, in machine time very similar to that for branch-and-cut, this simple algorithm also provides a very convenient way to study small 3D samples. I thank Stefan Boettcher for discussions and comments. The Köln spin glass server was quite useful for this work. I thank the Kavli Institute for Theoretical Physics and the Schloss Dagstuhl Seminar (03381) for their hospitality. This work was supported in part by the National Science Foundation (grants DMR-0109164 and DMR-0219292). ## References * (1) _Spin Glasses and Random Fields_, A. P. Young, ed., (World Scientific, Singapore, 1998). * (2) P. W. Anderson and C. M. Pond, Phys. Rev. Lett. **40**, 903 (1978). * (3) A. J. Bray and M. A. Moore, J. Phys. C **17**, L463 (1984). * (4) D. S. Fisher and D. A. Huse, Phys. Rev. B **38**, 386 (1988). * (5) M. J. Alava, P. M. Duxbury, C. Moukarzel, and H. Rieger, _Phase Transitions and Critical Phenomena_, Vol. 18, C. Domb and J. L. Lebowitz, eds., (Academic Press, San Diego, 2001); A. K. Hartmann and H. Rieger, _Optimization Problems in Physics_ (Wiley-VCH, Berlin, 2002). * (6) M. R. Garey and D. S. Johnson, _Computers and Intractability_ (W. H. Freeman and Co., New York, 1979); C. H. Papadimitriou, _Computational Complexity_ (Addison-Wesley, 1994); O. C. Martin, R. Monasson, and R. Zecchina, Th. Comp. Sci. **265**, 3 (2001). * (7) M. Palassini, F. Liers, M. Jünger, and A. P. Young, Phys. Rev. B **68**, 064413 (2003). * (8) K. F. Pal, Physica A **233**, 60 (1996); A. K. Hartmann, Phys. Rev. E **60**, 5135 (1999); Eur. Phys. J. B **13**, 539 (2000). * (9) J.-S. Wang, Eur. Phys. J. B **8**, 287 (1998); F. Wang and D. P. Landau, Phys. Rev. Lett. **86**, 2050 (2001). * (10) S. Boettcher and A. G. Percus, Phys. Rev. Lett. **86**, 5211 (2001). * (11) F. Barahona, J. Phys. A **15**, 3241 (1982). * (12) S. Boettcher and A. G. Percus, Artificial Intelligence **119**, 275 (2000). * (13) E. Koutsoupias and C. H. Papadimitriou, Inf. Proc. Lett. **43**, 53 (1992). * (14) D. McAllester, B. Selman, and H. Kautz, in _Proc. 14th National Conference on A. I. (AAAI-97)_ (AAAI, 1997). * (15) I. P. Gent and T. Walsh, in _Proc. 11th National Conference on A. I. (AAAI-93)_ (AAAI, 1993). * (16) H. Kautz and B. Selman, in _IJCAI Proc. 1993_ (Morgan Kaufmann, San Francisco, 1993). * (17) T. H. Cormen, C. E. Leiserson, and R. L. Rivest, _Introduction to Algorithms_ (MIT Press, Cambridge, Mass., 1990). * (18) F. Ricci-Tersenghi, M. Weigt, and R. Zecchina, Phys. Rev. E **63**, 026702 (2001). * (19) J. Schwarz and A. A. Middleton, cond-mat/0309240. * (20) W. Cook and A. Rohe, INFORMS J. Comp. **11**, 138 (1999). * (21) Available at http://www.informatik.uni-koeln.de/ls_juenger/research/sgs/sgs.html. * (22) A. Franz, K. H. Hoffmann, and P. Salamon, Phys. Rev. Lett. **86**, 5219 (2001). * (23) J. Houdayer and O. C. Martin, Phys. Rev. Lett. **83**, 1030 (1999).

Figure 3: Plot of the sample average of the median running times for \(\tau\)-EO (squares) and JEO (circles) for the Gaussian Ising spin glass on a cubic lattice. The algorithm terminated when 8 of the minimal record energies agreed among 10 parallel runs.
The parameter \(\tau\) was fixed for JEO at a near-optimal \(\tau=1.7\), and near-optimal values \(\Gamma=0.1, 0.1, 0.05\) were used for \(L=4, 6, 8\), respectively. The gain for JEO over \(\tau\)-EO is approximately a factor of 100 at \(L=8\). The line shows \(\overline{t}_{m}=0.05\cdot 2^{3.4\cdot L}\), for a rough comparison.
A version of the extremal optimization (EO) algorithm introduced by Boettcher and Percus is tested on 2D and 3D spin glasses with Gaussian disorder. EO preferentially flips spins that are locally \"unfit\"; the variant introduced here reduces the probability to flip previously selected spins. Relative to EO, this adaptive algorithm finds exact ground states with a speed-up of order \\(10^{4}\\) (\\(10^{2}\\)) for \\(16^{2}\\)- (\\(8^{3}\\)-) spin samples. This speed-up increases rapidly with system size, making this heuristic a useful tool in the study of materials with quenched disorder.
# For A Lecture on Scientific Meteorology within Statistical ("Pure") Physics Concepts M. Ausloos SUPRATECS1, Institute of Physics, B5, University of Liège, B-4000 Liège, Belgium Footnote 1: SUPRATECS = Services Universitaires Pour la Recherche et les Applications Technologiques de Matériaux Électrocéramiques, Composites et Supraconducteurs ## I Introduction and Foreword This contribution to the 18th Max Born Symposium Proceedings cannot be seen as an extensive review of the connection between meteorology and various aspects of modern statistical physics. Space and time (and weather) limit its content. Much of what is found here should rather be considered to result from a biased viewpoint and the limited understanding of an author frustrated because the development of what he thought was a science (meteorology) turns out to be unsatisfactory to him and sometimes misleading. It seems that other approaches might be thought of, and new implementations carried forward. Some are surely made and understood by meteorologists but are not easily available in the usual physics literature. Thus the lines below may be addressed rather to physicists. The paper will be satisfactory if it attracts work toward a huge field of interest, one with very many publications and still many unanswered questions. As an immediate warning it is emphasized that deep corrections to standard models or actual findings can NOT be found here, nor are any even suggested. Only to be found is a set of basic considerations and reflections suggesting various lines of investigation, with the hope for some scientific aspects of meteorology in the spirit of modern statistical physics ideas. A historical point is in order. The author came into this subject starting from previous work in econophysics, when he observed that some "weather derivatives" were in use, and some sort of game initiated by the Frankfurt Deutsche Börse1 in order to attract customers who could predict the temperature in various cities within a certain lapse of time, and win some prize thereafter1. This subject therefore was obviously similar to predicting the S&P500 or other financial index values at a certain future time. Whence various techniques which were used in econophysics, like the detrended fluctuation analysis, the multifractals, the moving average crossing techniques, etc., could be attempted from scratch. Footnote †: This notion seems to be a measure that the energy suppliers could use to hedge their supply in adverse temperature conditions. Beside the weather (temperature) derivatives, other effects are of interest. Climate is said to be changing fast nowadays. Much is said and written about e.g. the ozone layer and the Kyoto "agreement". The El Ni\(\tilde{n}\)o system is a great challenge to scientists. Since some data is available in the form of time series, like the Southern Oscillation Index, it is of interest to look for trends, coherent structures, periods, correlations in noise, etc., in order to bring some knowledge, if possible basic parameters, to this meteorological field, and to import some modern statistical physics ideas into such climatological phenomena. It appeared that other data are available, like those obtained under various experiments put into force by various agencies, like the Atlantic Stratocumulus Transition Experiment (ASTEX) for ocean surfaces or those of the Atmospheric Radiation Measurement Program (ARM) of the US Department of Energy, among others.
Much data concerns cloud structure, e.g. the cloud base height evolution, liquid water paths, brightness temperature, etc.; it can often be freely downloaded from the web. Therefore many time series can be analyzed. However, the data is sometimes of rather limited value because of a lack of precision, or is biased because the raw data has already been transformed through models and arbitrarily averaged ("filtered"), whence sometimes even lacking the meaning it should contain. Therefore a great challenge is to sort out the wheat from the chaff in order to develop meaningful studies. In Sect. 2, I will comment on the history of meteorology, and observe that the evolution of such an old science is slow and limited by various _a priori_ factors. Some basic recall on clouds and their role in climate and weather will be made (Sect. 3). This should remind us that the first modern ideas of statistical physics were implemented in cloud studies through fractal \(geometry\). Indeed, modern and pioneering work on clouds is due to Lovejoy, who looked at the perimeter-area relationship of rain and cloud areas\({}^{2}\), the fractal dimension of their shape or ground projection. He discovered the statistical self-similarity of cloud boundaries through area-perimeter analyses of the geometry from satellite pictures. He found the fractal dimension \(D_{p}\simeq 4/3\) over a spectrum of 4 orders of magnitude in size, from small fair weather cumuli (\(\sim 10^{-1}\) km) up to huge stratus fields (\(\sim 10^{3}\) km). Occasional scale breaks have been reported\({}^{3,4}\) due to variations in cloudiness. Cloud size distributions have also been studied from a scaling point of view. It is hard to say whether there is perfect scaling\({}^{5-7}\); why should there be scaling? I will point out, as others do, basic and well-known pioneering work of modern essence, like the Lorenz model. It was conceived in order to improve predictability, but it turned out rather to be the basic nonlinear dynamical system describing chaotic behavior. This nevertheless brings fractal ideas, thus scaling laws and modern data analysis techniques, into meteorology work. I will recall most of the work to which I have contributed, being aware that I am failing to acknowledge many reports more important than those, for which I deeply apologize. There is a quite positive view of mine, however. Even though these techniques have not yet brought many codes implemented in weather and climate evolution prediction, it was recently stressed\({}^{8}\), in a sarcastic way ("Chaos: useful at last?"), that some applications of nonlinear dynamics ideas are finding their way into weather prediction\({}^{9}\), even though it has to be said that there is much earlier work on the subject\({}^{10}\). There are (also) very interesting lecture notes on the web for basic modules of meteorological training courses, e.g. available through the ECMWF website\({}^{11}\). But I consider that beyond the scientifically sound and highly sophisticated computer models, there is still space for simple, technical and useful approaches based on standard statistical physics techniques and ideas, in particular the scaling hypothesis for phase transitions\({}^{12}\) and percolation theory features\({}^{13}\). These constraints allow me to shorten the reference list! A few examples will be found in Sect. 4.
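As a concrete illustration of the perimeter-area analysis behind \(D_{p}\simeq 4/3\), the following sketch (my own; the thresholding rule, the function name and the minimum-area cut are assumptions, not Lovejoy's procedure) labels connected "cloud" regions in a 2D field and fits \(\log P\) against \(\log A\); since \(P\sim A^{D_{p}/2}\), the fitted slope is \(D_{p}/2\).

```python
import numpy as np
from scipy import ndimage

def perimeter_area_dimension(field, threshold):
    """Estimate D_p from the perimeter-area scaling P ~ A^(D_p/2)."""
    mask = field > threshold
    labels, nreg = ndimage.label(mask)           # connected cloud regions
    areas, perims = [], []
    for k in range(1, nreg + 1):
        region = (labels == k).astype(int)
        a = region.sum()
        # crude pixel perimeter: count exposed cell edges
        p = (np.abs(np.diff(region, axis=0)).sum()
             + np.abs(np.diff(region, axis=1)).sum()
             + region[0, :].sum() + region[-1, :].sum()
             + region[:, 0].sum() + region[:, -1].sum())
        if a > 10:                               # skip tiny regions
            areas.append(a)
            perims.append(p)
    slope, _ = np.polyfit(np.log(areas), np.log(perims), 1)
    return 2.0 * slope
```

A slope near 2/3, i.e. \(D_{p}\simeq 4/3\), would reproduce the self-similar boundaries quoted above; a compact, smooth boundary would instead give \(D_{p}\simeq 1\).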
At the end of this introduction, I would crown the paper with references to two outstanding scientists. First let me recall Friedmann\({}^{2}\), who said that "if you can't be a good mathematician, you try to become a good physicist, and those who can't become meteorologists". Another, Heisenberg, was surely aware of the errors and prediction difficulties resulting from models.\({}^{3}\) Both men should be guiding us to new endeavors, with modesty anyway. ## II Historical Introduction From the beginning of time, the earth, sky and weather have been of great concern. As soon as agriculture, commerce, and travelling on land and sea prevailed, men wished to predict the weather. Later on, airborne machines needed knowledge of the atmosphere and weather predictions for best flying. Nowadays much money is spent on weather predictions for sport activities. It is known how relevant (even \(fundamental\)!) the knowledge of weather (temperature, wind, humidity, ...) is, e.g. in sailing races or in Formula 1 and car rally races. Let me recall the importance of knowing and predicting the wind (strength and direction), pressure and temperature at high altitude for the (recent) non-stop round-the-world balloon trip. A long time ago, druids and other priests were the up-to-date meteorologists. It is known that many proverbs on weather derive from farmer observations; one of the most precise ones reads (in French) "Après la pluie, le beau temps" ("after the rain, fine weather"), which is (still) correct, in spite of the Heisenberg principle and modern scientific advances. After land travel and commerce, the control of the seas was of great importance for economic, whence political, reasons. Therefore there is no surprise in the fact that, at the time of a British Empire and the Dutch-Spanish-Portuguese rivalry, the first to draw sea wind maps was Halley\({}^{14}\). That followed the "classical" isobaths and isoheights (these are geometrical measures!!) for sailors needing to go through channels. Halley, having also invented the isogons (lines of equal magnetic field), drew in ca. 1701 the first trade wind and monsoon maps\({}^{14}\), over the seas\({}^{4}\). It may be pointed out that he did not know about Coriolis forces yet. A second major step for meteorology seems to be due to Karl Theodor, who between 1781 and 1792 was responsible for the Palatinate Meteorological Society. He invited (39) friends around the world\({}^{14}\), from Massachusetts to the Urals, to make three measurements per day and report them to him, in order to later publish "Ephemerides" in Mannheim. I am very pleased to point out that Heinrich Wilhelm Brandes (1777-1834), Professor of Mathematics and Physics at the University of Breslau, was the first\({}^{14}\) who had the idea of displaying weather data (temperature, air pressure, and so on) on geographical maps\({}^{5}\). Later von Humboldt (1769-1859) had the idea to connect points in order to draw isotherms\({}^{14}\). No need to say that this was a bold step, the first truly predictive step implying quantitative thermodynamic data. Most likely a quite incorrect result. It is well known nowadays that various algorithms will give various isotherms, starting from the same temperature data and coordinate table. In the same line of lack of precision, there is no proof at all that the highest temperature during the 2003 summer was 42.6\({}^{\circ}\)C at the point of measurement at Córdoba Airport on Aug. 12, 2003.
There is no proof that the highest temperature was 24.9\({}^{\circ}\)C in Poland, in Warsaw-Okęcie, on that day in 2003. There is no proof that we will ever know the lowest temperature in Poland in 2003 either. In fact, the maximum and minimum temperatures as defined in meteorology\({}^{15,16}\) are far from the ones acceptable in physics laboratories. Therefore, what isotherms are drawn from such data? They connect data points whose values are obtained at different times! What is their physical meaning? One might accept to consider that isotherms result from some truly averaged temperature during one day (!) at some location (?). What is an average temperature? In meteorology, it is NOT the ratio of the integral of the temperature distribution function over a time interval to that time interval\({}^{15,16}\). Nor is it a spatial mean. In fact, what is a "mean temperature" for a city, a country, for the world? One might propose that one has to measure the temperature everywhere and continuously in time, then make an average. The questions are not only how many thermometers one needs, but also what precision is needed. Does one need to distribute the thermometers homogeneously? What about local peculiarities, like nuclear plants? Should we need a fractal distribution of thermometers in space? Might one use a Monte Carlo approach to locate them, such that statistical theories give some idea of the error bars? What error bars? There is never an error bar given on weather maps, in newspapers or on TV and radio, and rarely in scientific publications. Errors are bad! and forgotten. Rarely is a certitude (or risk) coefficient mentioned. It might not be necessary for the public, but yet we know, and it will be recalled later, that for computer work and predictions the initial values should be well defined. Therefore it seems essential to concentrate on predicting the uncertainty in forecast models of weather and climate, as emphasized elsewhere\({}^{17,18}\). ## III Climate and Weather. The Role of Clouds Up to von Humboldt there was no correlation discussed, no model of weather, except for qualitative considerations through the influence of the earth's rotation, moon phases, or the locations of Saturn, Venus, Jupiter or constellations, etc. However, the variables of interest were becoming known, but predictive meteorology and more generally climate (description and) forecasting still needed better observational techniques, data collecting, subsequent analysis[19], and model outputs. Earth's climate is clearly determined by complex interactions between sun, oceans, atmosphere, land and biosphere[20, 21]. The composition of the atmosphere is particularly important because certain gases, including water vapor, carbon dioxide, etc., absorb heat radiated from Earth's surface. As the atmosphere warms up, it in turn radiates heat back to the surface, which increases the earth's "mean surface temperature" by some 30 K above the value that would occur in the absence of a radiation-trapping atmosphere[21]. Note that perturbations in the concentration of the radiatively active gases do alter the intensity of this effect on the earth's climate. Understanding the processes and properties that affect atmospheric radiation and, in particular, the influence of clouds and the role of cloud radiative feedback are issues of great scientific interest[22, 23].
This leads to efforts to improve not only models of the earth's climate but also predictions of climate change[24], as understood over long time intervals, in contrast to the shorter time scales of weather forecasting. In fact, with respect to climatology the situation is very complicated because one does not even know what the evolution equations are. Since controlled experiments cannot be performed on the climate system, one relies on using ad hoc models to identify cause-and-effect relationships. Nowadays there are several climate models belonging to many different centers[25]. Their web sites sometimes carry not only the model output used to make images but also the source code. It seems relevant to point out here that the stochastic resonance idea was proposed to describe climate evolution[26]. Phenomena of interest occurring on short (!) time and small (!) space scales, whence the weather, are represented through atmospheric models with a set of nonlinear differential equations based on the Navier-Stokes equations[27] for describing fluid motion, in terms of mass, pressure, temperature, humidity, velocity and energy exchange, including solar radiation, whence for predicting the weather\({}^{6}\). It should be remembered that solutions of such equations forcefully depend on the initial conditions and on the steps of integration. Therefore great precision on the temperature, wind velocity, etc. cannot be expected, and the solution(s) can look like a mess after a few numerical steps\({}^{29}\). The Monte Carlo technique suggests introducing successively a set of initial conditions, performing the integration of the differential equations, and making an average thereafter\({}^{29}\). If some weather map is needed, a grid is used with constraints on the nodes, but obviously the precision (!) is not remarkable; but who needs it? Footnote 6: It is fair to mention Sorel's work\({}^{28}\) about the general motion of the atmosphere in order to explain strong winds on the Mediterranean sea. The quoted reference has some interesting introduction about previous work. It is hereby time to mention Lorenz's\({}^{30}\) famous pioneering work, which simplified the Navier-Stokes equations\({}^{7}\). However, predicting the result of complex nonlinear interactions taking place in an open system is a difficult task\({}^{32}\). Footnote 7: Beautiful and thought-provoking illustrations can be found on various websites\({}^{31}\), demonstrating Poincaré cross sections, strange attractors, cycles, bifurcations, and the like. Much attention has been paid recently\({}^{33,34}\) to the importance of the main components of the atmosphere, in particular clouds (see Appendix A), and of water in its three forms -- vapor, liquid and solid -- for buffering the global temperature against reduced or increased solar heating\({}^{35}\). Owing to its special properties, it is believed that water establishes lower and upper boundaries on how far the temperature can drift from today's. However, the role of clouds and water vapor in climate change is not well understood. In fact, there may be positive feedback between water vapor and other greenhouse gases. Studies suggest that the heliosphere influences the Earth's climate via mechanisms affecting the cloud cover [36, 37]. Surprisingly, the influence of solar variability is found to be strong on low clouds (3 km), whence pointing to a microphysical mechanism involving aerosol formation enhanced by ionization due to cosmic rays.
At time scales of less than one day, significant fluxes of heat, water vapor and momentum are exchanged due to entrainment, radiative transfer, and/or turbulence [21, 38, 39, 40, 41, 42]. The turbulent character of the motion in the atmospheric boundary layer (ABL) is one of its most important features. Turbulence can be caused by a variety of processes, like thermal convection, or be mechanically generated by wind shear, or follow interactions influenced by the rotation of the Earth [39, 41]. This complexity of physical processes and of the interactions between them creates a variety of atmospheric formations. In particular, in a cloudy ABL the radiative fluxes produce local sources of heating or cooling within the mixed layer and therefore can greatly influence its turbulent structure and dynamics, especially at the cloud base. The atmospheric boundary layer is defined by its inner (surface) layer [20, 21, 38, 39]. In an unstably stratified ABL, the dominating convective motions are generated by strong surface heating from the Sun or by cloud-top radiative cooling processes [21]. In contrast, a stably stratified ABL occurs mostly at night, in response to the surface cooling due to long-wavelength radiation emitted into space. In the presence of clouds (shallow cumulus, stratocumulus or stratus) the structure of the ABL is modified because of the radiative fluxes. Thermodynamical phase changes become important. During cloudy conditions one can distinguish mainly: (i) the case in which the cloud and the sub-cloud layers are fully coupled; (ii) two or more cloud layers beneath the inversion, with the lower layer well mixed and an upper elevated layer decoupled from the surface mixed layer; or (iii) a radiation-driven elevated mixed cloud layer, decoupled from the surface. Two practical cases can be considered: the marine ABL and the continental ABL. The former is characterized by a high concentration of moisture. It is wet, mobile and has a well expressed lower boundary. The competition between the processes of radiative cooling, entrainment of warm and dry air from above the cloud, and turbulent buoyancy fluxes determines the state of equilibrium of the cloud-topped marine boundary layer [21]. The continental ABL is usually drier and less mobile, with better defined lower and upper boundaries. Both cases have been investigated for their scaling properties [43, 44, 45]. ## IV Modern statistical physics approaches The modern paradigm in statistical physics is that systems obey "universal" laws due to the underlying nonlinear dynamics, independently of microscopic details. Therefore one can search in meteorology for characteristic quantities obtained using modern statistical physics methods, as done in other laboratory or computer investigations. To distinguish cases and patterns due to "external field" influences from mere self-organized situations in geophysics phenomena [46] is indeed not obvious. What sort of feedback can be found, or neglected? Is the equivalent of the chicken-and-egg priority problem easily solved in geophysics? The coupling between human activities and deterministic physics is hard to model in simple terms [47], or can even be rejected [48]. Due to the nonlinear physics laws governing the phenomena in the atmosphere, the time series of atmospheric quantities are usually non-stationary [48, 49], as revealed by Fourier spectral analysis, which is usually the first technique to use.
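Since Fourier spectral analysis is mentioned as the first tool to apply, a minimal sketch may be useful. The function below is my own illustrative code with assumed names; in practice one would window the data and log-bin the periodogram. It fits \(S(f)\sim f^{-\beta}\) to a raw periodogram; a value \(\beta>1\) signals the kind of nonstationarity discussed below.

```python
import numpy as np

def spectral_exponent(x, dt=1.0):
    """Fit S(f) ~ f^(-beta) to the periodogram of a time series."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()                        # remove the zero-frequency component
    spec = np.abs(np.fft.rfft(x)) ** 2      # raw periodogram
    freq = np.fft.rfftfreq(len(x), d=dt)
    keep = freq > 0                         # drop the f = 0 bin before the log fit
    slope, _ = np.polyfit(np.log(freq[keep]), np.log(spec[keep]), 1)
    return -slope                           # beta
```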
Recently, new techniques have been developed that can systematically eliminate trends and cycles in the data and thus reveal intrinsic dynamical properties such as correlations that are very often masked by nonstationarities [50, 51]. Whence many studies reveal long-range power-law correlations in geophysics time series [46, 49, 52], in particular in meteorology [53, 54, 55, 56, 57, 58, 59, 60]. Multi-affine properties [45, 61, 62, 63, 64, 65, 66, 67, 68, 69] can also be identified, using singular spectrum and/or wavelet analysis. There are different levels of essential interest for sorting out correlations from data, in order to increase the confidence in predictability [70]. There are investigations based on long-, medium-, and short-range horizons. The \(i\)-diagram variability (\(iVD\)) method allows one to sort out some short-range correlations. The technique has been used on a liquid water cloud content data set taken from the Atlantic Stratocumulus Transition Experiment (ASTEX) 92 field program [71]. It has also been shown that the random matrix approach can be applied to the empirical correlation matrices obtained from the analysis of the basic atmospheric parameters that characterize the state of the atmosphere [72]. The principal component analysis technique is a standard technique [73] in meteorology and climate studies. The Fokker-Planck equation for describing the liquid water path [74] is also of interest. See also some tentative search for power-law correlations in the Southern Oscillation Index fluctuations characterizing El Ni\(\tilde{n}\)o [75]. But there are many other works of interest [76]. ### Ice in cirrus clouds In clouds, ice appears in a variety of forms and shapes, depending on the formation mechanism and the atmospheric conditions [3, 4, 41, 61, 77, 78]. The cloud inner structure, content, temperature, lifetime, etc. can be studied. In cirrus clouds, ice crystals form at temperatures colder than about \(-40^{\circ}\)C. Because of the vertical extent, from about 4 to 14 km and higher, and the layered structure of such clouds, one way of obtaining information about their properties is mainly by using ground-based remote sensing instruments (see Appendix B), searching for the statistical properties (and correlations) of the radio wave signal backscattered from the ice crystals. This backscattered signal received at the radar receiver antenna is known to depend on the ice mass content and the particle size distribution. Because of the vertical structure of the cirrus cloud, it is of interest to examine the time correlations in the scattered signal on the horizontal boundaries, i.e., the top and bottom, and at several levels within the cloud. We have reported [60] on the DFA correlations in the fluctuations of radar signals obtained at isodepths of \(winter\) and \(fall\) cirrus clouds. In particular we have focussed attention on three quantities: (i) the backscattering cross-section, (ii) the Doppler velocity and (iii) the Doppler spectral width. They correspond to the physical coefficients used in the Navier-Stokes equations to describe flows, i.e. bulk modulus, viscosity, and thermal conductivity. It was found that power-law time correlations exist, with a crossover between regimes at about 3 to 5 min, but also \(1/f\) behavior characterizing the top and bottom layers and the bulk of the clouds. The underlying mechanisms for such correlations likely originate in ice nucleation and crystal growth processes. ### Stratus clouds In another case, i.e.
for stratus clouds, long-range power-law correlations [55, 59] and multi-affine properties [44, 45, 67] have been reported for the liquid water fluctuations, besides the spectral density [79]. Interestingly, stratus cloud data retrieved from the radiance, recorded as brightness temperature\({}^{8}\), at the Southern Great Plains central facility, operated in the vertically pointing mode\({}^{80}\) (see Appendix B for a brief technical note on instrumentation), indicated for the Fourier spectrum \(S(f)\sim f^{-\beta}\) a \(\beta\) exponent equal to \(1.56\pm 0.03\), pointing to a nonstationary time series. The DFA statistical method applied to the stratus cloud brightness microwave recording\({}^{55,81}\) indicates the existence of long-range power-law correlations over a two-hour time span. Footnote 8: http://www.phys.unm.edu/~duric/phy423/l1/node3.html Contrasts in behavior, depending on the season, can be pointed out. The DFA analysis of liquid water path data measured in April 1998 gives a scaling exponent \(\alpha=0.34\pm 0.01\) holding from 3 to 60 minutes. This scaling range is shorter than the 150 min scaling range\({}^{55}\) for a stratus cloud in January 1998 at the same site. For longer correlation times a crossover to \(\alpha=0.50\pm 0.01\) is seen up to about 2 h, after which the statistics of the DFA function are not reliable. However, a change from a Gaussian to a non-Gaussian fluctuation regime has been clearly identified for the cloud structure changes, using a finite-size (time) interval window. It has been shown that the DFA exponent turns from a low value (about 0.3) to 0.5 before the cloud breaks. This indicates that the stability of the cloud, represented by antipersistent fluctuations, is (for some reason unknown at this level) turning into a system for which the fluctuations are similar to a pure random walk. The same type of finding was observed for the so-called Liquid Water Path\({}^{9}\) of the cloud. Footnote 9: The liquid water path (LWP) is the amount of liquid water in a vertical column of the atmosphere; it is measured in cm\({}^{-3}\); sometimes in cm!! The value of \(\alpha\approx 0.3\) can be interpreted as the \(H_{1}\) parameter of the multifractal analysis of liquid water content [44, 45, 62] and of liquid water path [67]. Whence, the appearance of broken clouds and clear sky following a period of thick stratus can be interpreted as a non-equilibrium transition or a sort of fracture process in more conventional physics. The existence of a crossover suggests two types of correlated events, as in classical fracture processes: nucleation and growth of diluted droplets. Such a marked change in persistence implies that specific fluctuation correlation dynamics should be usefully inserted as ingredients in _ad hoc_ models. The non-equilibrium nature of the cloud structure and content [82] should receive some further thought henceforth. It would have been interesting to have other data on the cloud in order to understand the cause of the change in behavior. ### Cloud base height The variations in the local \(\alpha\)-exponent ("multi-affinity") suggest that the nature of the correlations changes with time, a so-called intermittency phenomenon. The evolution of the time series can be decomposed into successive persistent and anti-persistent sequences.
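The DFA exponents quoted in this section can be reproduced with a short routine. The sketch below is a generic first-order DFA, my own minimal implementation rather than the code used in the cited studies: integrate the mean-removed series into a profile, detrend it linearly in non-overlapping boxes of size \(n\), and fit \(F(n)\sim n^{\alpha}\).

```python
import numpy as np

def dfa_exponent(signal, scales):
    """First-order detrended fluctuation analysis; returns alpha."""
    y = np.cumsum(np.asarray(signal, dtype=float) - np.mean(signal))  # profile
    F = []
    for n in scales:                      # box sizes, e.g. range(4, len(y) // 10)
        nseg = len(y) // n
        ms = []
        for k in range(nseg):
            seg = y[k * n:(k + 1) * n]
            t = np.arange(n)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear trend
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))    # fluctuation function F(n)
    alpha, _ = np.polyfit(np.log(scales), np.log(F), 1)
    return alpha
```

In this convention \(\alpha=0.5\) corresponds to uncorrelated, random-walk-like fluctuations, \(\alpha<0.5\) to the antipersistence invoked above, and \(\alpha>0.5\) to persistence; the scales might run from a few points up to a small fraction of the series length.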
It should be noted that the intermittency of a signal is related to the existence of extreme events, thus a distribution of events away from a Gaussian distribution, in the evolution of the process that has generated the data. If the tails of the distribution function follow a power law, then the scaling exponent defines the critical order value after which the statistical moments of the signal diverge. Therefore it is of interest to probe the distribution of the fluctuations of a time-dependent signal \(y(t)\) prior to investigating its intermittency. Much work has been devoted to the cloud base height [64, 65, 66], under various ABL conditions, and to the LWP [67, 74]. Neither the distribution of the fluctuations of liquid water path signals nor that of the cloud base height appears to be Gaussian. The tails of the distribution follow a power law, pointing to "large events" also occurring in the meteorological (space and time) framework. This may suggest routes for other models. ### Sea Surface Temperature Time series analysis methods searching for power-law exponents allow one to look from specific viewpoints, like atmospheric [83] or sea surface temperature fluctuations [84]. These are of importance for weighing their impacts on regional climate, whence finally for greatly increasing the predictability of precipitation during all seasons. Currently, scientists rely on climate patterns derived from global sea surface temperatures (SST) to forecast precipitation, e.g. during the U.S. winter. For example, rising warm moist air creates tropical storms during El Ni\(\tilde{n}\)o years, a period of above-average temperatures in the waters of the central and eastern tropical Pacific. While the tropical Pacific largely dictates fall and winter precipitation levels, the strength of the SST signal falls off from spring through the summer. For that reason, summer climate predictions are very difficult to make.
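Before turning to the SOI example below, the tail analysis just described can be sketched in code. This is an illustrative Hill-type estimator, a standard substitute for the procedures of the cited works; the 5% tail fraction and the function name are my own assumptions. It estimates the exponent \(\mu\) of a power-law tail \(P(|\Delta y|>x)\sim x^{-\mu}\), beyond which order the statistical moments diverge.

```python
import numpy as np

def hill_tail_exponent(increments, tail_fraction=0.05):
    """Hill estimator for mu in P(|dy| > x) ~ x^(-mu); assumes a long series."""
    x = np.sort(np.abs(np.asarray(increments, dtype=float)))[::-1]  # largest first
    k = max(int(tail_fraction * len(x)), 10)    # number of extreme events kept
    mu = k / np.sum(np.log(x[:k] / x[k]))       # threshold is the (k+1)-th value
    return mu
```

For a Gaussian signal the same construction yields an effective \(\mu\) that keeps growing as the tail fraction is reduced instead of stabilizing, which is one quick way to distinguish the two cases.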
In fact, it would be of interest for predictability models to examine whether the long range fluctuations belong to a Levy-like or Tsallis or rather than to a Gaussian distribution as in many self organized criticality models. This answer, if positive, would enormously extend the predictability range in weather forecast. **Acknowledgments** Part of this studies have been supported through an Action Concertee Program of the University of Liege (Convention 02/07-293). Comments by A. Pekalski, N. Kitova, K. Ivanova and C. Collette are greatly appreciated. **Appendix A. CLOUDS** It may be of interest for such a type of proceedings to define clouds, or at least to review briefly cloud classifications. Clouds can be classified by theiraltitude 1. High clouds: High clouds are those having a cloud base above 6000 m, where there is little moisture in the air. Typically they contain ice crystals, often appear thin and wispy, and sometimes appear to create a halo around the sun or the moon. High clouds are Cirrus (Ci), Cirrostratus (Cs), Cirrocumulus (Cc) and Cumulonimbus (Cb) 2. Middle clouds: Middle clouds have a cloud base between 2000 m and 6000 m. These clouds are usually composed of water droplets, but sometimes contain ice crystals, if the air is cold enough. Middle clouds are Altostratus (As) and Altocumulus (Ac). 3. Low clouds: Low clouds have a cloud base at less than 2000 m and appear mostly above the sea surface. Usually composed of water droplets except in winter at high latitudes when surface air temperature is below freezing. Low clouds are Stratus (St), Stratocumulus (Sc) and Cumulus (Cu). \\[\\text{or by their shape:}\\] 1. Cirrus: The cirrus are thin, hair-like clouds found at high altitudes. 2. Stratus: The stratus clouds are layered clouds with distinguished top and bottom. Found at all altitudes, they generally thinner when high: (i) High: The high stratus are called Cirrostratus; (ii) Middle: The middle stratus are called Altostratus; (iii) Low: The low ones are stratus and nimobstratus 3. Cumulus: The cumulus clouds are fluffy heaps or puffs. They are found at all altitudes; generally they are lighter and smaller when become high: (i) High: The high cumulus are called Cirrocumulus; (ii) Middle: The middle cumulus are called Altocumulus; (iii) Low: The low ones are Stratocumulus and Cumulus. 4. Cumulonimbus: The cumulonimbus clouds are thunder clouds. Some large cumulus clouds are height enough to reach across low, middle and even high altitude. Note that due to their relatively small sizes and life time cumulus clouds produce short time series for remote sensing measurements (see App. B). Therefore such clouds and data series are not often suitable for many techniques mentioned in this report. It is fair to point out on such clouds the study pertaining to a phenomenon of Abelian nature, rain fall[85, 86]. Much work has been devoted to rain of course, see e.g. Andrade et al.[86, 87, 88] or Lovejoy et al.[89, 90, 91, 92] **APPENDIX B. Experimental techniques and data acquisition** Quantitative observations of the atmosphere are made in many different ways. Experimental/observational techniques to study the atmosphere rely on physical principles. One important type of observational techniques is that of _remote sounding_, which depends on the detection of electromagnetic radiation emitted, scattered or transmitted by the atmosphere. The instruments can be placed at aircrafts, on balloons or on the ground. Remote-sounding techniques can be divided into _passive_ and _active_ types. 
**Appendix B. Experimental techniques and data acquisition**

Quantitative observations of the atmosphere are made in many different ways. Experimental and observational techniques for studying the atmosphere rely on physical principles. One important class of observational techniques is _remote sounding_, which depends on the detection of electromagnetic radiation emitted, scattered or transmitted by the atmosphere. The instruments can be placed on aircraft, on balloons or on the ground. Remote-sounding techniques can be divided into _passive_ and _active_ types.

In passive remote sounding, the radiation measured is of natural origin, for example the thermal radiation emitted by the atmosphere, or solar radiation transmitted or scattered by the atmosphere. Most space-borne remote-sounding methods are passive. In active remote sounding, a transmitter, e.g. a radar, is used to direct pulses of radiation into the atmosphere, where they are scattered by atmospheric molecules, aerosols or inhomogeneities in the atmospheric structure. Some of the scattered radiation is then detected by a receiver. Each of these techniques has its advantages and disadvantages. Remote sounding from satellites can give near-global coverage, but provides only averaged values of the measured quantity over large regions, of the order of hundreds of kilometers in horizontal extent and several kilometers in the vertical direction. Ground-based radars can provide data with very high vertical resolution (by measuring small differences in the time delays of the return pulses), but only above the radar site. For a presentation of remote sensing techniques the reader can consult many authors [92, 93, 94, 95, 21, 80], or the ARM site [96, 97, 98].

For example, microwave radiometers work at frequencies of 23.8 and 31.4 GHz. At the DOE ARM program SGP central facility, in the vertically pointing mode, the radiometer makes sequential 1 s radiance measurements in each of the two channels while pointing vertically upward into the atmosphere. After collecting these radiances, the radiometer mirror is rotated to view a blackbody reference target. For each of the two channels, the radiometer records the radiance from the reference, immediately followed by a measurement of the combined radiance from the reference and a calibrated noise diode. This measurement cycle is repeated once every 20 s. Note that clouds at 2 km altitude moving at 10 m s\\({}^{-1}\\) take about 15 s to advect through a radiometer field of view of approximately 5\\({}^{\\circ}\\); a short numerical check of this estimate is given at the end of this appendix. The 1 s sky radiance integration time ensures that the retrieved quantities correspond to a specific column of cloud above the instrument.

The Belfort Laser Ceilometer (BLC) [92, 98] detects clouds by transmitting pulses of infrared light (\\(\\lambda = 910\\) nm) vertically into the atmosphere, with a pulse repetition frequency \\(f_{r} = 976.6\\) Hz, and analyzing the signals backscattered from the atmosphere. The ceilometer actively collects backscattered photons for about 5 seconds within every 30-second measurement period. The BLC is able to measure the base height of the lowest cloud from 15 m up to 7350 m directly above mean ground level. The ceilometer works with a 15 m spatial resolution and reaches a maximum measurable height of 4 km; the time resolution of the cloud base height (CBH) records is 30 seconds.
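For illustration, the advection-time figure quoted above can be checked with a back-of-the-envelope Python sketch. The cloud height (2 km), wind speed (10 m s\\({}^{-1}\\)) and field of view (about 5\\({}^{\\circ}\\)) come from the text; the flat small-angle footprint geometry and all names are assumptions made here, not part of the instrument documentation.

```python
# Back-of-the-envelope check of the advection time quoted in Appendix B:
# how long a cloud layer takes to cross the radiometer field of view.
import math

def transit_time_s(cloud_height_m: float, wind_speed_ms: float,
                   fov_deg: float) -> float:
    """Time (s) for a cloud to advect through the instrument footprint,
    approximating the footprint width as height * FOV in radians (small angle)."""
    footprint_m = cloud_height_m * math.radians(fov_deg)
    return footprint_m / wind_speed_ms

# Numbers from the text: 2 km cloud altitude, 10 m/s wind, ~5 degree field of view.
t = transit_time_s(cloud_height_m=2000.0, wind_speed_ms=10.0, fov_deg=5.0)
print(f"transit time ~ {t:.0f} s")  # ~17 s, the same order as the ~15 s quoted
```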
## References

* [1] http://deutsche-boerse.com/app/open/xelsius
* [2] S. Lovejoy, Science 216 (1982) 185.
* [3] F.S. Rys and A. Waldvogel, Phys. Rev. Lett. 56 (1986) 784.
* [4] T.C. Benner and J.A. Curry, J. Geophys. Res. 103 (1998) 28753.
* [5] R.F. Cahalan, D.A. Short, G.R. North, Mon. Weather Rev. 110 (1982) 26.
* [6] R.A.J. Neggers, H.J.J. Jonker, A.P. Siebesma, J. Atmosph. Sci. 60 (2003) 1060.
* [7] S.M.A. Rodts, P.G. Duynkerke, H.J.J. Jonker, J.J. Ham, J. Atmosph. Sci. 60 (2003) 1895.
* [8] J. Stark and K. Hardy, Science 301 (2003) 1192.
* [9] F. Molteni, R. Buizza, T.N. Palmer, T. Petroliagis, Q. J. R. Meteorol. Soc. 122 (1996) 73.
* [10] E.S. Epstein, Tellus 21 (1969) 739.
* [11] http://www.ecmwf.int/newsevents/training/rcourse_notes/index.html
* [12] H.E. Stanley, _Phase Transitions and Critical Phenomena_ (Oxford Univ. Press, Oxford, 1971).
* [13] D. Stauffer and A. Aharony, _Introduction to Percolation Theory_ (Taylor & Francis, London, 1992).
* [14] M. Monmonier, _Air Apparent: How Meteorologists Learned to Map, Predict, and Dramatize Weather_ (Univ. of Chicago Press, Chicago, 1999).
* [15] http://www.maa.org/features/mathchat/mathchat_4_20_00.html
* [16] R.E. Huschke (Ed.), _Glossary of Meteorology_ (Am. Meteorol. Soc., Boston, 1959).
* [17] T.N. Palmer, Rep. Prog. Phys. 63 (2000) 71.
* [18] S.G. Philander, Rep. Prog. Phys. 62 (1999) 123.
* [19] S. Lovejoy, D. Schertzer, Ann. Geophys. B 4 (1986) 401.
* [20] R.A. Anthes, H.A. Panofsky, J.J. Cahir, A. Rango, _The Atmosphere_ (Bell & Howell, Columbus, OH, 1975).
* [21] D.G. Andrews, _An Introduction to Atmospheric Physics_ (Cambridge University Press, Cambridge, 2000).
* [22] R.R. Rogers, _A Short Course in Cloud Physics_ (Pergamon Press, New York, 1976).
* [23] B.A. Wielicki, R.D. Cess, M.D. King, D.A. Randall, E.F. Harrison, Bull. Amer. Meteor. Soc. 76 (1995) 2125.
* [24] K. Hasselmann, in _The Science of Disasters_, A. Bunde, J. Kropp, H.J. Schellnhuber, Eds. (Springer, Berlin, 2002) p. 141.
* [25] http://stommel.tamu.edu/~baum/climate_modeling.html
* [26] R. Benzi, A. Sutera, A. Vulpiani, J. Phys. A 14 (1981) L453.
* [27] L.D. Landau and E.M. Lifshitz, _Fluid Mechanics_ (Addison-Wesley, Reading, MA, 1959).
* [28] M. Sorel, Ann. Sci. ENS, Ser. 1, 4 (1867) 255.
* [29] A. Pasini, V. Pelino, Phys. Lett. A 275 (2000) 435.
* [30] E.N. Lorenz, J. Atmos. Sci. 20 (1963) 130.
* [31] http://astronomy.swin.edu.au/~pbourke/fractals/lorenz/; http://www.wam.umd.edu/~petersd/lorenz.html
* [32] J.B. Ramsey and Z. Zhang, in _Predictability of Complex Dynamical Systems_ (Springer, Berlin, 1996) p. 189.
* [33] A. Maurellis, Phys. World 14 (2001) 22.
* [34] D. Rosenfeld, W. Woodley, Phys. World 14 (2001) 33.
* [35] H.-W. Ou, J. Climate 14 (2001) 2976.
* [36] N.D. Marsh, H. Svensmark, Phys. Rev. Lett. 85 (2000) 5004.
* [37] H. Svensmark, Phys. Rev. Lett. 81 (1998) 5027.
* [38] J.R. Garratt, _The Atmospheric Boundary Layer_ (Cambridge University Press, Cambridge, 1992).
* [39] H.A. Panofsky and J.A. Dutton, _Atmospheric Turbulence_ (John Wiley & Sons, New York, 1983).
* [40] R.F. Cahalan and J.H. Joseph, Mon. Weather Rev. 117 (1989) 261.
* [41] A.G. Driedonks and P.G. Duynkerke, Bound. Layer Meteor. 46 (1989) 257.
* [42] A.P. Siebesma and H.J.J. Jonker, Phys. Rev. Lett. 85 (2000) 214.
* [43] N. Kitova, Ph.D. thesis, University of Liege, unpublished.
* [44] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 53 (1996) 1538.
* [45] A. Marshak, A. Davis, W. Wiscombe, R. Cahalan, J. Atmos. Sci. 54 (1997) 1423.
* [46] D.L. Turcotte, Rep. Prog. Phys. 62 (1999) 1377; D.L. Turcotte, _Fractals and Chaos in Geology and Geophysics_ (Cambridge University Press, Cambridge, 1997).
* [47] S. Corti, F. Molteni, T.N. Palmer, Nature 398 (1999) 799.
* [48] O. Karner, J. Geophys. Res. 107 (2002) 4415.
* [49] A. Davis, A. Marshak, W.J. Wiscombe, R.F. Cahalan, in _Current Topics in Nonstationary Analysis_, Eds. G. Trevino, J. Hardin, B. Douglas, and E. Andreas (World Scientific, Singapore, 1996) pp. 97-158.
* [50] Th. Schreiber, Phys. Rep. 308 (1999) 1.
* [51] P.J. Brockwell and R.A. Davis, _Time Series: Theory and Methods_ (Springer-Verlag, Berlin, 1991).
* [52] K. Fraedrich, R. Blender, Phys. Rev. Lett. 90 (2003) 108501.
* [53] E. Koscielny-Bunde, A. Bunde, S. Havlin, H.E. Roman, Y. Goldreich, H.-J. Schellnhuber, Phys. Rev. Lett. 81 (1998) 729.
* [54] E. Koscielny-Bunde, A. Bunde, S. Havlin, Y. Goldreich, Physica A 231 (1996) 393.
* [55] K. Ivanova, M. Ausloos, E.E. Clothiaux, T.P. Ackerman, Europhys. Lett. 52 (2000) 40.
* [56] A.A. Tsonis, P.J. Roebber and J.B. Elsner, Geophys. Res. Lett. 25 (1998) 2821.
* [57] A.A. Tsonis, P.J. Roebber and J.B. Elsner, J. Climate 12 (1999) 1534.
* [58] P. Talkner and R.O. Weber, Phys. Rev. E 62 (2000) 150.
* [59] K. Ivanova, M. Ausloos, Physica A 274 (1999) 349.
* [60] K. Ivanova, T.P. Ackerman, E.E. Clothiaux, P.Ch. Ivanov, H.E. Stanley, M. Ausloos, J. Geophys. Res. 108 (2003) 4268.
* [61] N. Decoster, S.G. Roux, A. Arneodo, Eur. Phys. J. B 15 (2000) 739; S.G. Roux, A. Arneodo, N. Decoster, Eur. Phys. J. B 15 (2000) 765.
* [62] A. Davis, A. Marshak, W. Wiscombe, R. Cahalan, J. Geophys. Res. 99 (1994) 8055.
* [63] A. Davis, A. Marshak, H. Gerber, W.J. Wiscombe, J. Geophys. Res. 104 (1999) 6123.
* [64] N. Kitova, K. Ivanova, M. Ausloos, T.P. Ackerman, M.A. Mikhalev, Int. J. Modern Phys. C 13 (2002) 217.
* [65] K. Ivanova, H.N. Shirer, E.E. Clothiaux, N. Kitova, M.A. Mikhalev, T.P. Ackerman, M. Ausloos, Physica A 308 (2002) 518.
* [66] N. Kitova, K. Ivanova, M.A. Mikhalev, M. Ausloos, in _From Quanta to Societies_, W. Klonowski, Ed. (Pabst, Lengerich, 2002) p. 263.
* [67] K. Ivanova, T. Ackerman, Phys. Rev. E 59 (1999) 2778.
* [68] C.R. Neto, A. Zanandrea, F.M. Ramos, R.R. Rosa, M.J.A. Bolzan, L.D.A. Sa, Physica A 295 (2001) 215.
* [69] H.F.C. Velho, R.R. Rosa, F.M. Ramos, R.A. Pielke, C.A. Degrazia, C.R. Neto, A. Zanandrea, Physica A 295 (2001) 219.
* [70] B.D. Malamud, D.L. Turcotte, J. Stat. Plann. Infer. 80 (1999) 173.
* [71] K. Ivanova, M. Ausloos, A.B. Davis, T.P. Ackerman, Physica A 272 (1999) 269.
* [72] M.S. Santhanam, P.K. Patra, Phys. Rev. E 64 (2001) 016102.
* [73] M.J. O'Connel, Comp. Phys. Comm. 8 (1974) 49.
* [74] J. Geophys. Res. Atmosph. 107 (2002) 4708.
* [75] M. Ausloos and K. Ivanova, Phys. Rev. E 63 (2001) 047201.
* [76] J.I. Salisbury, M. Wimbush, Nonlin. Process. Geophys. 9 (2002) 341.
* [77] K.R. Sreenivasan, Ann. Rev. Fluid Mech. 23 (1991) 539.
* [78] C.S. Kiang, D. Stauffer, G.H. Walker, O.P. Puri, J.D. Wise, Jr., and E.M. Patterson, J. Atmos. Sci. 28 (1971) 1222.
* [79] H. Gerber, J.B. Jensen, A. Davis, A. Marshak, W.J. Wiscombe, J. Atmos. Sci. 58 (2001) 497.
* [80] J.C. Liljegren, B.M. Lesht, IEEE Int. Geosci. and Remote Sensing Symp. 3 (1996) 1675.
* [81] K. Ivanova, E.E. Clothiaux, H.N. Shirer, T.P. Ackerman, J. Liljegren, M. Ausloos, J. Appl. Meteor. 41 (2002) 56.
* [82] S.S. Seker and O. Cerezci, J. Phys. D: Appl. Phys. 32 (1999) 552.
* [83] J.D. Pelletier, Earth Planet. Sci. Lett. 158 (1998) 157.
* [84] R.A. Monetti, S. Havlin, A. Bunde, Physica A 320 (2003) 581.
* [85] D. Schertzer, S. Lovejoy, J. Geophys. Res. 92 (1987) 9693.
* [86] S.T.R. Pinho, R.F.S. Andrade, Physica A 255 (1998) 483.
* [87] R.F.S. Andrade, Braz. J. Phys. 33 (2003) 437.
* [88] J.G.V. Miranda, R.F.S. Andrade, Physica A 295 (2001) 38; Theor. Appl. Climatol. 63 (1999) 79.
* [89] Y. Tessier, S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 32 (1993) 223.
* [90] D. Schertzer, S. Lovejoy, J. Appl. Meteorol. 36 (1997) 1296.
* [91] S. Lovejoy, D. Schertzer, J. Appl. Meteorol. 29 (1990) 1167.
* [92] C.S. Bretherton, E. Klinker, J. Coakley, A.K. Betts, J. Atmos. Sci. 52 (1995) 2736.
* [93] E.R. Westwater, Radio Science 13 (1978) 677.
* [94] E.R. Westwater, in _Atmospheric Remote Sensing by Microwave Radiometry_, M.A. Janssen, Ed. (John Wiley and Sons, New York, 1993) pp. 145-213.
* [95] W.G. Rees, _Physical Principles of Remote Sensing_ (Cambridge University Press, Cambridge, 1990).
* [96] G.M. Stokes, S.E. Schwartz, Bull. Am. Meteorol. Soc. 75 (1994) 1201.
* [97] http://www.arm.gov
* [98] http://www.arm.gov/docs/instruments/static/blc.html